CQRLOG multicast wsjtx

NO2Y
CQRLOG multicast wsjtx

I'm trying to write a Linux app similar to JT Alert in Python.

I've coded my rudimentary app to use multicast UDP. I've set WSJTX up to use multicast. They work together. However, cqrlog seems to work intermittently and never completely. I set it up with 224.1.1.1 as the wsjtx address and kept the port the same.

Sometimes cqrlog receives some packets in the wsjtx monitor, but not all of them. When I log a contact in wsjtx with both cqrlog and my program running, cqrlog doesn't get the information, even though that is sent unicast to a different port (I enabled the N1MM logger broadcast in wsjtx).
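For reference, here is a minimal Python sketch of a multicast receiver of that kind (the group and port are just example values; use whatever is configured in WSJT-X). The SO_REUSEADDR/SO_REUSEPORT options are what let several listeners share the same port:

import socket
import struct

MCAST_GRP = "224.1.1.1"   # example: the multicast group set in WSJT-X
MCAST_PORT = 2237         # example: the WSJT-X "UDP Server" port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
# Let several listeners (this app, cqrlog, ...) share the same port.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
if hasattr(socket, "SO_REUSEPORT"):
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
sock.bind(("", MCAST_PORT))

# Join the multicast group on all interfaces.
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(65535)
    print(len(data), "bytes from", addr)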

Any ideas? Thanks

oh1kh
CQRLOG multicast wsjtx

Hi!
Perhaps because multicast is not properly set up.
See this http://synapse.ararat.cz/doc/help/blcksock.TUDPBlockSocket.html (AddMulticast)
and make the needed changes to fNewQSO.pas (around line 6770, TfrmNewQSO.GoToRemoteMode, case rmtWsjt).

--
Saku
OH1KH

oh1kh
CQRLOG multicast wsjtx

" When I log a contact in wsjtx with both cqrlog and my program running, cqrlog doesn't get the information... Even though that's sent unicast to a different port (enabled nvmm logger in wsjtx)"

If your program uses the "UDP server" (port 2232 with the multicast IP) and the "secondary UDP server" is set to 127.0.0.1 port 2333, both in the wsjt-x settings, logging should work as long as you use "N1MM remote" with cqrlog. You do not get cqrlog's CQ monitor then; the QSO is just logged when you press "OK" in the wsjt-x logging window.
There is no way to see it if you do not have "preferences/NewQSO/Refresh data after save qso" and "preferences/NewQSO/Show recent QSO records for last..." checked.

Use a tool to verify that wsjt-x sends both the multicast and the localhost UDP packets when logging. The simplest one for that is the command-line program "tcpdump".
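If tcpdump is not handy, a small Python listener can do a similar check. This is only a sketch and assumes the settings mentioned above (multicast 224.1.1.1 on UDP server port 2232, and 127.0.0.1:2333 for the secondary/N1MM one); run it with cqrlog closed so the ports are free:

import selectors
import socket
import struct

MCAST_GRP, MCAST_PORT = "224.1.1.1", 2232   # WSJT-X "UDP Server" (example values)
N1MM_ADDR, N1MM_PORT = "127.0.0.1", 2333    # WSJT-X "secondary UDP server" / N1MM port

def mcast_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s

def n1mm_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((N1MM_ADDR, N1MM_PORT))
    return s

sel = selectors.DefaultSelector()
sel.register(mcast_socket(), selectors.EVENT_READ, "multicast")
sel.register(n1mm_socket(), selectors.EVENT_READ, "n1mm")

print("Waiting for WSJT-X packets, Ctrl-C to stop...")
while True:
    for key, _ in sel.select():
        data, addr = key.fileobj.recvfrom(65535)
        print(key.data, ":", len(data), "bytes from", addr)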

--
Saku
OH1KH

oh1kh
CQRLOG multicast wsjtx

Hi!
Just one question more:

For what reason are you writing a JTAlert-like program? (Apart from the fact that programming is fun!)

Cqrlog's wsjt-x remote, when the CQ-monitor window is opened from window/wsjtx monitor (wsjt-x remote must be on to see the selection),
does all the alerting, compares against worked calls and locators, lets you initiate a wanted QSO with a double click, etc.

I have never used JTAlert and do not know all the features it has, but I think the CQ-monitor has everything that is really needed.
Hmm... it does not fetch the state from the web for ALL US callsigns decoded during a period, just for the one that a QSO is started with.

At least it has been enough for me.

--
Saku
OH1KH

NO2Y
It's because of the states. I

It's because of the states. I want it to display the states and also alert me if I need that state on that band/mode.

Thanks for the info

oh1kh
It's because of the states

Yep!
Guessed that!

I removed the routine that uses cqrlog's dxcc database for that, because it uses callsign prefixes to guess the state and that no longer works like it did in the old days. Same here in Finland: the prefix number no longer tells the station's location.

Is there any possibility to fetch that information as a text file from the FCC database? Or is a qrz/HamQTH web fetch the only possibility?

The reason I have not done a qrz/HamQTH fetch for every decoded US callsign is that I think there is not enough time between decode periods to do it. There are just a few seconds to make all the information visible for the user's decision, and most of that time goes to the operator's reflexes: making up his mind and double-clicking to start the QSO.
It also increases web traffic a lot, because you cannot rely on information fetched once; it has to be fetched again every time, or at least on a daily basis.

And on the other hand I have very poor testing conditions here: not many US callsigns on the bands. Testing should be done on the low bands (80-40-30 m) in the middle of the USA, where every decode period brings enough calls, to get a view of how much more time it would need.

If you have good ideas on how to make it reasonably fast with some kind of daily buffering, please tell.
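One possible shape for the daily buffering, as a rough sketch only: a small on-disk cache keyed by callsign that goes to the web at most once a day per call. The lookup_state_from_web argument is just a placeholder for whatever qrz/HamQTH query would actually be used, and the cache file name is an example:

import json
import os
import time

CACHE_FILE = "state_cache.json"   # example file name
MAX_AGE = 24 * 3600               # refresh entries once a day

def load_cache():
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            return json.load(f)
    return {}

def save_cache(cache):
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f)

def get_state(call, cache, lookup_state_from_web):
    # Return the cached state for a call, hitting the web at most once a day.
    entry = cache.get(call)
    if entry and time.time() - entry["time"] < MAX_AGE:
        return entry["state"]
    state = lookup_state_from_web(call)   # placeholder for a qrz/HamQTH lookup
    cache[call] = {"state": state, "time": time.time()}
    return state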

--
Saku
OH1KH

NO2Y
You can find the FCC database

You can find the FCC database here: http://wireless.fcc.gov/uls/data/complete/l_amat.zip

But the problem is that it contains everyone who has ever had a license. If someone gets someone else's callsign after they die, they'll both be in the database. It is ordered from oldest to newest, though, so you have to read the file backwards and discard any duplicates.

The way I did it in Python3 is found here: https://pastebin.com/mkQqAMmu

Basically, I made a folder called FCC_database and put EN.dat inside it, along with this program, file_read_backwards.py and buffer_workspace_py from the filereadbackwards package. From the directory above FCC_database, I ran python3 -m FCC_database.dataparse.py. It will throw an error about no __path__, but that happens when it is closing, after it has already worked.

Then, you have a CSV file called calldata.txt with the current owners of the callsigns in the pattern:

callsign, lastname, first and middle initial, street address, city, state, zip code
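A stripped-down sketch of the same idea, without the file_read_backwards dependency and with only callsign and state kept for brevity. It assumes the callsign is in pipe-delimited column 5 and the state in column 18 of EN.dat; check the FCC field definitions before trusting more columns:

def newest_call_states(path="FCC_database/EN.dat"):
    result = {}
    with open(path, errors="replace") as f:
        lines = f.readlines()          # the whole file fits in memory on a modern PC
    for line in reversed(lines):       # newest records are at the end of the file
        fields = line.rstrip("\n").split("|")
        if len(fields) < 18 or not fields[4]:
            continue
        call, state = fields[4], fields[17]
        if call not in result:         # first hit while reading backwards = newest
            result[call] = state
    return result

if __name__ == "__main__":
    for call, state in newest_call_states().items():
        print(call + "," + state)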

oh1kh
HI!

HI!

The file EN.dat can very easily be converted to this form with awk:
KB3PFV=PA
AB3FM=PA
AI4YU=KY
W1RVP=OR
KR1MRW=CA
KN3ICK=CA

But even after the dupe check the size of the file is still over 13 MB. Options to handle this are an SQL table or an ini-type file (like the Xcqrlog.ini files) that is loaded into memory at runtime.
I think modern PCs can handle this quite well, so I'm going to test first with a memory-loaded ini-type file.

The order from oldest to newest is good: reading from the source and writing to the database or ini file, overwriting existing callsigns, leaves the latest (the newest) entry in use.
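The overwrite idea in a few lines of Python, as a sketch, assuming a call=state text file like the lines above (the file name is just an example):

states = {}
with open("fcc_state.tab") as f:
    for line in f:
        call, sep, state = line.strip().partition("=")
        if sep:
            states[call] = state   # duplicate call: the newer line simply wins

print(states.get("KB3PFV", "not found"))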

Let's see how it works.

--
Saku
OH1KH

NO2Y
If you want to use that file

If you want to use that file to populate the fields in cqrlog with name, address, city, state etc., you can. I captured all that data, along with the callsign's grant and expiration dates (from HD.dat), and put it in a SQLite database. You have to watch out because, if a ham club holds a callsign, the whole club name ends up in one field (the first name, I believe?).
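The SQLite side could look roughly like this. It is only a sketch: the grant/expired column positions in HD.dat (8 and 9) are from memory of the ULS layout and should be checked against the FCC record definitions, and the database name is just an example:

import sqlite3

con = sqlite3.connect("fcc.db")   # example database name
con.execute("""CREATE TABLE IF NOT EXISTS licenses (
                   call TEXT PRIMARY KEY,
                   state TEXT,
                   granted TEXT,
                   expires TEXT)""")

# EN.dat: callsign in column 5, state in column 18 (pipe-delimited).
with open("FCC_database/EN.dat", errors="replace") as f:
    for line in f:
        fields = line.split("|")
        if len(fields) >= 18 and fields[4]:
            # INSERT OR REPLACE keeps the newest record for each call.
            con.execute("INSERT OR REPLACE INTO licenses (call, state) VALUES (?, ?)",
                        (fields[4], fields[17]))

# HD.dat: callsign in column 5; grant and expired dates assumed in columns 8 and 9.
with open("FCC_database/HD.dat", errors="replace") as f:
    for line in f:
        fields = line.split("|")
        if len(fields) >= 9 and fields[4]:
            con.execute("UPDATE licenses SET granted = ?, expires = ? WHERE call = ?",
                        (fields[7], fields[8], fields[4]))

con.commit()
con.close()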

oh1kh
If you want to use that file

HI!

No, I'm just interested in getting the right states into the CQ-monitor, because that has been the main problem for a long time.
I just made a first version using a stringlist that is loaded from an externally created call=state pair file.
Worked!

There is still lots to do: get the zip downloaded, EN.dat converted to the states.tab that is used to load the stringlist,
and then get the state printed to the monitor either in the "never worked before on this band" color or in black (worked before).

All the other logging data for worked QSOs can be found from qrz or HamQTH.
Only the state for every decoded US call is the critical item that I do not want to get from the web (too slow).

--
Saku
OH1KH

oh1kh
You can find the FCC database

Hi!

How fast is your Python code?
I have now played a bit with the data. Downloading it from here in the evening (US daytime) takes about 7 minutes and uses around a quarter of the maximum speed my ISP offers, so the delay must come from the overseas connection.

My bash script, which I later found to be buggy (it removed too many lines), takes around 45 minutes to remove the duplicates.
I'm just running a test with a MySQL table insert and it seems to take even more than 45 minutes (not finished yet).

The fastest way to do this is:
tac EN.dat | awk -F"|" '{print$5"="$18}' > fcc_state.tab

tac reads the file backwards and awk picks the 5th and 18th columns, making "call=state" pairs. That takes just a few seconds to complete.

When the resulting fcc_state.tab is loaded into a cqrlog stringlist ("backwards" as it is), a seek for a callsign will find the latest (newest) entry first and stop there, so no duplicate removal should be needed.
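The lookup side of that, as a quick Python sketch (the file is newest-first because of tac, so the first match is the current state):

def state_for(call, path="fcc_state.tab"):
    with open(path) as f:
        for line in f:
            c, sep, state = line.rstrip("\n").partition("=")
            if sep and c == call:
                return state           # first hit = newest record
    return None

print(state_for("AI4YU"))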

I now have a version to test, https://github.com/OH1KH/cqrlog/tree/states, but no US stations: conditions are poor and I heard only around 5 US callsigns calling CQ during the evening.
The CQ-monitor uses the same color coding for states, printed like "USA-MA", as it uses for callsigns. The seek start date can be limited with the same "WB4 seek starts from /call" checkbox as for callsigns (preferences/fldigi/wsjt interface). This should be nice for contest use.
Alerting can be done (for one state at a time) with a "text alert" using a "USA-state_needed" string.

--
Saku
OH1KH

NO2Y
Yes... it has to be from

Yes... it has to be from overseas. I have a 100MB fiber optic connection and get it within a minute.

My whole program (flipping the files around backwards, creating the database, populating the data, then coming back and filling in the license granted and expired columns) is done in about 10 seconds.

Here is my code: https://pastebin.com/mPShdc0h

Of course, it does more than your code: getting the mailing address, the dates the license was granted and expires, etc., and putting them in the database. But if yours is taking so long, this may help you figure out why.

oh1kh
Yes... it has to be from

Hi!
OK. I now have a version in my Git that does everything except getting the zip (wget) and unzipping EN.dat (unzip).

Works quite fast, under 5 seconds.

My idea was to collect the record number (field #2), call and state, then sort everything by callsign. That way all records for a call sit side by side and I just have to select the one with the biggest record number. That is done with a delayed write: as long as the call being read stays the same, the state is swapped in whenever the record number is bigger; when the callsign changes, the highest-numbered call=state record of the previous call is written out.
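A rough Python rendering of that delayed-write pass, assuming the input has already been reduced to (record number, call, state) triples and sorted by callsign; the names and sample data are just for illustration:

def best_per_call(records):
    # Records are sorted by callsign; keep the highest record number per call
    # and flush ("delayed write") only when the callsign changes.
    best_num, best_call, best_state = None, None, None
    for num, call, state in records:
        if call != best_call:
            if best_call is not None:
                yield best_call, best_state      # callsign changed: write previous
            best_num, best_call, best_state = num, call, state
        elif num > best_num:                     # same call, newer record wins
            best_num, best_state = num, state
    if best_call is not None:
        yield best_call, best_state

sample = [(10, "AB3FM", "PA"), (12, "AB3FM", "PA"),
          (7, "AI4YU", ""), (15, "AI4YU", "KY")]
print(list(best_per_call(sorted(sample, key=lambda r: r[1]))))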

I found out that, besides duplicate state entries for a call, there are also entries without a state, and even entries without a callsign. So besides writing fcc_states.tab the program will also write fcc_rejected and fcc_duplicate files.
Random checks suggest it works OK (no time or will to check them all manually :-)

--
Saku
OH1KH

oh1kh
You can find the FCC database

Thanks!

I am looking forward to seeing whether this could be the solution.
I have already made a new git branch with just comments on what to do in the various places to get states to show up again in the correct way.

That is a good start in building new version :-)

However, I must first solve a problem that came with the latest Fedora 30 updates: I lost the functionality of udev symlinking. Sometimes it works, but even then rigctld cannot access the linked port. Something weird happened.

--
Saku
OH1KH