Syncing Log-data between two or more machines

do2hg
Syncing Log-data between two or more machines

Hi there,

I have two machines running cqrlog: one in my shack, and a notebook for portable operation.
When using the notebook I often do not have internet access.

So I am wondering: what is the easiest way to keep the databases on both machines in sync?

Currently I'm moving ADIF files back and forth, but that is not really comfortable.
Before going on summer vacation I'll probably copy the whole cqrlog tree, but since some settings differ between the machines (TRX, ROT, log upload etc.) this is also not ideal.

Has anyone come up with a nice and smooth solution for this yet?

cheers & 73
Stefan.

oh1kh
Syncing Log-data between two or more machines

Hi Stefan!

I have the solution you are looking for. Unfortunately it is not easy if you are not familiar with Linux.
However, let me tell you how it works.

At home I have a server machine that runs a MySQL server with the cqrlog database.
I can connect to that database from the local WiFi and also from the internet. On the internet side I use OpenVPN tunneling, either to connect to the cqrlog database directly with a travel laptop that has cqrlog installed, or to log QSOs to that database from any device with a web browser (my smartphone, tablet PC etc.) by connecting to the server's web server, which runs CqrWebLog ( https://github.com/dl8bh/cqrweblog ).

The ham shack laptop also has its own MySQL server running. I do not have "save log data to local machine" checked, yet I still save data to the local machine, via the standard MySQL port, localhost:3306.

Now we come to the interesting part.
I can log QSOs to the server's database from remote locations, and when I come home and start the ham shack laptop, the two MySQL databases synchronize themselves automatically as soon as the laptop is turned on.
The other way round: I could take the ham shack laptop with me when I travel, log the QSOs there, and when I come back home and my WiFi connects the laptop, the laptop and server databases will synchronize automatically.

Both MySQL servers are MariaDB.

So: you need to use the "real" MySQL server of your computer instead of "save data to local machine".
When you install cqrlog you already have a "real" database server running on port 3306 of localhost as standard; just start to use it.
When both computers log that way, you then have to set up the synchronization between them.
There are many help pages to be found with Google using search terms like:
mysql two way replication
or similar.
Replication can be one-way, but in this case two-way replication is more useful.

Replication is a standard feature of MySQL (and MariaDB) that is commonly used by big companies with offices around the world. Each office runs a database server of its own, and the databases replicate themselves with the others, either immediately or on a timed schedule.

You first need to learn how to use your PC's database server on localhost port 3306 for cqrlog, and after that how to set up two-way replication between the databases.

--
Saku
OH1KH

do2hg
Hi Saku,

Let me see if I get you right:
* "save data to local machine" logs only to ~/.config/cqrlog, but does *not* use the mariadb server, despite its mandatory installation under Ubuntu.
* if I uncheck that box, mariadb is used on port 3306, but *no* local log is written to ~/.config/cqrlog

If I recall correctly, the feature has to be enabled or disabled on first run in the database connection dialog (open or create new log ...). Or can I switch it on later somehow?

So I'd need a clean, empty cqrlog config and would have to import my whole log as .adif. However, when doing this in the past I was losing data in some fields, despite the settings in the export preferences. That still needs some sorting out [e.g. I was entering contest numbers in the RST fields separated by a space (e.g. "59 123"), and then the contest number did not get re-imported].

But first, as a prerequisite, I'd need to convert both machines to use mariadb for the logging and import all the log data. Not fully comfortable with that yet ... well, and the rest would have to follow from there. Setting up the DB syncing would be the final step.

Not 100% sure about switching to mariadb; I haven't worked with MySQL databases for 10 years ... though the idea of the syncing sounds cool.
Can you recommend a GUI tool to inspect mariadb data?

Thanks & 73
Stefan.

--
Stefan
DB4ST (ex DO2HG) --- German Ham Radio Station
D-32584 Löhne ----- Locator JO42IE

oh1kh
Syncing Log-data between two or more machines

Hi Stefan!

Here is a copy of my notes about replication. The text file was originally written in 8/2015 and last modified in 3/2018.

[some translations inside brackets]

[saku@hamtpad database]$ cat Master-slave_db_setup.txt

Serveri [Server]

my.cnf
------

log-bin
binlog-do-db=cqrlog_common # input the database which should be replicated
binlog-do-db=cqrlog001
binlog-do-db=cqrlog002
binlog-do-db=cqrlog003
binlog-do-db=cqrlog004
binlog-ignore-db=mysql # input the database that should be ignored for replication
binlog-ignore-db=test
binlog-ignore-db=mittaus
binlog-ignore-db=maraton
binlog-ignore-db=zmmy
binlog-ignore-db=ais
binlog-ignore-db=sata
binlog-ignore-db=cqrlog_web

server-id=1

Konsoli: [At console]

mysql> grant replication slave on *.* to 'replication'@'%' identified by 'slave';
mysql> flush privileges;
mysql> flush tables with read lock;
mysql> show master status;
mysql> unlock tables;
mysql> show grants;
mysql> CHANGE MASTER TO MASTER_HOST='192.168.60.222', MASTER_USER='replication', MASTER_PASSWORD='slave2';
mysql> start slave;
mysql> show slave status\G;

Läppäri [Laptop]
Muuten samat, mutta... [Basically the same, but...]
MariaDB [(none)]> change master to master_host='192.168.60.1', master_user='replication', master_password='slave'

---------------FIX broken replication.--------------------------------------
+++molemmissa: [at both machines]
stop slave;
reset slave all;
reset master;

+++Oikeassa järjestyksessä olevassa koneessa: [at the machine that has the latest QSOs up to date]
mysqldump -ucqrlog -pmypasswd --databases cqrlog001 cqrlog_common > mysqlbackcqr.sql

+++Sitten siirto jäljessä olevaan: [then transfer to the machine that is behind with QSOs]
mysql -ucqrlog -pmypasswd < mysqlbackcqr.sql

+++serveri:[server] CHANGE MASTER TO MASTER_HOST='192.168.60.222', MASTER_USER='replication', MASTER_PASSWORD='slave2';
+++läppäri:[laptop] change master to master_host='192.168.60.1', master_user='replication', master_password='slave';

+++molemmissa: [both machines]
start slave;
show slave status\G;
(no errors-> testaa luomalla qso ja tarkista että tuli toiseen. Poista ja tarkista että häipyi toisesta,
sama toisinkin päin.) [no errors -> test by logging a QSO and check that the other DB received it. Then remove the QSO at the other DB and check that it disappeared from the first one, and the same the other way round.]
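One note on the CHANGE MASTER commands in the notes above: as written they do not pin a binlog position, so the slave starts reading from the beginning of the master's binary log. Many replication guides instead take the coordinates reported by SHOW MASTER STATUS (while the tables are still locked) and pass them explicitly; a sketch, where the file name and position are example values only:

```sql
-- On the master, while "flush tables with read lock" is in effect:
SHOW MASTER STATUS;   -- note File (e.g. mysqld-bin.000007) and Position (e.g. 343)

-- On the slave, point at exactly that spot before starting:
CHANGE MASTER TO
  MASTER_HOST='192.168.60.1',
  MASTER_USER='replication',
  MASTER_PASSWORD='slave',
  MASTER_LOG_FILE='mysqld-bin.000007',
  MASTER_LOG_POS=343;
START SLAVE;
```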

I have also written some scripts to reset the replication if it fails. Since I upgraded through the last few versions of Fedora, the replication has become very stable and now fails very seldom (nearly never).

The first one is run first on both machines when starting a sync reset:

[saku@hamtpad database]$ cat fix_sync1.sh
#!/bin/bash
if [[ $EUID -ne 0 ]]; then
    echo "You must be a root user" >&2
    exit 1
else
    mysql -e "stop slave"
    mysql -e "reset slave all"
    mysql -e "reset master"

    ##mysqldump -ucqrlog -pmypasswd --databases cqrlog001 cqrlog_common > /tmp/mysqlbackcqrL.sql
    # Build a mysqldump command covering every database whose name starts with "cqr"
    mysql -ucqrlog -pmypasswd -B -N -e 'show databases like "cqr%"' | xargs echo -n mysqldump -ucqrlog -pmypasswd --databases > /tmp/mycmd.txt
    echo ' > /tmp/mysqlbackcqrL.sql' >> /tmp/mycmd.txt
    chmod a+x /tmp/mycmd.txt
    source /tmp/mycmd.txt
    rm -f /tmp/mycmd.txt
    echo "stop slave, reset slave all, reset master, mysqldump /tmp/mysqlbackcqrL.sql DONE !"
    rsync -avzu /tmp/mysqlbackcqrL.sql saku@192.168.60.1:/tmp/
    echo "Jos tässä oli oikea kanta niin palauta servossa komennolla"  # [if this DB was up to date, restore at the server with the command:]
    echo "mysql -ucqrlog -pmypasswd < /tmp/mysqlbackcqrL.sql"
fi
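The xargs trick in that script can be tried offline; the following sketch substitutes a fixed, made-up database list for the live "show databases" query, so you can see the one-liner it generates:

```shell
# Stand-in for:  mysql -ucqrlog -pmypasswd -B -N -e 'show databases like "cqr%"'
printf 'cqrlog001\ncqrlog_common\n' |
    xargs echo -n mysqldump -ucqrlog -pmypasswd --databases > /tmp/mycmd.txt
# Append the output redirection, exactly as the script does:
echo ' > /tmp/mysqlbackcqrL.sql' >> /tmp/mycmd.txt
# The generated command that "source" would then execute:
cat /tmp/mycmd.txt
# prints: mysqldump -ucqrlog -pmypasswd --databases cqrlog001 cqrlog_common > /tmp/mysqlbackcqrL.sql
```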

There are two copies of this script. The one at the other end is similar, but the filename is /tmp/mysqlbackcqrS.sql, the rsync IP address is different (.222), and the help text talks about the laptop, not the server.
In the filenames the letter L = laptop, S = server, meaning which database the dump was made from.

Then there is a second script, run next on the server side:

[saku@hamtpad database]$ cat fix_sync_serveri.sh
#!/bin/bash
if [[ $EUID -ne 0 ]]; then
    echo "You must be a root user" >&2
    exit 1
else
    mysql -e "CHANGE MASTER TO MASTER_HOST='192.168.60.222', MASTER_USER='replication', MASTER_PASSWORD='slave2'"
    mysql -e "start slave"
    mysql -e "show slave status\G"

    echo "Testaa kun ajat ensin myös lapparin fixin"  # [Test after you have also run the laptop fix_sync]
fi

And the corresponding script for the laptop side:

[saku@hamtpad database]$ cat fix_sync_lappari.sh
#!/bin/bash
if [[ $EUID -ne 0 ]]; then
    echo "You must be a root user" >&2
    exit 1
else
    mysql -e "change master to master_host='192.168.60.1', master_user='replication', master_password='slave'"
    mysql -e "start slave"
    mysql -e "show slave status\G"

    echo "Testaa kun ajat ensin myös serverin fixin"  # [Test after you have also run the server fix_sync]
fi

It does not matter in which order the server and laptop scripts are executed, as long as the common script is run first on both machines.

--
Saku
OH1KH

do2hg
Hi Saku,

... this really seems to be a bit tricky. It seems to work now, but a few things are not fully clear to me.

So far I did the following:
Added to /etc/mysql/conf.d/mysql.cnf
log-bin
binlog-do-db=cqrlog_common # input the database which should be replicated
binlog-do-db=cqrlog001
binlog-do-db=cqrlog002
binlog-ignore-db=mysql # input the database that should be ignored for replication
binlog-ignore-db=information_schema
binlog-ignore-db=performance_schema

server-id=1
... then restarted mariadb-server (on the laptop: server-id=2).

Next I entered the SQL commands at the console as specified; however, I named the user replicator, but that should not be a problem.
Question: does the server remember these? I'd think I need to put them somewhere into a config, or not?

When you write for the server

CHANGE MASTER TO MASTER_HOST='192.168.60.222', MASTER_USER='replication', MASTER_PASSWORD='slave2';

then 192.168.60.222 is the IP of the laptop, and 'slave2' the password for the replication user there, right?

Now my problem is that I get a "connection refused" at both ends. show slave status\G replies:
*************************** 1. row ***************************
Slave_IO_State: Connecting to master
Master_Host: laputopu
Master_User: replicator
Master_Port: 3306
Connect_Retry: 60
Master_Log_File:
Read_Master_Log_Pos: 4
Relay_Log_File: mysqld-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File:
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 0
Relay_Log_Space: 256
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 2003
Last_IO_Error: error connecting to master 'replicator@laputopu:3306' - retry-time: 60 maximum-retries: 86400 message: Can't connect to MySQL server on 'laputopu' (111 "Connection refused")
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: conservative
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
Slave_DDL_Groups: 0
Slave_Non_Transactional_Groups: 0
Slave_Transactional_Groups: 0

OK, so I have a connection problem. Adding bind-address=0.0.0.0 to /etc/mysql/conf.d/mysql.cnf did not change anything
(I know that bind-address=0.0.0.0 is potentially a risk, but for the moment I am only testing).

Only putting bind-address=0.0.0.0 into /etc/mysql/mariadb.conf.d/50-server.cnf resolved this. Then I get:

*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: laputopu
Master_User: replicator
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysqld-bin.000007
Read_Master_Log_Pos: 343
Relay_Log_File: mysqld-relay-bin.000010
Relay_Log_Pos: 556
Relay_Master_Log_File: mysqld-bin.000007
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 343
Relay_Log_Space: 866
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: conservative
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
Slave_DDL_Groups: 0
Slave_Non_Transactional_Groups: 0
Slave_Transactional_Groups: 0

And I can sync back and forth between the machines. - Very well.

However, I'm unhappy with the bind-address=0.0.0.0 statement in the config.
I'd rather have something that specifies 127.0.0.1 *AND* 192.168.2.0/24. I tried various ways to write this, but to no avail.
Multiple statements do not work, and neither do comma-separated addresses. Ranges with "0" or "0/24" or "*" also do not seem to work. Has anybody an idea for this? I googled quite a bit but did not find a working solution. I run mariadb-server-10.3 on Ubuntu 20.04 "Focal Fossa".
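In case it helps a later reader: on MariaDB 10.3 bind-address takes exactly one address (listening on a list of specific addresses only arrived in much later MariaDB releases), so the usual pattern is to bind to 0.0.0.0 and restrict access with the firewall plus the host part of each database account. A sketch, reusing the replicator user name and the 192.168.2.0/24 LAN from this thread ('my-secret' is a placeholder password):

```sql
-- Only connections from the stated network can use this account at all,
-- regardless of which address the server is bound to:
DROP USER IF EXISTS 'replicator'@'%';
CREATE USER 'replicator'@'192.168.2.%' IDENTIFIED BY 'my-secret';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'192.168.2.%';
FLUSH PRIVILEGES;
```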

Thanks a lot and 73!

Stefan.

--
Stefan
DB4ST (ex DO2HG) --- German Ham Radio Station
D-32584 Löhne ----- Locator JO42IE

oh1kh
Syncing Log-data between two or more machines

HI!

>When you write for the server
>
>CHANGE MASTER TO MASTER_HOST='192.168.60.222', MASTER_USER='replication', MASTER_PASSWORD='slave2';
>
>then 192.168.60.222 is the IP of the laptop, and 'slave2' the password for the replication user there, right?

Yes, you are right: those are the server's settings for how it sees the laptop.

My server has bind-address 0.0.0.0; it is the only way to get it to work both from localhost and from the network interface. ( https://serverfault.com/questions/139323/how-to-bind-mysql-server-to-mor... )

The firewall, and specifying the IP from which a MySQL user may connect, are the way to provide security.

>Question: Does the server remember these? I'd think I need to put that somewhere into a config, or not?

Open a console connection:
sudo mysql

Give:
show databases;

You should see a database named "mysql". That is the one where MySQL saves its config values.

use mysql;
show tables;
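The grants created earlier are rows in that database, so they survive restarts without any extra config file. Two quick ways to check (a sketch, using user names from this thread):

```sql
SELECT User, Host FROM mysql.user;   -- lists all accounts, including the replication user
SHOW GRANTS FOR 'replicator'@'%';    -- shows the privileges of one account
```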

--
Saku
OH1KH

do2hg
*** all working now ! ***

Hi Saku,
it seems that I got everything working now, thanks to your help!
This is very much appreciated!

I hope that documenting everything here will be a help to others too, although the topic has become a bit garbled in terms of legibility.
73 and all the best.

Stefan.

--
Stefan
DB4ST (ex DO2HG) --- German Ham Radio Station
D-32584 Löhne ----- Locator JO42IE

oh1kh
*** all working now ! ***

Hi Stefan!

Nice to hear!
You will see that it is a very handy way to keep two or more machines in sync.
Enjoy !

--
Saku
OH1KH

do2hg
Saku,

... OK, the "two or more" statement is interesting.
Say, if you have three machines, would you enable syncing "all to all", i.e.

  • A to B
  • B to C, and
  • C to A

or just pick one as master (say A, as it runs 24/7 in the shack anyway) and do only A to B and A to C?
Cheers and 73
Stefan.

--
Stefan
DB4ST (ex DO2HG) --- German Ham Radio Station
D-32584 Löhne ----- Locator JO42IE

oh1kh
*** all working now ! ***

HI!
I think both ways work, but a "circle" requires that all machines are running, and replication flows only one way around it. Cross-replicating in both directions may cause problems.

A "star" with one master would probably be the clearest way.
I have never tried it, but it should work.

--
Saku
OH1KH

oh1kh
Syncing Log-data between two or more machines

Hi Stefan!

"save data to local machine" logs to a separate mysqld_safe instance that is started by cqrlog.
That database uses the ~/.config/cqrlog/database folder for its data.
It does not have a username/password, so it is "wide open" (also to the local network: https://cqrlog.com/node/2314 ).

You can access it while cqrlog is running, via the socket, from the same PC. ( https://www.cqrlog.com/comment/8025#comment-8025 )

If you use a "real" mysql server instead, you have to grant user "cqrlog" some rights to access the databases. If nothing else is using the DB engine, you can give it full rights to the whole DB server.
( https://www.cqrlog.com/comment/6776#comment-6776 )

You can use the socket connection also for mysqldump, so that is the way to transfer the full DB data from mysqld_safe to the mysql server. Just use the full path for the sock file, not the "~/." form; mysql and mysqldump do not like ~ paths (at least on my Fedora).

For a tool, I use: https://www.cqrlog.com/comment/8453#comment-8453

--
Saku
OH1KH

do2hg
Hi Saku,

Thanks a lot so far.

I just tried installing cqrlog (Buster ships 2.3.0) and mariadb-server-10.3 on another machine (Debian Buster).
I want to know this works before I fiddle with my production machines.

I did (just for further reference):
$ sudo mysql
MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'cqrlog'@'%' IDENTIFIED BY 'my-secret';
Query OK, 0 rows affected (0.001 sec)

... then I fired up cqrlog:
$ cqrlog --debug=1
... clicked "no" to the save-to-local-machine question and entered the following:

Server name: 127.0.0.1 Port: 3306
User name: cqrlog Password: my-secret
X remember me X auto connect

... then I hit "Connect"; the button stays clicked for a while ...

And then I can rename the Log #1 and open it.

Very well.

What I do not understand is using the socket (I assume /home/myself/.config/cqrlog/database/sock) with mysqldump to migrate the data into mariadb.
If you could help me with that, I think I can do the migration as step one. Then I only need to look into the syncing later.

Thanks & 73
Stefan.

--
Stefan
DB4ST (ex DO2HG) --- German Ham Radio Station
D-32584 Löhne ----- Locator JO42IE

oh1kh
Syncing Log-data between two or more machines

Hi!
When cqrlog is running and "save data to local machine" is checked, you can connect to the database from the command line with:

mysql --socket=/home/saku/.config/cqrlog/database/sock cqrlog001

Change saku to your username and cqrlog001 to the log that you use. (The number part changes; 001 is the first log.)

In that case cqrlog starts a new database server, mysqld_safe (you can see it if you start cqrlog from the console with the --debug=1 parameter).
To access that server you can use the socket file (it is like a "serial port"), but also localhost port 64000 (you see that defined in the debug text output when cqrlog starts).
Either way should work, also with mysqldump.
Cqrlog just has to be running with "save data to local machine" checked; when cqrlog closes, the mysqld_safe server also shuts down.

--
Saku
OH1KH

do2hg
... almost there ...

Hi Saku,
thanks for your patient assistance.

In theory I'm almost there. I did the following:

$mysqldump --socket=/home/myself/.config/cqrlog/database/sock cqrlog001 >/tmp/cqrlog001-dump.sql

The file looks fine to me (as a superficial observer). Then I tried to import it into mariadb with the command from the other thread mentioned above:

$mysql -ucqrlog -pmypassword < /tmp/cqrlog001-dump.sql

and I get as response:
ERROR 1046 (3D000) at line 22: No database selected

... seems like I need to add the DB name ... no problem:
$mysql -Dcqrlog001 -ucqrlog -pmypassword < /tmp/cqrlog001-dump.sql

... but then I get:
ERROR 1959 (OP000) at line 314: Invalid role specification ``

But this could be due to the fact that for this proof-of-concept test I have two different versions of cqrlog running.
I'll try again another day with the same versions on another machine.

However, as I see it, I first need to create the DB to import into with cqrlog; then I can import the dump ...
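For later readers: the "No database selected" error happens because dumping a single database without --databases makes mysqldump omit the CREATE DATABASE/USE header, so the restore needs -D. Dumping with --databases avoids both that error and, in principle, the need to pre-create the database (a sketch, reusing the socket path and password placeholder from above):

```shell
# --databases makes mysqldump write "CREATE DATABASE IF NOT EXISTS ...; USE ..." into the dump,
# so the restore no longer needs the -D flag:
mysqldump --socket=/home/myself/.config/cqrlog/database/sock --databases cqrlog001 > /tmp/cqrlog001-dump.sql
mysql -ucqrlog -pmypassword < /tmp/cqrlog001-dump.sql
```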

Thanks again, Saku, I keep you posted on my progress ...
73
Stefan.

--
Stefan
DB4ST (ex DO2HG) --- German Ham Radio Station
D-32584 Löhne ----- Locator JO42IE

oh1kh
Syncing Log-data between two or more machines

HI Stefan!

>... but then I get:
>ERROR 1959 (OP000) at line 314: Invalid role specification ``

Yes, I see that error too. Something has changed since I last did this kind of restore; actually so long ago that I think the database was still MySQL then, not MariaDB.

That needs further investigation. I am not an SQL guru; maybe someone can immediately say what is wrong.

--
Saku
OH1KH

do2hg
Error importing SQL Dump

Hi Saku,
I had a closer look at the dump.
The problematic command sequence is:

/*!50003 CREATE*/ /*!50017 DEFINER=``*/ /*!50003 TRIGGER `cqrlog_main_ai` AFTER INSERT ON `cqrlog_main` FOR EACH ROW
insert into log_changes(id_cqrlog_main,cmd,qsodate,time_on,callsign,mode,freq,band) values
(NEW.id_cqrlog_main,'INSERT',NEW.qsodate,NEW.time_on,NEW.callsign,NEW.mode,NEW.freq,NEW.band) */;;

Looks to me like a change to the db-structure.
It does not allow the empty "``" in the "/*!50017 DEFINER=``*/" part, but I have no clue what the heck that means...
As the data itself follows later in the dump and is imported, I assume it is only this command that fails; at worst the whole line, but no more.

What I did on startup of CQRLOG was "Utils - Repair log database" ... just in case.
I also exported and restored the cqrlog config.

On first inspection the log looks OK and complete.
However, I see that I lost some preference settings:

  • Modes - manually defined modes
  • QTH Profiles

But that must be related to the export and import of the settings ...
After defining the QTH profiles as before, the log entries are matched to the right QTH profile. Good!

The rest looks OK on superficial inspection.
However, I am still unsure ...
What I'll try tomorrow is a full ADIF export from both DBs and a diff on them, just to gain assurance. ... For today I've had enough.

Thanks again & 73
Stefan.

--
Stefan
DB4ST (ex DO2HG) --- German Ham Radio Station
D-32584 Löhne ----- Locator JO42IE

oh1kh
Error importing SQL Dump

Hi !

Some test results:

I made a backup of my servo2 server's cqrlog databases as follows:

First I listed the databases:
sudo /usr/bin/mysql -B -N -e 'show databases like "cqr%"' | /usr/bin/xargs /bin/echo -n mysqldump --databases > /tmp/mycmd.sh

Then I added the filter from the stackoverflow link I sent before:
echo "| sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' > /tmp/cqr.sql" >> /tmp/mycmd.sh

The resulting command in the file /tmp/mycmd.sh was then:

mysqldump --databases cqrlog001 cqrlog002 cqrlog003 cqrlog004 cqrlog005 cqrlog006 cqrlog007 cqrlog008 cqrlog009 cqrlog_common cqrlog_web | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' > /tmp/cqr.sql

I set the execute bit and ran it with sudo (it could just as well have been run as the cqrlog DB user).

Then I copied the file cqr.sql to my second server servo4's /tmp folder.
I made a DB user cqrlog with webmin and granted it all permissions.
Then just:
[saku@servo4 tmp]$ mysql -ucqrlog -pmypasswd < cqr.sql

No errors, but it took some time.
After that I started cqrlog on the ham shack laptop and defined the database connection as servo4, port 3306.

And there were all my logs. I looked at my main log and everything is there, even the Preferences/Modes user-defined modes (I happened to have mode T10 there).
I do not use profiles, but I had one test log with profiles rig1 and rig2; they were also there.

So far it seems everything is ok.

The difference now is that I made the dump from the real server DB, not from the mysqld_safe instance started by cqrlog, but I believe that does not matter. The key is the pipe through the sed filter that modifies the DEFINER.
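To see what that sed filter does without touching a server, here is a small offline demo on sample DEFINER clauses shaped like the ones in this thread (the `cqrlog`@`localhost` definer is a made-up example):

```shell
# The filter rewrites any /*!50017 DEFINER=...*/ clause into a bare /*!50017 */,
# which lets the dump restore on a server where that account does not exist.

# Empty definer, as in the dump from cqrlog's bundled mysqld_safe server:
echo '/*!50017 DEFINER=``*/' | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/'
# prints: /*!50017 */

# Normal definer, as written by a real server:
echo '/*!50017 DEFINER=`cqrlog`@`localhost`*/' | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/'
# prints: /*!50017 */
```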

--
Saku
OH1KH

do2hg
Hi Saku,

The restore did not work for me:
$ mysql -ucqrlog -pmypasswd < cqr.sql
I got the following error:
ERROR 1064 (42000) at line 1987: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '' at line 1

This is kind of strange, as the manual restore of the single DB seemed to work despite the DEFINER error.

Therefore, next try: make single dumps of the DBs, as this worked before ... That works for cqrlog_common, but not for the logs cqrlog0*.
Hmm. This is strange.

The sed command changes /*!50017 DEFINER=``*/ into /*!50017 */ at one place in the log DBs,
but for some reason it is creating other trouble ...

What I'll try tomorrow is importing the DBs one by one with the DEFINER statement intact, and then doing a full ADIF export and running a diff on it.
If the data is OK, I'll probably go with that.

73 many thanks and good night!
Stefan.

--
Stefan
DB4ST (ex DO2HG) --- German Ham Radio Station
D-32584 Löhne ----- Locator JO42IE

do2hg
Hi Saku,

So, now I re-imported the DBs one by one without the change to the DEFINER statement, and ignored the error
ERROR 1959 (OP000) at line 314: Invalid role specification ``

Then I ran a full .adif export from cqrlog, once from the local DB and once from mariadb.
After aligning the QTH profile definitions 100%, there were only differences in the "Notes" field, which I rarely use.
So basically, after some manual fiddling, I have now got my logs successfully transferred to mariadb.

At least this part is now finished ...
Tomorrow I'll port the database to the other machine and try to look into the syncing issue.
73 and many thanks for all your help and patience Saku.
Stefan.

--
Stefan
DB4ST (ex DO2HG) --- German Ham Radio Station
D-32584 Löhne ----- Locator JO42IE

oh1kh
Error importing SQL Dump

Hi Stefan!

I dropped "ERROR 1959 (OP000) at line 314: Invalid role specification ``" into a Google search and found some interesting reading.

It seems that a "role" is a MariaDB innovation. I had to read more about this:
https://mariadb.com/kb/en/roles/

That could explain why the import worked before (in the MySQL days).

I would really like to hear a comment/explanation/fix from an SQL database guru!

--
Saku
OH1KH
