batchSync
#1
Hi all and LiveCloud team...

Just a quick question about BatchSync - I am using cdb_batchSync to update the cloud database from the local database at certain intervals... 
It certainly does the job, BUT... it seems to be a 'blocking' command, meaning that everything in the app stalls until the command completes.

Is there a 'non-blocking' version of this, or am I doing this wrong?

this is my algorithm for syncing the entire db in either direction:

command syncDB pSource
   local tTableNames, tTable, tTableID, tInputA, tMessage
   put cdb_tableNames() into tTableNames
   repeat for each line tTable in tTableNames
      put cdb_tableID(tTable) into tTableID
      put "*" into tInputA[tTableID]["cdbRecordID"] -- "*" = every record in the table
      put pSource into tInputA[tTableID]["source"]
      put true into tInputA[tTableID]["allowDeletes"]
      put false into tInputA[tTableID]["detectCollisions"]
   end repeat
   get cdb_batchSync(tInputA)

   if cdb_result() is not true then
      put "Error syncing cloud <-> local database: " & cdb_result("response") into tMessage
      answer tMessage
   end if
end syncDB


I think it may just be too complex to maintain a system that flags changes and syncs only those, not least because the data is relational, spans multiple tables, and is still under development. I would prefer a system that simply syncs everything 'in the background', if that were possible...

As it stands, the process to sync does not appear asynchronous and blocks the interface - grateful for any advice/tips (or if appropriate a feature request for an asynchronous sync!)

Many thanks
Stam
#2
Hi Stam,

Sync is blocking as things stand today. I have some thoughts on how the performance could be improved. One idea we have bounced around is moving heavy cloud calls to a separate process. We developed that technology many years ago and use it in our backend. It does come with its own challenges, though: race conditions would apply and pose a new level of complexity.

The other thought is to sync as efficiently as possible. For example, if we are syncing from local to cloud, we could run a local query for records with a modified date after a certain value (today's changes, as one example). Or you could track your last sync date and make that the lookup value. Then sync only those records. This would be very efficient and save a lot of time.

While this is not built into the API today, you have all the power to do the query now and sync the results with what is available today.
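
A minimal sketch of that incremental approach, building on the syncDB handler from the first post. The query helper (cdb_query), the cdbModifiedDate field name, and the getLastSyncDate/setLastSyncDate helpers are assumptions for the sketch; check the LiveCloud API reference for the exact calls and signatures:

Code:
command syncChangedRecords
   local tLastSync, tTableNames, tTable, tTableID, tQueryA, tRecordIDs, tInputA
   put getLastSyncDate() into tLastSync -- assumed helper: last successful sync time
   put cdb_tableNames() into tTableNames
   repeat for each line tTable in tTableNames
      put cdb_tableID(tTable) into tTableID
      -- hypothetical query for records modified since the last sync;
      -- verify the real signature in the LiveCloud API docs
      put tTableID into tQueryA["tableID"]
      put "cdbModifiedDate >" && tLastSync into tQueryA["query"]
      put cdb_query(tQueryA) into tRecordIDs
      if tRecordIDs is empty then next repeat
      -- sync only the changed records instead of "*"
      put tRecordIDs into tInputA[tTableID]["cdbRecordID"]
      put "local" into tInputA[tTableID]["source"]
      put true into tInputA[tTableID]["allowDeletes"]
      put false into tInputA[tTableID]["detectCollisions"]
   end repeat
   if the keys of tInputA is not empty then get cdb_batchSync(tInputA)
   if cdb_result() is true then setLastSyncDate the seconds -- assumed helper
end syncChangedRecords

Only tables with changes end up in tInputA, so the batch stays small between frequent syncs.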

-Mark
#3
Thanks Mark,

the blocking nature of sync is really quite an issue with large data sets, with waits of 5-10 seconds (sometimes much more).
It's actually much more responsive to just eschew the local database and use the cloud database only.

In an effort to streamline this, I've modified all my code successfully, but am still encountering one long-ish delay: cdb_auth.

For me, using the London server, it takes between 4.6 and 5.1 seconds to log in (I log the milliseconds each function takes to execute).
I use a hard-coded login as I'm maintaining a separate user database, so at startup the following code executes from the openCard handler:

Code:
put cdb_auth(tEmail, tPassword,"User") into gCDBAuthKey


cdb_ping gives an average of 239.6 ms
cdb_pingNode gives an average of 11.7 ms
Both of these seem a little longer than when I've previously tested, but still seem reasonable.

So, questions are:
1. Is it normal to have a 4-5 second delay to log in as above? (In which case I'll just throw up a spinner widget and give it no further thought.)
2. If not, and given the slightly higher server latency, is there a server problem?

Many thanks as always (and sorry for monopolising the forums!)
Stam
#4
Hi Stam,

cdb_auth() costs a bit more than expected because it is doing a fair amount of work under the hood.

1. It opens a connection with your region and requests a nonce value. This model is designed to prevent man-in-the-middle attacks. (Cloud call - trip one)
2. The nonce value received by the client is then sent back to the region to authenticate the client. (Receive cloud call - trip two) (Cloud call - trip three)
3. After the region agrees that you are who you say you are, it generates an API key. The parts of the region that your data touches are updated with the new API key.
4. The API key is then sent back to the client. (Receive cloud call - trip four)

A total of four trips back and forth are required to secure your connection.
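
Purely as an illustration, the four trips look something like this (the helper names here are invented for the sketch and are not LiveCloud internals):

Code:
-- illustrative pseudocode only; these helpers do not exist in the API
put requestNonce(tRegion) into tNonce                          -- trips one & two
put sendSignedCredentials(tEmail, tPassword, tNonce) into tReply -- trip three
put tReply["apiKey"] into gCDBAuthKey                          -- trip four
-- from here on, cdb_* calls reuse the session key automatically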

The value of doing all of this is big. The authentication is more secure than it would be without the man-in-the-middle check. Also, you do not need to worry about the API key, because it is managed for you during that session. There is not much weight from that point forward as it relates to transaction cost.

Your ping values are good. The in-flight time is reasonable from the values you posted. At this time, this is the cost of security. Luckily, you only have to pay it once, not for the entire session you are using your app.

I hope this helps explain the whys of it.

-Mark
#5
Your thoughts on working with local data or not are dependent on your needs. This topic brings up the question of data-modeling and its importance.

The work begins with how to structure your tables. There are opportunities for optimization at this phase of database development. I cannot make any assertions about what to do in your case without more details.

The value of holding local data can be performance, or data availability when there is no connection or the connection is lost.

It complicates things because it introduces data inconsistency. There are considerations such as: do other people need your new data, what about potential collisions, how often do we need to restore consistency, and many other questions.

Data modeling is time-consuming to get right. I liken it to writing code. You might need to go back and rethink it throughout the life cycle of your service.

I realize that none of this helps you directly. I am merely affirming that your consideration to move to cloud-only may make sense.

This forum is for everyone's benefit. The topics you have brought up are valuable to all. Please keep them coming.
#6
Thanks as always for the advice Mark.

That's fine re: cdb_auth -- I just needed to know that's normal, and I will adjust the interface to inform users of the delay.
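
For anyone following along, a minimal sketch of that interface adjustment. The spinner widget name "busySpinner" and the credential variables are placeholders, not part of the LiveCloud API:

Code:
on openCard
   global gCDBAuthKey
   -- tEmail and tPassword assumed to be populated earlier
   show widget "busySpinner" -- placeholder name for a spinner control on the card
   wait 0 milliseconds with messages -- let the UI redraw before the blocking call
   put cdb_auth(tEmail, tPassword, "User") into gCDBAuthKey
   hide widget "busySpinner"
end openCard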

Data modelling is indeed a big part of the work - although the choices with LiveCloud are a lot simpler than with a traditional SQL DBMS, I'm still figuring out the best compromise between speed, consistency and interface blocking...

