  New API: cdb_importCSV()
Posted by: mark_talluto - 03-11-2020, 04:32 PM - Forum: Announcements - No Replies

We just released an update to LCM (version 2.5.3) that contains a new API called cdb_importCSV().
Enjoy!


  allowing new user signups to create cloned tables that hold their own data within
Posted by: sid - 03-07-2020, 10:15 AM - Forum: General - Replies (1)

Hi guys

I have been puzzling over an issue. Currently, normal users authenticated through the app you create cannot 'create' tables or keys programmatically from within the app, unless you hardwire your developer credentials.

So what happens in this case is that you end up with one table that contains all the users' data, and you create queries with the user as one of the rules: user = sid@acczap.com, etc.
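
As a rough sketch (the table and key names here are just examples), that per-user filter looks something like this with cdb_query:

Code:
// RETURN THE FULL RECORD DATA FOR EVERY CONTACT BELONGING TO THIS USER
put cdb_query("user","=","sid@acczap.com","contacts","cloud","recordData") into tContactsA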

It would be great to be able to clone a table. For example, let's say you have a table called 'contacts', and within this table you have a key called 'common', which is a boolean key (true/false). This is your main table.

So sid@acczap.com signs up, and the system automatically creates a table called sid@acczap.com[contacts] and also duplicates the common records, i.e. those with a 'true' value in the key {common}.

This would be awesome, as you could program it to put the user in a variable && "[contacts]" and use that as the table name to tell LiveCloud which table to use.

This feature would be even better if you could do it on a PROJECT basis, meaning the system would clone a project on signup, where the main one is acczap.com (with its tables and keys; some common records you want to clone would also be marked true in the common key). In this case, you would have to add a unique field to the login screen, such as a telephone number, which would be the identifier of the cloned project, for example acczap.com[27343124003]. This is because the 'cloned' project will have its own set of users; the main user can create a form to add users to the cloned project.

This way you can bill clients per 'project', based on the metrics within LiveCloud Manager.

More importantly, you do not mix up your clients' data in one gargantuan table ...

And even more importantly, you don't hardwire LiveCloud developer credentials into your app ...

Can this be done?

Sid


Importing a CSV to LiveCloud from your app, and notes on importing CSV data
Posted by: sid - 03-07-2020, 10:00 AM - Forum: General - Replies (7)

Hi Everyone:

First of all, the CSV import in LiveCloud is an awesome feature. Can you show us whether it's accessible via command or code from the client-side app? For instance, take the contacts app example: a client would want to import his own data into his client app.

It would also be cool to show how it can be done from a datagrid.

A note for people using MS Excel 2010 to generate .csv files: it errors, but if you upload the file to Google Sheets and then download it as a CSV, it works wonderfully.

I'm not sure if it's purely an Excel 2010 issue. Maybe later editions work.

Sid


  Query Builder and Code Snippet
Posted by: efrain.c - 03-02-2020, 09:08 PM - Forum: Code Samples - No Replies

Hi everyone,

We've received a few questions regarding queries. This video shows how you can use the query builder and code snippet to build a query and generate the code needed to perform the query: https://youtu.be/6g2pFEvdLcc

You can then take the code from the code snippet and use the output of the query to generate a list of names as follows:

Code:
local tOutputA, tList

put cdb_query("trip","=","trip1","testTable","cloud","recordData") into tOutputA

//EACH KEY OF tOutputA CONTAINS THE recordID OF A RECORD THAT MATCHES THE QUERY CRITERIA
//WE REPEAT THROUGH EACH KEY TO ACCESS THE DATA IN EACH RECORD
repeat for each key xRecordID in tOutputA
    //GRAB THE NAME OF THE PERSON IN THIS RECORD TO CREATE A LIST
    put tOutputA[xRecordID]["person"] & lf after tList
end repeat

//DELETE TRAILING LF
delete char -1 of tList
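
From there, you can drop the list into a field for display (the field name below is just an example):

Code:
// SHOW THE LIST OF NAMES IN A FIELD ON THE CURRENT CARD
put tList into field "PersonList"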

We hope examples like this with short videos will be helpful. Let us know if there's a topic you would like us to cover.


  cdb_Update for numeric and calculated data
Posted by: sid - 02-20-2020, 02:39 PM - Forum: General - Replies (10)

Hi Guys

I have a question regarding cloud updates to records with NUMBERS or currency figures (well, all are numbers) in them:

For example, I'm doing stock control and I need to update two values, onHand and grossProfit (these are keys in the cloud database).

So, as I understand it, you have to get the original key values for the above, store them in local variables, and then do the calculation. For example, initially there's 100 in onHand and $1000 in grossProfit. You sold 10 and made a buck on each thingummy you sold, so the calc is simple and your updated values are 90 in onHand and $1010 in grossProfit.

Imagine that there are 2 users hitting the same record in this scenario, and one of them is offline. What happens then?
One guy sells the 10, but the offline guy sells 50, so when the offline guy comes online and the cloud databases sync, the correct values should be 40 in onHand and $1060 in grossProfit. It becomes even more syncfusing if there are more people hitting that calculated key and a few of them are offline.

Will it sync correctly, i.e. parse and do the calculations correctly? (Remember there are local variables involved.)


Correct me if this already exists, but ideally this should be an operator-based update, meaning you subtract 10 from the existing cloud value rather than overwriting it.

put cdb_updateValue(tTable,tRecordID,tKey,tOperator,tValue,tTarget) into tCalcEffect

put cdb_updateValue("transactions",tRecordID,"onHand","-","10","cloud") into tCalcEffect

This would be a very important feature. Does it exist? Or do you have to handle calculated syncs by writing code?


For anyone else reading this:

I forgot to say that the above command, cdb_updateValue, does not exist.

I just used it as an example of how I think this should be handled. It should also only work on numeric values, for obvious reasons.
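
Here is roughly what I mean by handling it in code today: a read-modify-write. This is only a sketch; I'm guessing at the cdb_read and cdb_update signatures (check the API docs for the exact parameters), and it still doesn't solve the offline lost-update problem above:

Code:
local tRecordA, tUpdateA, tResult

// READ THE CURRENT VALUES FROM THE CLOUD RECORD (SIGNATURE AND RETURN SHAPE ASSUMED)
put cdb_read("transactions", tRecordID, "cloud") into tRecordA

// DO THE CALCULATION LOCALLY: SOLD 10 UNITS, MADE $1 ON EACH
put tRecordA["onHand"] - 10 into tUpdateA["onHand"]
put tRecordA["grossProfit"] + 10 into tUpdateA["grossProfit"]

// WRITE THE NEW VALUES BACK TO THE CLOUD (SIGNATURE ASSUMED)
put cdb_update("transactions", tRecordID, tUpdateA, "cloud") into tResult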


  LiveCloud architecture and a look at the future
Posted by: mark_talluto - 02-19-2020, 08:39 PM - Forum: Announcements - No Replies

Hi John,

CanelaDB stores data locally in a single array. This array encapsulates the entire project for a single region. The data is stored to disk in clusters. Clusters are collections of records that share a defined number of characters of their recordIDs.

-LiveCloud structure as seen in LCM
An account contains the following:
 Projects
  Tables
   Keys (columns)
    Data

-LiveCloud structure in terms of VMs
Each region consists of many VMs that hold our data. The following core sections are in walled-off VMs: Developer account management, Metric & Billing data, Users (cdbUsers), App Data, BLOBs. The region can scale both vertically and horizontally.
 Vertical: VMs can always get improved resources (RAM, CPUs, Disk) as needed.
 Horizontal: More VMs can be added to a region as needed.

-Some useful facts
 -No region is aware of the other regions.
 -We have developed software that analyzes the regions in near real-time to determine the general health of all the VMs. This software is responsible for building new VMs and moving data from one instance to another.
 

When the toolkit is exported, LCM builds the following structure for your project.
CanelaDB folder

  • Config folder with config file
  • Database folder with database data stored locally
  • Libraries folder with CanelaDB libraries

When considering locally stored data, the following array is created:
-Structure of local array
 dataBaseArray
  tableID
   cluster
    list of recordIDs that fit in the cluster
     individual recordID
      app keys
       app data
      cdb keys
       cdbDateCreated
       cdbDateModified
       etc...
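
In LiveCode terms, a single value in that array ends up being addressed roughly like this (the app key name is just an example):

Code:
// READ ONE APP VALUE AND ONE CDB METADATA VALUE STRAIGHT OUT OF THE LOCAL ARRAY
put dataBaseArray[tTableID][tCluster][tRecordID]["firstName"] into tFirstName
put dataBaseArray[tTableID][tCluster][tRecordID]["cdbDateCreated"] into tDateCreated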

-Limitations
 -Mobile devices have less RAM available than desktops. Thus, it is essential to sync data using defined recordIDs if your data is growing beyond what mobile devices can support.
 -Arrays consume more memory than the actual char count of the data being stored. The LiveCode indexing for each array consumes more RAM but makes lookups very fast. Thus, your data is not stored as a one-to-one relationship.
 -The instances that reside on the cloud side of things are also subject to these considerations, but we have much more extensive systems to store all of this data. We are continuously developing improved methods that will ultimately improve the database architecture. Some incremental improvements are already in place and are being tested in isolated regions. Using RAM as a storage medium is expensive for the LiveCloud service, as RAM costs much more than disk storage. We have plans to improve on this further.

-Future considerations
We are planning many additions to the architecture: improved scaling for large data sets, assignable processes, project tracking/analytics, application support for multiple regions, relationships, developer-defined indexing, and sharing projects with other accounts.

Some of these topics fall into the architecture side of things, while others will have a front-end API to improve access. It is important to note that these are complex systems that we have to develop, test, improve, and test a lot more before we can make them publicly available. Thus, they take time to prepare. We are not committing to the development of any of these future technologies by discussing them here. This is not an exhaustive list of what is written on whiteboards in our studio. I am openly bringing them to this discussion to demonstrate that we are looking at our next generation of improvements. All this said, we enjoy watching people dig this technology. It is exhilarating to see its adoption grow. We will do everything possible to bring some of these to light as soon as possible.

Scaling
We have been testing accessing data from our cloud-side cache system. NurseNotes has been using this on and off for the last 15 months. Improvements to the system have allowed us to rely on it for the past 6 months. Cloud reads are faster and scale to more simultaneous hits for data. This improvement will make it to the LiveCloud regions very soon.

Project tracking and analytics
We have been using this feature to help us track down NurseNotes performance issues. The feature allowed us to identify areas where we could do better. It could be used to track how your clients use your software and to provide timing data for your code. From the data collected, you can make critical decisions based on real data from your apps. The feature can programmatically place the triggers for you in common areas. If your code is encrypted, LCM will ask you for your passcode so it can crawl through your code. You can optionally place triggers anywhere you want. We have developed a robust data visualization view. You can see a simple example of it in the Account/Usage section in LCM. The data collected from this feature can be varied and quite extensive. We have not dedicated time to flesh out the rest of this feature. It will be released when it is ready.

Application support for multiple regions
We figured this would come up eventually. We do not need it ourselves at this time. It has not been prioritized because we notice that the majority of development appears to be geographically bound. I think this could be developed quickly. We have discussed the requirements. There aren't any plans to start this right now.

Developer indexing, assigned processes, and more caching
We have been planning this for a few weeks now. Our goal is to improve scaling, distribute power to those that need it, and generally lower the cost of providing the LiveCloud service. We have already worked out a caching system, as previously discussed for NurseNotes, that brings us closer to moving forward with the other ideas listed here. 

Assigned processes would allow a developer to buy one or more agents to process their data. Currently, all transactions are queued to be processed by shared instances. You could fast track your processing with dedicated agents that have less responsibility than instances have today. A given instance is responsible for an enormous amount of tasks. They are capable of understanding and working on all possible transactions. The agents would focus solely on a given teamID's projects. Multiple agents could be ready to pounce on traffic as it scales upward. We could even scale down as traffic subsides.

Taking the load off of RAM will lower costs, and it forces us to develop the supporting technologies to meet performance expectations. Allowing developers to choose which data should be in indexed RAM and which does not need to be there will be the first step.

I hope this in-depth look is useful to everyone. If this generates questions, please share them with us here in the forum.


Scalability and LiveCloud arrays
Posted by: Bizbuzz - 02-19-2020, 11:25 AM - Forum: General - Replies (3)

Hi,

OK, so I have a general but important question regarding scalability, how LiveCloud's array architecture (let's call it that) is built, and how we should design our apps and software with that in mind.

So first question (A) is: Is each account one very big array? Or is each project one array? Or is each table a separate array?

Let's say I'm building mobile apps that will eventually reach a worldwide audience with potentially millions of users in different countries. We have tables with login info (cdbUsers), others with profile information, etc. If we have one million users, then those tables would have 1 million rows. Then let's say that each user has its own table with contacts, and the number of contacts could be on average 100 per user.

What would be the best approach to designing tables in this case?

1. All contacts (100) for every user (1,000,000) are saved in the same table, with some 100,000,000 rows.

2. For every user a new table is created, and thus these tables will only hold 100 contacts/rows on average.

But if everything is one super-big array anyway, is there any difference between 1 and 2 above? No. 2 is what I have in mind while designing this, but on the other hand that will potentially create 1 million tables, and perhaps that will cause problems? Then again, we can fetch the tables directly with the help of the tableID, so that should be faster, right?
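
Just to make it concrete, here is roughly what I have in mind for option 2 (the naming convention is borrowed from the cloned-tables thread; the table, key, and value names are only examples):

Code:
// BUILD A PER-USER TABLE NAME, THEN QUERY THAT TABLE DIRECTLY
put tUserEmail & "[contacts]" into tTableName
put cdb_query("firstName","=","Anna",tTableName,"cloud","recordData") into tContactsA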

If there are limitations, it would be good to know whether we should, for example, have different projects for each country/state, or even different accounts. And of course we would use different regional LiveCloud servers for different parts of the world, and that would mean different accounts as it is now, right?

So in any case, some design guidelines would be helpful, and also whether you have made some really large tests yourselves with a ridiculous amount of data. :)

Kindly
John


  LC Manager crashes when adding billing info
Posted by: Bizbuzz - 02-17-2020, 09:21 AM - Forum: Bug Reports - Replies (3)

Hi,

Trying to add credit card info in the Billing section under Account in LC Manager causes the application to crash...

Br /John


  cdbUsers table non-responsive on all projects (UK)
Posted by: Bizbuzz - 02-16-2020, 11:59 AM - Forum: Bug Reports - Replies (1)

Hi,

Using UK server.

The cdbUsers table in all projects has some problems now. The table is non-responsive. All other tables can be read in LiveCloud Manager, but not cdbUsers.

In apps this is the response given:

(2020-02-16 12:44:43.151): Local authentication unsure: no local data to verify
(2020-02-16 12:44:43.519): Cloud authentication - received nonse value
(2020-02-16 12:44:43.802): Cloud authentication - received apiKey
(2020-02-16 12:44:43.802): Cloud authentication passed
(2020-02-16 12:45:14.149): Error: The server response could not be downloaded at this time.
(2020-02-16 12:45:14.149): Error: There was a problem getting a response from the server for cdb_batchQueryCloud request.

It's been like this a couple of hours now.

What's happening? Updates? 

Br /John


  Error: cdb_batchCreateCloud
Posted by: JereMiami - 02-13-2020, 10:56 AM - Forum: General - Replies (4)

After creating (via either create or batch create) a record in two tables, and then immediately (1) querying those tables, (2) reading those records, and (3) deleting those records ---

I get these three errors in combination: 

1) "The server response could not be downloaded at this time"
2) "Error: There was a problem getting a response from the server for cdb_batchCreateCloud request"
3) "Could not create record in cloud"

Any suggestion on how to avoid these errors?