04-13-2020, 06:24 PM
(This post was last modified: 04-13-2020, 06:36 PM by mark_talluto.)
Q1: If you read the blob with a target of 'cloud', the call will go to the cloud. The data is stored locally for quick access. The time to get the data depends on the size of the blob and the performance of your internet connection. Resizing the images to be smaller will improve performance. You can store binary and text data in your non-blob tables. You have more blob space than database space, so it behooves you to take advantage of the blob space for non-queryable data. The blobs are stored in an S3 bucket. If you store binary data in a standard key, remember to treat the CRUD operations as you would if the data were text. You will not be able to use the blob APIs on data stored as a standard record, and LCM will not be able to display that data as it does when viewing the blob table.
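To illustrate the "treat it like text" point, here is a minimal sketch of storing a file's binary contents in a standard (non-blob) key. The createRecord call is a placeholder, not a real LiveCloud API name; substitute whichever create command your version provides.

```livecode
# Store binary data in a standard key; the CRUD call is the same as for
# text. "assets" and createRecord are illustrative placeholder names.
on saveBinaryToStandardKey pFilePath
   set the itemDelimiter to slash
   put the last item of pFilePath into tRecordA["name"]
   -- binfile: reads the file as raw binary rather than as text
   put URL ("binfile:" & pFilePath) into tRecordA["payload"]
   createRecord "assets", tRecordA -- placeholder for your create API
end saveBinaryToStandardKey
```

Remember that data stored this way counts against database space rather than blob space, and LCM will not render it.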
Q3: You can put images in a datagrid. You will have to download the image and load the local blob into memory, then put the image data into an image in your stack. I am not sure whether a datagrid can reference an image on disk directly, but you can have your images reference image data on disk. The datagrid pages data, much like how LCM pages data. As you scroll through your data in LCM, it reads the data in real time from the cloud and displays it in the table view. Our table is not a datagrid. The DG should be relatively efficient in its display of data.
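Loading a downloaded blob into an image control is plain LiveCode. A minimal sketch, assuming the blob has already been written to a local file; the control name "thumb" is illustrative:

```livecode
# Load a downloaded blob (e.g. a PNG) from disk into an image control.
on loadBlobIntoImage pBlobPath
   if there is not a file pBlobPath then exit loadBlobIntoImage
   -- binfile: reads the file as raw binary, which an image control accepts
   put URL ("binfile:" & pBlobPath) into image "thumb" of me
end loadBlobIntoImage
```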
For your large data sets, you will want to develop a cursor-like system where you track your place in the table. You can efficiently load only the portion of data that is to be displayed at one time. When your user page-flips or scrolls, you would then do another cloud call for that data and display it. You can forget the previous data by deleting that variable. This method will allow a mobile device to access millions of records. The trick is not to store millions of records in your view control, nor in memory, at one time. Only show what the user can see at any moment in time.
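The cursor idea can be sketched as below. The two fetch functions are placeholders, not real LiveCloud APIs; swap in whichever read calls your version uses.

```livecode
# Cursor-style pager: load only the visible page, then forget it.
# fetchRecordIDsForRange and readRecords are illustrative placeholders.
on showPage pPageNumber
   local tStart, tPageSize, tRecordIDs, tPageData
   put 50 into tPageSize
   put ((pPageNumber - 1) * tPageSize) + 1 into tStart
   -- fetch only the recordIDs for the visible page
   put fetchRecordIDsForRange(tStart, tPageSize) into tRecordIDs
   put readRecords(tRecordIDs) into tPageData -- one cloud call per page
   dispatch "populatePage" to group "DataGrid 1" with tPageData
   -- forget the previous page so memory stays flat
   delete variable tPageData
end showPage
```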
Q4: You can have only one blob table. This is not a performance limitation, since the data is stored efficiently in an S3 bucket. These buckets scale for you automatically and hold an almost infinite amount of data. You might consider syncing ranges of records instead of the whole table; your performance will increase dramatically. There are multiple methods for doing this. One method would be to create ranges of records and store them in a record in a table set aside just for that purpose. Each of those records would hold the line-delimited recordIDs of the blobs you would like to group together. You can then do a batch read on those recordIDs and enjoy better performance.
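A sketch of the range-record idea, with one record in a helper table holding the line-delimited recordIDs of a group of blobs. The create/read/batch calls below are placeholder names, not real LiveCloud APIs:

```livecode
# "blobRanges" is an illustrative helper table; createRecord,
# readRecordByKey and batchReadBlobs are placeholder API names.
on saveBlobRange pRangeName, pBlobRecordIDs
   put pRangeName into tRecordA["rangeName"]
   put pBlobRecordIDs into tRecordA["blobIDs"] -- one recordID per line
   createRecord "blobRanges", tRecordA
end saveBlobRange

function loadBlobRange pRangeName
   put readRecordByKey("blobRanges", "rangeName", pRangeName) into tRangeA
   -- batch-read every blob in the range in a single call
   return batchReadBlobs(tRangeA["blobIDs"])
end loadBlobRange
```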
Thoughts on your video: When doing reads, you may consider blocking access to your UI until the read is complete. I can see that you are allowing the user to click a button that will generate a cloud call. LiveCode does not completely block the internet transactions, so you have to prevent the user from clicking a button that would cause another call. Otherwise, your current downloads can be interrupted because a new read has been initiated. You can do this by displaying a control over your UI that catches all user interaction.
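A minimal way to do this in LiveCode is a full-card overlay graphic; "blocker" is an illustrative name for an opaque rectangle graphic layered above everything else, so it traps all mouse messages while visible:

```livecode
# Block the UI during a cloud read with a full-card overlay graphic.
on startBlockingUI
   set the rect of graphic "blocker" to the rect of this card
   set the blendLevel of graphic "blocker" to 60 -- semi-transparent
   show graphic "blocker"
end startBlockingUI

on stopBlockingUI
   hide graphic "blocker"
end stopBlockingUI
```

Call startBlockingUI just before the cloud read and stopBlockingUI when it returns.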
In another post, we discussed our caching system. When we release that, the cloud reads will be done using async methods, and your users will be able to request multiple reads at the same time. You already have control over sync vs. async for uploads: use the command cdb_setUploadMethod with a parameter of 'async' or 'sync'. We default to sync. I just noticed that we do not have this API in our docs. I'll work on that in the future.
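For example (the command name and parameters come straight from the above; calling it in preOpenStack is just one reasonable place):

```livecode
# Uploads default to sync; switch to async so the UI is not held up.
on preOpenStack
   cdb_setUploadMethod "async" -- accepts "async" or "sync"
end preOpenStack
```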