10-11-2019, 07:59 PM
Hi Clarence,
I am copying my responses to this thread for everyone to see.
For most applications, I would recommend doing your work locally and then syncing your data to the cloud on occasion. If a sync touches multiple tables, I would use the new cdb_batchSync API rather than syncing each table individually.
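To make that concrete, here is a minimal sketch of the pattern in LiveCode script. The table names and the argument shape I pass to cdb_batchSync are assumptions for illustration only; check the LiveCloud API docs for the exact signature.

on syncLocalWork
   local tTables, tResult
   -- tables my local edits touched (names are examples, not real tables)
   put "customers,invoices,lineItems" into tTables
   -- assumed: cdb_batchSync takes a comma-delimited list of table names,
   -- so one call replaces a separate sync round trip per table
   put cdb_batchSync(tTables) into tResult
   if tResult is not empty then answer "Batch sync returned:" && tResult
end syncLocalWork

The point of the batch call is that all the touched tables go up together in one operation instead of one network round trip per table.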
Blob tables are pretty light: we do not store the blob itself in the table, only a reference to it. You should be able to hold hundreds of thousands of blob references in the current architecture, and a future update will let that table hold unlimited data on the cloud. The blob data itself is already virtually unlimited today; the actual bytes live in an S3-style bucket fronted by a CDN for performance.
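A rough sketch of what that reference design looks like from script. The "photos" table, the "fileRef" key, cdb_read's argument order, and the assumption that the stored reference resolves to a CDN URL are all hypothetical illustrations, not the documented API.

on loadPhoto pRecordID
   local tRecordA, tRef
   -- the table row carries only a small reference string, not the bytes
   put cdb_read("photos", pRecordID, "cloud") into tRecordA
   put tRecordA["fileRef"] into tRef
   -- assumed: the reference resolves to a CDN URL, so the actual blob
   -- comes down from the S3-style bucket and is written to a local file
   put URL tRef into URL ("binfile:" & specialFolderPath("temporary") & "/photo.jpg")
end loadPhoto

That split is why the table stays light: the row holds a pointer measured in bytes, while the heavy binary data is served from the bucket through the CDN.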