Hi everyone!
For those who haven’t been following this topic, I’ve been working on a major update to Gladys since the end of June.
This update changes how historical sensor values are stored in Gladys, moving them from an OLTP database (SQLite) to an OLAP database (DuckDB).
We’re keeping SQLite for all relational data (users, house, scenes, etc.), and only sensor values will be stored in DuckDB.
In concrete terms, this update offers all users:
- Much smaller databases. @GBoulvin went from an 11 GB database to 65 MB with DuckDB, for 11 million stored states. This is possible because the two database systems work in completely different ways, and DuckDB aggressively compresses time-series data.
- Precise, ultra-fast charts, even over long periods. Gone is Gladys's imprecise aggregated data: you can now chart long time ranges right up to the latest sensor value, something that wasn't possible until now!
- Better Gladys performance in general: no more hourly data aggregation processes occupying disk bandwidth on small machines.
**Why wait until now to switch to this revolutionary system?**
DuckDB is the only embedded, single-file OLAP database on the market.
Until recently, DuckDB was in alpha/beta, and each release brought more breaking changes, which wasn't compatible with Gladys's need for stability.
In June 2024, DuckDB reached version 1.0 (stable), and I immediately started work on switching to it.
**What’s next?**
I’ve been running this version on my home instance since the start of the week to check that everything runs smoothly, and so far it’s going perfectly.
If things continue as they are, I’ll release this version on Monday, August 26.
**Migrate now?**
If you’re comfortable with the command line and want to migrate to this version right now, you can! (And let me know how it goes.)
The Docker image is available here:
gladysassistant/gladys:duckdb
You can pull this version:
docker pull gladysassistant/gladys:duckdb
Then stop your existing Gladys container:
docker stop gladys && docker rm gladys
Then relaunch a Gladys container with the DuckDB image:
docker run -d \
--log-driver json-file \
--log-opt max-size=10m \
--cgroupns=host \
--restart=always \
--privileged \
--network=host \
--name gladys \
-e NODE_ENV=production \
-e SERVER_PORT=80 \
-e TZ=Europe/Paris \
-e SQLITE_FILE_PATH=/var/lib/gladysassistant/gladys-production.db \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/gladysassistant:/var/lib/gladysassistant \
-v /dev:/dev \
-v /run/udev:/run/udev:ro \
gladysassistant/gladys:duckdb
(Don’t forget to modify this command for your personal use)
Then you can view the migration status on your dashboard.
And once the migration is complete, you can delete the old SQLite states in Gladys Settings → “System”.
Note: of course, you’ll have to switch back to the gladysassistant/gladys:v4 image once the update has been deployed!
**If you want to wait a while**
If you won’t be home on August 26 and don’t want the update to run automatically while you’re away, you can pause Watchtower until then and restart it when you get back, avoiding any problem you can’t fix remotely.
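If you run Watchtower as a container, pausing it can be as simple as stopping the container and starting it again later. A sketch, assuming your Watchtower container is named `watchtower` (adjust the name to match your setup):

```shell
# Stop Watchtower so no automatic update runs while you're away
docker stop watchtower

# ...later, when you're back home, resume automatic updates
docker start watchtower
```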
**I’m not pitting SQLite against DuckDB**
SQLite is a great tool, and we continue to use it in Gladys.
However, for the specific use case we have (time-series data storage), DuckDB is much better suited.
For the rest of the tables, SQLite remains the best solution on the market for our use, and I’m still a firm believer in SQLite!
So I’m not pitting these two databases against each other at all; they’re two radically different products.
Thanks to everyone who will be testing; I can’t wait to see this new technology in your hands!