Currently developing software at Softwore.com.
Mostly Go and Rust server applications, with occasional C and kernel topics.
Databases were apparently a somewhat mythical topic at the beginning of every startup launch. There was this narrative about the Triangle of Death of the CAP theorem, where you couldn't be fully ACID. Scaling across network partitions while keeping strong consistency might scare you into thinking you need expensive core counts. Between mutex concerns, expensive heap allocations, deadlocks, and warnings that blocking I/O was always the bottleneck, nobody really talks about what a database actually is.
A database is made of just two simple Linux syscalls: read() and write(). That's it.
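To make that concrete, here is a minimal sketch of the claim: the whole "database" is a single file, and every operation bottoms out in the read(2) and write(2) syscalls that Go's os package wraps. The data.db file name and the record format are placeholders, not anything standard.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// write(): append one record to the data file.
	f, err := os.OpenFile("data.db", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		panic(err)
	}
	if _, err := f.WriteString("user:1,alice\n"); err != nil {
		panic(err)
	}
	f.Close()

	// read(): load the whole file back.
	data, err := os.ReadFile("data.db")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(data))
}
```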
In a language such as Go, you don't pay the overhead of forking processes and locking OS threads just to synchronize memory and disk access.
Locking primitives can be as simple as a sync.Mutex in just a few lines of code, and since each new connection gets its own goroutine (a user-space thread), concurrent reading and writing directly from disk can be done without the overhead of OS threads.
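A hedged sketch of that model: the standard net package hands each TCP connection to its own goroutine, and a single sync.Mutex serializes writes to the data file. The port, the file name, and the line-per-record protocol are all assumptions made for illustration.

```go
package main

import (
	"bufio"
	"net"
	"os"
	"sync"
)

var mu sync.Mutex // guards all access to data.db

// appendLine writes one received line as one "row" on disk.
func appendLine(line string) error {
	mu.Lock()
	defer mu.Unlock()
	f, err := os.OpenFile("data.db", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.WriteString(line + "\n")
	return err
}

func handle(conn net.Conn) {
	defer conn.Close()
	scanner := bufio.NewScanner(conn)
	for scanner.Scan() {
		if err := appendLine(scanner.Text()); err != nil {
			return
		}
	}
}

func main() {
	ln, err := net.Listen("tcp", ":4000")
	if err != nil {
		panic(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go handle(conn) // one goroutine per connection
	}
}
```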
No need for schemas either, because the application can read from a single file and scan the text to match whatever it needs from the data.
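"Querying" without a schema could look like this sketch: scan the single file line by line and keep whatever matches. The data.db file name and the "alice" search string are placeholders.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("data.db")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// No schema: every line is just text, and a "query" is
	// whatever string matching the application happens to need.
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		if strings.Contains(scanner.Text(), "alice") {
			fmt.Println(scanner.Text())
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
}
```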
So fancy algorithms are not welcome. Why would you need to rewrite a B-tree or a log-structured store? No creepy CAP theorem, and of course no worries about dreaded databases. A Go server can scale out of the box using only the language runtime.
No databases wanted.