I passed a milestone with a large pull request working towards the goal of allowing users to log in with a password and assume the permissions assigned to the groups and roles they belong to. Ultimately, I should have subdivided this task, but I didn’t fully understand my goals until I was underway. This is going to be a little bit of a journey, but it ends with some interesting thoughts on the design of PliantDb’s administration APIs.
Introduction to OPAQUE
The current industry-standard approach to password verification is to use some sort of secure one-way hash on the server: the client transmits the password to the server, and the server verifies that the provided password matches the stored hash. That approach still requires transmitting the password to the server. A safer approach would be to never transmit the password at all. This is where PAKEs (Password-Authenticated Key Exchange protocols) come in.
PliantDb is implementing the OPAQUE protocol for password authentication. All of the cryptographic work had already been done within the Rust ecosystem, but @dAxpeDDa assembled an easy-to-use wrapper, custodian-password.
How this works behind the scenes can best be summed up by showing the implementation of set_user_password_str:
async fn set_user_password_str(
    &self,
    username: &str,
    password: &str,
) -> Result<ClientFile, crate::Error> {
    // Begin the OPAQUE registration flow locally; the password never
    // leaves the client.
    let (registration, request) =
        ClientRegistration::register(&ClientConfig::default(), password)?;
    // Send the registration request to the server and await its response.
    let response = self.set_user_password(username, request).await?;
    // Finish the client side, producing the finalization payload for the
    // server and a file that can optionally be kept to validate logins.
    let (file, finalization, _export_key) = registration.finish(response)?;
    // Deliver the finalization so the server can store its state file.
    self.finish_set_user_password(username, finalization)
        .await?;
    Ok(file)
}
This code executes wherever the Connection is being used, calling lower-level APIs that interact with custodian-password. As a consumer of PliantDb, you gain OPAQUE support automatically behind the scenes.
The first line is where everything starts. A registration process is initialized with the password. This is done using a key-exchange algorithm: the client first sends a request to the server. The server uses a persistent file to store its own key, which it uses to sign and validate requests and finalizations. When it receives the request, it creates a response using its private key and the request. This response is sent to the client.
The client then creates a finalization payload to send to the server. Once the server receives it, it completes the server-side registration process, which generates a state file that can be stored on the User record just like a password hash. The file returned to the client can optionally be stored and used to further validate the login process.
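For context, the server’s half of that exchange might look something like the following. This is a minimal sketch: the type and method names (ServerRegistration, ServerFile, RegistrationRequest, and so on) are assumptions mirroring the client API above, not custodian-password’s confirmed signatures.

// Sketch of the server side of registration. Names are assumed to mirror
// the client API above; check custodian-password's docs for the real ones.
async fn handle_set_user_password(
    &self,
    username: &str,
    request: RegistrationRequest,
) -> Result<RegistrationResponse, Error> {
    // Sign the client's request using the server's persistent private key.
    let (registration, response) =
        ServerRegistration::register(&self.server_config, request)?;
    // Keep the in-progress registration until the finalization arrives.
    self.pending_registrations
        .lock()
        .await
        .insert(username.to_string(), registration);
    Ok(response)
}

async fn handle_finish_set_user_password(
    &self,
    username: &str,
    finalization: RegistrationFinalization,
) -> Result<(), Error> {
    let registration = self
        .pending_registrations
        .lock()
        .await
        .remove(username)
        .ok_or(Error::NoRegistrationInProgress)?;
    // Finishing produces the state file that is stored on the User record
    // in place of a traditional password hash.
    let file: ServerFile = registration.finish(finalization)?;
    self.store_password_file(username, file).await
}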
I won’t go into the login process because it’s incredibly similar.
OPAQUE does a lot to protect password safety, but if the server’s data is leaked, an imposter server could still validate logins. While this doesn’t leak the password, it means a client can’t be confident it’s connected to a non-compromised server.
Thus, before I even started implementing this, I tackled what I wrote about a few weeks ago: at-rest encryption.
At-Rest Encryption Implementation
I learned a lot while implementing this feature, and it’s not done. At the end of the day, I want PliantDb to offer secret storage similar to what many developers use environment variables for today. There are other great solutions, such as HashiCorp’s Vault, AWS Secrets Manager, and more. But, as I began understanding my own goals for PliantDb, I realized this was a natural evolution of the platform we’re developing.
For this pull request, all I needed was “master key”-driven at-rest encryption. But I didn’t implement just a basic setup: the vault supports separating master key storage from the database storage. This is intended to enable storing keys on S3-compatible storage endpoints or in products like those mentioned above.
When opening the database storage, the internal vault starts with a sealing key. It requests the current master keys from the key storage and decrypts them using its sealing key. These master keys are used to encrypt other files locally, and eventually they could be used to store other keys as well, turning PliantDb into a key server of sorts.
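To make that separation concrete, the boundary between the vault and its key storage can be pictured as a small trait. This is a hypothetical sketch of the shape of the abstraction; the trait and method names are mine, not PliantDb’s actual API.

use async_trait::async_trait;

// Hypothetical sketch of the key-storage boundary. Master keys are always
// sealed (encrypted by the vault's sealing key) before they leave the
// vault, so the storage backend only ever holds ciphertext.
#[async_trait]
pub trait MasterKeyStorage: Send + Sync {
    type Error: std::fmt::Debug;

    // Persist a sealed master key for this storage instance.
    async fn store_sealed_key(
        &self,
        storage_id: u64,
        sealed_key: Vec<u8>,
    ) -> Result<(), Self::Error>;

    // Fetch the sealed master key; the vault decrypts it locally with its
    // sealing key before using it.
    async fn sealed_key(&self, storage_id: u64)
        -> Result<Option<Vec<u8>>, Self::Error>;
}

An implementation of this shape could live on the local filesystem, in an S3-compatible bucket, or in a secrets manager; compromising the key storage alone would yield only sealed keys.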
Currently, you can only use the master key to encrypt documents. By setting the encryption_key field in the document’s header, the server will encrypt the document at rest. Additionally, if you want to enable full at-rest encryption, there’s a default_encryption_key configuration option.
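As a sketch of how those two options relate (the encryption_key header field and the default_encryption_key option come from above, but the surrounding struct shapes and the KeyId::Master spelling are assumptions):

// Hypothetical sketch of opting into at-rest encryption.

// Per document: mark the header before inserting, and the server encrypts
// this document when it is stored at rest.
let mut doc = Document::with_contents(&contents)?;
doc.header.encryption_key = Some(KeyId::Master);

// Globally: a default key causes every document to be encrypted at rest.
let config = Configuration {
    default_encryption_key: Some(KeyId::Master),
    ..Configuration::default()
};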
The main limitation of at-rest encryption is views: to properly support the quick filtering that views require, the bytes emitted for the Key are stored unencrypted. This is because the encryption is implemented atop sled, and we need to rely on sled’s tree behaviors operating on the unencrypted keys. Thus, either care needs to be taken with what is placed into view keys, or full-disk encryption should be used. If sled were to introduce its own encryption functionality, we would expose those configuration options.
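In practice, this means treating view keys as if they were stored in plaintext, because they are. As a hypothetical example (the map signature here is simplified, not PliantDb’s exact trait):

// Hypothetical, simplified view map function. The emitted key is written
// to sled unencrypted, so it must not contain sensitive data.
fn map(&self, document: &Document) -> MapResult<String, ()> {
    let contact = document.contents::<Contact>()?;
    // Fine: a coarse, non-sensitive key for filtering.
    Ok(Some(document.emit_key(contact.country)))
    // Emitting something like contact.social_security_number here would
    // land on disk in plaintext despite at-rest encryption.
}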
With this in place, I was able to securely store the password configuration necessary to drive custodian-password. I added a create_user API to allow creating a User, and was able to get a simple example working:
// Create the user, treating an already-existing user as success.
match client.create_user("ecton").await {
    Ok(_) | Err(pliantdb_core::Error::UniqueKeyViolation { .. }) => {}
    Err(other) => anyhow::bail!(other),
}
// Register a password via OPAQUE, keeping the returned client file.
let file = client.set_user_password_str("ecton", "hunter2").await?;
// Log in, passing the file to additionally validate the server.
client
    .login_with_password_str("ecton", "hunter2", Some(file))
    .await?;
But once you authenticated, what happened at that point? Nothing!
Hooking up Permissions
The core of the permissions system is driven by actionable. The parts that needed to be added in PliantDb were the PermissionGroup and Role concepts.
A PermissionGroup is a named set of permission Statements. Generally, I would recommend groups be used to enable specific sets of functionality on specific groups of resources.
A Role is a named set of PermissionGroup associations. Generally, roles are used to repeatedly assign sets of PermissionGroups to multiple users. For example, you may want a Customer Support role in your system that grants customer support-related permissions, as sketched below.
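Put together, a hypothetical customer-support setup might look like this. PermissionGroup’s shape and insert_into match the example later in this post; Role’s groups field and the ID-based linkage are assumptions.

// Hypothetical sketch linking a Role to a PermissionGroup.
let support_group = PermissionGroup {
    name: String::from("customer-support"),
    // A real group would list narrower statements scoped to
    // support-related resources instead of allowing everything.
    statements: vec![Statement::allow_all()],
}
.insert_into(&admin)
.await?;

let support_role = Role {
    name: String::from("Customer Support"),
    // Assumed: roles reference their groups by document id.
    groups: vec![support_group.header.id],
}
.insert_into(&admin)
.await?;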
These internal administration types are all stored using PliantDb’s Collections inside of a built-in admin database. This pull request exposes all of these types.
One of the most interesting realizations I had was that I wanted to pass the User type to the Backend when a client was authenticated. To do that, I had to expose the admin schema, and I wasn’t sure I was comfortable with that. As I began to hack together my example, I realized how much I needed to finish this administration API design. The working example code can be seen here.
While I’m excited at having tied so many pieces of work together, there’s still more to do to finish this functionality.
When to build APIs and when to just use PliantDb
The example linked before has a mixture of metaphors. Here are two ways that administrative data was created:
server.create_user("ecton").await

PermissionGroup {
    name: String::from("administrators"),
    statements: vec![Statement::allow_all()],
}
.insert_into(&admin)
.await
The first is implemented using an API. It has its own permission. The second uses the new Collection-based APIs to insert a new PermissionGroup into the admin database. It requires the permission Database.Document.Insert on the resource collection_resource_name("admin", "khonsulabs.user").
Internally, the first one also inserts a record, but it’s done at the server level in a context where those permissions are ignored; thus, the API permission is the only one that matters. This means permission could be granted to insert a User document but not to call create_user, or vice versa. This thought made me consider: should I even have those APIs at all?
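For example, an administrator could express that split with two independent grants. This sketch uses actionable-style builder calls; the exact action names and resource helpers are illustrative, not confirmed API.

// Hypothetical sketch: each capability is granted independently.
let statements = vec![
    // Grant the high-level API only: the connection may call create_user,
    // but gains no blanket right to insert documents.
    Statement::for_resource(ResourceName::any())
        .allowing(&PliantDbAction::Server(ServerAction::CreateUser)),
    // The raw insert would be a second, separate grant of
    // Database.Document.Insert on
    // collection_resource_name("admin", "khonsulabs.user").
];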
That thought wasn’t terribly long-lived. The granularity of permissions is what is going to make me feel comfortable exposing these admin types. By allowing granular edit operations through custom-designed, limited-purpose APIs, database administration can be done with more confidence about what capabilities each connection has. And, for power users, the ability to update those documents directly will still provide flexibility where our APIs haven’t yet expanded.
On one hand, I’m sad that this pull request is going to grow even more before I finish it. On the other hand, I’m very happy with how the design of PliantDb is progressing.
One More Thing: New Collection APIs
I glossed over some new APIs. Here’s a before and after:
Before:
let header = collection.push(&original_value).await?;
let mut doc = collection
    .get(header.id)
    .await?
    .expect("couldn't retrieve stored item");
let mut contents = doc.contents::<Basic>()?;
contents.value = String::from("updated_value");
doc.set_contents(&contents)?;
db.update::<Basic>(&mut doc).await?;
After:
let mut doc = original_value.insert_into(db).await?;
doc.contents.category = Some(String::from("updated"));
doc.update(db).await?;
This new API only works for serializable types; I didn’t want to sacrifice the flexibility for some collections to work with the raw bytes directly. This API offers much more fluidity when dealing with serialized types. There’s still room to grow in this API design, but the simple CRUD operations are already a huge improvement.
Finishing Up
Now that I can write unit tests that verify logins (by testing applied permissions), I can work on cleaning this branch up and getting it ready to commit. I may split some of the work into other issues rather than address everything in this pull request. I am hopeful, however, that by the end of the week I’ll have this merged into main.