MongoDB Storage Provider
NoSQL document storage with GridFS support for attachments and flexible schema design.
NoSQL Flexibility · GridFS Support · TTL Indexes · Sharding Ready · Flexible Schema · Aggregation Pipeline
Installation
dotnet add package Zetian
dotnet add package Zetian.Storage.MongoDB
Quick Start
QuickStart.cs
using Zetian.Server;
using Zetian.Storage.MongoDB.Extensions;
// Basic setup with MongoDB
var server = new SmtpServerBuilder()
    .Port(25)
    .WithMongoDbStorage(
        "mongodb://localhost:27017",
        "smtp_database")
    .Build();

await server.StartAsync();
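Once the server is accepting mail, you can do a quick sanity check that messages are landing in MongoDB. The snippet below is a minimal sketch using the official C# driver directly; it assumes the settings above and the default collection name, smtp_messages (see the Configuration Options table further down).
using MongoDB.Bson;
using MongoDB.Driver;

// Minimal sanity check: count stored message documents.
// Assumes local MongoDB, database "smtp_database",
// and the default "smtp_messages" collection.
var client = new MongoClient("mongodb://localhost:27017");
var database = client.GetDatabase("smtp_database");
var messages = database.GetCollection<BsonDocument>("smtp_messages");

var count = await messages.CountDocumentsAsync(FilterDefinition<BsonDocument>.Empty);
Console.WriteLine($"Stored messages: {count}");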
Advanced Configuration
AdvancedConfig.cs
var server = new SmtpServerBuilder()
    .Port(25)
    .WithMongoDbStorage(
        "mongodb://localhost:27017",
        "smtp_database",
        config =>
        {
            config.CollectionName = "messages";
            config.GridFsBucketName = "attachments";

            // GridFS for large attachments
            config.UseGridFsForLargeMessages = true;
            config.GridFsThresholdMB = 50;

            // TTL for auto-cleanup
            config.EnableTTL = true;
            config.TTLDays = 30;

            // Sharding support
            config.ShardKeyField = "received_date";

            // Performance
            config.CompressMessageBody = true;
        })
    .Build();
GridFS for Large Attachments
Automatically handle large attachments with GridFS:
GridFS.cs
using System.IO;
using MongoDB.Driver;
using MongoDB.Driver.GridFS;

// GridFS automatically handles large attachments:
// files over the configured threshold are stored in chunks.

// Retrieve an attachment from GridFS
var gridFsBucket = new GridFSBucket(database, new GridFSBucketOptions
{
    BucketName = "attachments"
});

byte[] fileBytes;
using (var downloadStream = await gridFsBucket.OpenDownloadStreamByNameAsync("large-file.pdf"))
using (var memoryStream = new MemoryStream())
{
    await downloadStream.CopyToAsync(memoryStream);
    fileBytes = memoryStream.ToArray();
}

// Upload a file to GridFS
using var uploadStream = await gridFsBucket.OpenUploadStreamAsync("new-file.pdf");
await uploadStream.WriteAsync(fileBytes, 0, fileBytes.Length);
await uploadStream.CloseAsync();
16MB+ Files: handles files of any size (beyond MongoDB's 16MB document limit)
Chunked Storage: 255KB chunks by default
Streaming API: efficient memory usage (see the sketch below)
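To keep memory usage flat for very large files, you can stream GridFS content straight to disk instead of buffering it in a MemoryStream as above. This is a minimal sketch using the driver's GridFS API; the connection settings and bucket name are assumed to match the configuration example earlier on this page.
using System.IO;
using MongoDB.Driver;
using MongoDB.Driver.GridFS;

// Stream a large attachment directly to a file on disk,
// so the whole attachment is never held in memory.
var client = new MongoClient("mongodb://localhost:27017");
var database = client.GetDatabase("smtp_database");
var bucket = new GridFSBucket(database, new GridFSBucketOptions
{
    BucketName = "attachments"
});

await using var fileStream = File.Create("large-file.pdf");
await bucket.DownloadToStreamByNameAsync("large-file.pdf", fileStream);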
TTL Auto-Cleanup
Automatic message expiration with TTL indexes:
TTL.js
// TTL index for automatic cleanup
db.messages.createIndex(
  { "created_at": 1 },
  { expireAfterSeconds: 2592000 }  // 30 days
);
// Messages older than 30 days are automatically deleted
// No manual cleanup required!
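If you manage indexes from application code rather than mongosh, the same TTL index can be created with the C# driver. This is a sketch of the equivalent call, using the collection and field names from the examples on this page; when EnableTTL is set on the provider, the index is expected to be created for you, so treat this as a manual alternative.
using MongoDB.Bson;
using MongoDB.Driver;

// Equivalent TTL index created via the driver:
// documents expire 30 days after their created_at value.
var collection = database.GetCollection<BsonDocument>("messages");

var keys = Builders<BsonDocument>.IndexKeys.Ascending("created_at");
var options = new CreateIndexOptions { ExpireAfter = TimeSpan.FromDays(30) };

await collection.Indexes.CreateOneAsync(new CreateIndexModel<BsonDocument>(keys, options));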
Query Examples
Queries.cs
using MongoDB.Bson;
using MongoDB.Driver;

// MongoDB query examples
var collection = database.GetCollection<BsonDocument>("messages");

// Find by sender
var filter = Builders<BsonDocument>.Filter.Eq("from_address", "[email protected]");
var messages = await collection.Find(filter).ToListAsync();

// Find messages received in the last 7 days, newest first
var recentFilter = Builders<BsonDocument>.Filter.Gte("received_date", DateTime.UtcNow.AddDays(-7));
var recentMessages = await collection.Find(recentFilter)
    .Sort(Builders<BsonDocument>.Sort.Descending("received_date"))
    .ToListAsync();

// Aggregate per-sender statistics
var pipeline = new[]
{
    new BsonDocument("$group", new BsonDocument
    {
        { "_id", "$from_address" },
        { "count", new BsonDocument("$sum", 1) },
        { "total_size", new BsonDocument("$sum", "$message_size") }
    })
};
var stats = await collection.Aggregate<BsonDocument>(pipeline).ToListAsync();
Configuration Options
| Option | Default | Description |
|---|---|---|
| CollectionName | "smtp_messages" | Collection for message documents |
| UseGridFsForLargeMessages | true | Store oversized messages in GridFS |
| GridFsThresholdMB | 10 | Size (MB) above which GridFS is used |
| EnableTTL | false | Enable TTL auto-cleanup |
| TTLDays | 30 | Days before messages expire |
Best Practices
Index Key Fields: create indexes for the fields you query most often (see the sketch below)
Use TTL Indexes: automatic cleanup keeps storage in check
Enable Sharding: shard the collection to scale horizontally as needed
Replica Sets: run a replica set for high availability
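As a starting point for the first and last items, the sketch below creates indexes for the sender and date fields used in the query examples and shows a replica set connection string. The field names are the ones used elsewhere on this page; the provider may already create some indexes on its own, so check before duplicating them.
using MongoDB.Bson;
using MongoDB.Driver;

var collection = database.GetCollection<BsonDocument>("messages");

// Indexes supporting the query examples above
await collection.Indexes.CreateManyAsync(new[]
{
    new CreateIndexModel<BsonDocument>(
        Builders<BsonDocument>.IndexKeys.Ascending("from_address")),
    new CreateIndexModel<BsonDocument>(
        Builders<BsonDocument>.IndexKeys.Descending("received_date"))
});

// For high availability, point WithMongoDbStorage at a replica set
// instead of a single node, e.g.:
// "mongodb://mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0"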