Nucleus Application¶
Nucleus is the core application component of the RCIIS (Regional Customs Interconnectivity Information System) platform.
Overview¶
Nucleus serves as the central processing engine for customs data exchange, providing APIs and business logic for regional customs operations.
Configuration¶
Deployment Location¶
- Configuration: apps/rciis/nucleus/
- Environments: Local, Testing, Staging
- Chart: Custom RCIIS Helm chart from Harbor registry
Directory Structure¶
```
apps/rciis/nucleus/
├── local/
│   ├── values.yaml
│   ├── kustomization.yaml
│   └── extra/
│       └── default.conf
├── testing/
│   ├── values.yaml
│   ├── kustomization.yaml
│   └── extra/
│       └── default.conf
└── staging/
    ├── values.yaml
    ├── kustomization.yaml
    └── extra/
        └── default.conf
```
Application Architecture¶
Core Components¶
- API Gateway: Handles external API requests
- Business Logic Layer: Core customs processing
- Data Access Layer: Database interactions
- Integration Layer: External system connections
- Notification Service: Event-driven messaging
Technology Stack¶
- Runtime: .NET Core / ASP.NET Core
- Database: Microsoft SQL Server
- Message Queue: Apache Kafka (via Strimzi)
- Cache: Redis (optional)
- File Storage: MinIO S3-compatible storage
Helm Chart Configuration¶
Multi-Source Pattern¶
ArgoCD Application Configuration:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nucleus-staging
spec:
  sources:
    # Values from GitOps repository
    - repoURL: [email protected]:MagnaBC/rciis-devops.git
      targetRevision: master
      path: apps/rciis/nucleus/staging
      ref: values
    # Chart from Harbor registry
    - repoURL: oci://harbor.devops.africa/rciis
      targetRevision: 0.1.306
      chart: rciis
      helm:
        valueFiles:
          - $values/values.yaml
```
Values Configuration¶
Environment-specific values (values.yaml):
```yaml
# Application configuration
app:
  name: nucleus
  version: latest
  replicas: 2

# Container configuration
image:
  repository: harbor.devops.africa/rciis/nucleus
  tag: latest
  pullPolicy: Always

# Service configuration
service:
  type: ClusterIP
  port: 80
  targetPort: 8080

# Ingress configuration
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: nucleus-staging.devops.africa
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: nucleus-tls
      hosts:
        - nucleus-staging.devops.africa

# Resource limits
resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 500m
    memory: 512Mi

# Environment variables
env:
  - name: ASPNETCORE_ENVIRONMENT
    value: Staging
  - name: ConnectionStrings__DefaultConnection
    valueFrom:
      secretKeyRef:
        name: nucleus-database
        key: connection-string
```
Secret Management¶
Application Settings¶
SOPS-encrypted configuration: apps/rciis/secrets/{environment}/nucleus/appsettings.yaml
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: nucleus-appsettings
  namespace: nucleus
type: Opaque
stringData:
  appsettings.json: |
    {
      "ConnectionStrings": {
        "DefaultConnection": "[SOPS ENCRYPTED]",
        "RedisConnection": "[SOPS ENCRYPTED]"
      },
      "Kafka": {
        "BootstrapServers": "kafka-cluster-kafka-bootstrap:9092",
        "GroupId": "nucleus-consumer-group"
      },
      "MinIO": {
        "Endpoint": "minio:9000",
        "AccessKey": "[SOPS ENCRYPTED]",
        "SecretKey": "[SOPS ENCRYPTED]"
      }
    }
```
Database Configuration¶
SQL Server connection: apps/rciis/secrets/{environment}/nucleus/mssql-admin.yaml
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: nucleus-database
  namespace: nucleus
type: Opaque
stringData:
  connection-string: "Server=[SOPS ENCRYPTED];Database=NucleusDB;User Id=[SOPS ENCRYPTED];Password=[SOPS ENCRYPTED];TrustServerCertificate=true;"
  username: "[SOPS ENCRYPTED]"
  password: "[SOPS ENCRYPTED]"
```
Container Registry Access¶
Harbor registry credentials: apps/rciis/secrets/{environment}/nucleus/container-registry.yaml
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-registry
  namespace: nucleus
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: "[SOPS ENCRYPTED BASE64]"
```
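The encrypted files above are produced with the SOPS CLI, driven by creation rules at the repository root. A minimal `.sops.yaml` sketch, assuming age encryption (the recipient key and path pattern below are illustrative, not taken from the repository):

```yaml
# .sops.yaml (illustrative; recipient key and path pattern are assumptions)
creation_rules:
  # Encrypt only secret values, leaving YAML keys readable for code review
  - path_regex: apps/rciis/secrets/.*\.yaml
    encrypted_regex: ^(data|stringData)$
    age: age1examplerecipientkey0000000000000000000000000000000000000000
```

With rules in place, `sops --encrypt --in-place <file>` encrypts a secret file and `sops <file>` opens it for editing with transparent decryption.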
Kustomize Integration¶
Kustomization Configuration¶
KSOPS integration (kustomization.yaml):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: nucleus

# SOPS-encrypted secrets are rendered through the KSOPS generator below;
# listing the encrypted files directly under `resources` would apply them
# without decryption.
generators:
  - secret-generator.yaml

# Ship the custom NGINX config as a ConfigMap (generator name assumed)
configMapGenerator:
  - name: nucleus-nginx-config
    files:
      - extra/default.conf
```
Secret Generator¶
KSOPS secret generator (secret-generator.yaml):
```yaml
apiVersion: viaduct.ai/v1
kind: ksops
metadata:
  name: nucleus-secret-generator
  annotations:
    config.kubernetes.io/function: |
      exec:
        path: ksops
files:
  - ../../../secrets/staging/nucleus/appsettings.yaml
  - ../../../secrets/staging/nucleus/mssql-admin.yaml
  - ../../../secrets/staging/nucleus/container-registry.yaml
```
NGINX Configuration¶
Reverse Proxy Setup¶
Custom NGINX config (extra/default.conf):
```nginx
server {
    listen 80;
    server_name nucleus-staging.devops.africa;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    # API routes
    location /api/ {
        proxy_pass http://nucleus-service:8080/api/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Handle WebSocket upgrades
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Health check endpoint
    location /health {
        proxy_pass http://nucleus-service:8080/health;
        access_log off;
    }

    # Static assets
    location /static/ {
        proxy_pass http://nucleus-service:8080/static/;
        expires 1d;
        add_header Cache-Control "public, immutable";
    }
}
```
Database Integration¶
SQL Server Configuration¶
Database initialization:
```sql
-- Create database
CREATE DATABASE NucleusDB;
GO

USE NucleusDB;
GO

-- Create application user
CREATE LOGIN nucleus_user WITH PASSWORD = '[SECURE_PASSWORD]';
CREATE USER nucleus_user FOR LOGIN nucleus_user;
ALTER ROLE db_datareader ADD MEMBER nucleus_user;
ALTER ROLE db_datawriter ADD MEMBER nucleus_user;
ALTER ROLE db_ddladmin ADD MEMBER nucleus_user;
GO
```
Entity Framework Migrations¶
Migration commands:
```bash
# Add a new migration
dotnet ef migrations add <MigrationName> --project Nucleus.Data

# Update the database
dotnet ef database update --project Nucleus.Data

# Generate a SQL script
dotnet ef migrations script --project Nucleus.Data --output migration.sql
```
Kafka Integration¶
Message Consumers¶
Kafka consumer configuration:
```csharp
// Consumer configuration
var consumerConfig = new ConsumerConfig
{
    BootstrapServers = "kafka-cluster-kafka-bootstrap:9092",
    GroupId = "nucleus-consumer-group",
    AutoOffsetReset = AutoOffsetReset.Earliest,
    EnableAutoCommit = false
};

// Topic subscriptions
var topics = new[]
{
    "customs.declarations",
    "customs.approvals",
    "customs.notifications"
};
```
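Since Kafka runs via Strimzi, the topics above would typically be declared as `KafkaTopic` custom resources rather than auto-created. A sketch for one topic under that assumption (partition/replica counts and retention are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: customs.declarations
  namespace: kafka
  labels:
    # Must match the Kafka cluster name so the Topic Operator manages it
    strimzi.io/cluster: kafka-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000  # 7 days
```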
Message Producers¶
Event publishing:
```csharp
// Producer configuration
var producerConfig = new ProducerConfig
{
    BootstrapServers = "kafka-cluster-kafka-bootstrap:9092",
    Acks = Acks.All,
    MessageTimeoutMs = 30000
};

// Event publishing
var message = new Message<string, string>
{
    Key = declarationId,
    Value = JsonSerializer.Serialize(declarationEvent)
};
await producer.ProduceAsync("customs.declarations", message);
```
MinIO Storage Integration¶
Document Storage¶
MinIO client configuration:
```csharp
// MinIO client setup
var minioClient = new MinioClient()
    .WithEndpoint("minio:9000")
    .WithCredentials(accessKey, secretKey)
    .WithSSL(false)
    .Build();

// Document upload
var bucketName = "customs-documents";
var objectName = $"declarations/{declarationId}/document.pdf";
await minioClient.PutObjectAsync(new PutObjectArgs()
    .WithBucket(bucketName)
    .WithObject(objectName)
    .WithStreamData(fileStream)
    .WithObjectSize(fileStream.Length)
    .WithContentType("application/pdf"));
```
Monitoring and Health Checks¶
Application Health¶
Health check endpoints:
```csharp
// Health check configuration
services.AddHealthChecks()
    .AddSqlServer(connectionString, name: "database")
    .AddKafka(kafkaConfig, name: "kafka")
    .AddCheck<MinioHealthCheck>("minio");

// Health check endpoint
app.MapHealthChecks("/health", new HealthCheckOptions
{
    ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
});
```
Prometheus Metrics¶
Application metrics:
```csharp
// Metrics configuration
services.AddSingleton<IMetricsCollector, PrometheusMetricsCollector>();

// Custom metrics
public static readonly Counter ProcessedDeclarations = Metrics
    .CreateCounter("nucleus_declarations_processed_total",
        "Total number of processed declarations");

public static readonly Histogram ProcessingDuration = Metrics
    .CreateHistogram("nucleus_processing_duration_seconds",
        "Declaration processing duration");
```
Troubleshooting¶
Common Issues¶
Database Connection Failures:

1. Check SQL Server pod status
2. Verify connection string secrets
3. Check network policies
4. Review database permissions

Kafka Connection Issues:

1. Verify Kafka cluster status
2. Check topic existence and permissions
3. Review consumer group configuration
4. Monitor offset lag

MinIO Access Problems:

1. Check MinIO pod status
2. Verify access credentials
3. Check bucket policies
4. Review network connectivity
Diagnostic Commands¶
```bash
# Check application pods
kubectl get pods -n nucleus

# View application logs
kubectl logs -n nucleus deployment/nucleus

# Check service endpoints
kubectl get endpoints -n nucleus

# Test database connectivity
kubectl exec -n nucleus deployment/nucleus -- \
  dotnet ef dbcontext info

# Check Kafka topics
kubectl exec -n kafka kafka-cluster-kafka-0 -- \
  bin/kafka-topics.sh --list --bootstrap-server localhost:9092
```
Performance Monitoring¶
Application Performance Metrics:

- Request throughput (requests/second)
- Response latency (95th percentile)
- Database query performance
- Memory and CPU utilization
- Kafka message processing lag
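If the Prometheus Operator is in use, the custom metrics defined earlier can back alerting on these figures. A sketch of a `PrometheusRule` for the 95th-percentile latency (the threshold and resource name are assumptions):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: nucleus-alerts  # resource name assumed
  namespace: nucleus
spec:
  groups:
    - name: nucleus
      rules:
        # Fire when p95 declaration-processing latency exceeds 2s for 10m
        - alert: NucleusSlowProcessing
          expr: >
            histogram_quantile(0.95,
              sum(rate(nucleus_processing_duration_seconds_bucket[5m])) by (le)) > 2
          for: 10m
          labels:
            severity: warning
```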
Deployment Pipeline¶
CI/CD Integration¶
GitHub Actions workflow:
```yaml
name: Nucleus Deployment

on:
  push:
    branches: [master]
    paths: ['src/Nucleus/**']

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository so the Docker context and chart are available
      - uses: actions/checkout@v4

      - name: Build and push image
        run: |
          docker build -t harbor.devops.africa/rciis/nucleus:${{ github.sha }} .
          docker push harbor.devops.africa/rciis/nucleus:${{ github.sha }}

      - name: Update chart version
        run: |
          yq -i '.version = "0.1.${{ github.run_number }}"' charts/rciis/Chart.yaml
          git commit -am "Bump chart version to 0.1.${{ github.run_number }}"
          git push
```
ArgoCD Sync Strategy¶
Automated deployment:

- Testing environment deploys automatically
- Staging environment deploys after testing validation
- Production deployment requires manual approval
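In the ArgoCD Application spec this strategy maps to a `syncPolicy` block. A sketch of what the testing environment's automated policy might look like (the specific options are assumptions, not taken from the repository):

```yaml
spec:
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift in the cluster
    syncOptions:
      - CreateNamespace=true
```

For environments gated on manual approval, the `automated` block is omitted so syncs must be triggered explicitly.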
Security Considerations¶
Application Security¶
- Authentication: OAuth2/JWT token validation
- Authorization: Role-based access control (RBAC)
- Data Encryption: TLS for all communications
- Secret Management: SOPS-encrypted configuration
- Container Security: Regular image scanning
Network Security¶
- Network Policies: Restrict pod-to-pod communication
- Service Mesh: mTLS for service-to-service communication
- Ingress Security: Rate limiting and DDoS protection
- Database Security: Encrypted connections and access controls
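As one hedged example of the pod-to-pod restriction above, a `NetworkPolicy` could limit ingress to Nucleus to traffic from the NGINX ingress controller (the pod and namespace labels are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nucleus-allow-ingress
  namespace: nucleus
spec:
  podSelector:
    matchLabels:
      app: nucleus  # label assumed
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx  # namespace assumed
      ports:
        - protocol: TCP
          port: 8080
```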
Performance Optimization¶
Application Tuning¶
- Connection Pooling: Optimize database connections
- Caching Strategy: Implement Redis caching
- Async Processing: Use async/await patterns
- Resource Limits: Appropriate CPU and memory allocation
Infrastructure Scaling¶
- Horizontal Pod Autoscaling: CPU and memory-based scaling
- Vertical Pod Autoscaling: Dynamic resource allocation
- Cluster Autoscaling: Node-level scaling
- Database Scaling: Read replicas and connection pooling
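The CPU and memory-based pod autoscaling mentioned above could be expressed as an `autoscaling/v2` HPA; a sketch under assumed targets and bounds:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nucleus
  namespace: nucleus
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nucleus  # deployment name assumed
  minReplicas: 2   # matches the configured replica count
  maxReplicas: 6   # upper bound assumed
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75  # threshold assumed
```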