restructure of dirs, huge docs update

2025-11-17 16:29:14 -06:00
parent 456e052389
commit cd840cb8ca
87 changed files with 2827 additions and 1094 deletions

.gitignore

@@ -37,3 +37,6 @@ Thumbs.db
# Docker
.dockerignore
#mounted dirs
configs/

Dockerfile

@@ -23,8 +23,8 @@ RUN git clone https://github.com/robertdavidgraham/masscan /tmp/masscan && \
WORKDIR /app
# Copy requirements and install Python dependencies
COPY requirements.txt .
COPY requirements-web.txt .
COPY app/requirements.txt .
COPY app/requirements-web.txt .
RUN pip install --no-cache-dir -r requirements.txt && \
pip install --no-cache-dir -r requirements-web.txt
@@ -33,12 +33,12 @@ RUN pip install --no-cache-dir -r requirements.txt && \
RUN playwright install chromium
# Copy application code
COPY src/ ./src/
COPY templates/ ./templates/
COPY web/ ./web/
COPY migrations/ ./migrations/
COPY alembic.ini .
COPY init_db.py .
COPY app/src/ ./src/
COPY app/templates/ ./templates/
COPY app/web/ ./web/
COPY app/migrations/ ./migrations/
COPY app/alembic.ini .
COPY app/init_db.py .
# Create required directories
RUN mkdir -p /app/output /app/logs

README.md

@@ -1,28 +1,26 @@
# SneakyScanner
A comprehensive network scanning and infrastructure monitoring platform with both CLI and web interfaces. SneakyScanner uses masscan for fast port discovery, nmap for service detection, sslyze for SSL/TLS analysis, and Playwright for webpage screenshots to perform comprehensive infrastructure audits.
A comprehensive network scanning and infrastructure monitoring platform with a web interface and a standalone CLI scanner. SneakyScanner uses masscan for fast port discovery, nmap for service detection, sslyze for SSL/TLS analysis, and Playwright for webpage screenshots to perform thorough infrastructure audits.
**Features:**
- 🔍 **CLI Scanner** - Standalone scanning tool with YAML-based configuration
- 🌐 **Web Application** - Flask-based web UI with REST API for scan management
- 📊 **Database Storage** - SQLite database for scan history and trend analysis
- ⏱️ **Background Jobs** - Asynchronous scan execution with APScheduler
- 🔐 **Authentication** - Secure session-based authentication system
- 📈 **Historical Data** - Track infrastructure changes over time
**Primary Interface**: Web Application (Flask-based GUI)
**Alternative**: Standalone CLI Scanner (for testing and CI/CD)
## Table of Contents
---
1. [Quick Start](#quick-start)
- [Web Application (Recommended)](#web-application-recommended)
- [CLI Scanner (Standalone)](#cli-scanner-standalone)
2. [Features](#features)
3. [Web Application](#web-application)
4. [CLI Scanner](#cli-scanner)
5. [Configuration](#configuration)
6. [Output Formats](#output-formats)
7. [API Documentation](#api-documentation)
8. [Deployment](#deployment)
9. [Development](#development)
## Key Features
- 🌐 **Web Dashboard** - Modern web UI for scan management, scheduling, and historical analysis
- 📊 **Database Storage** - SQLite-based scan history with trend analysis and comparison
- **Scheduled Scans** - Cron-based automated scanning with APScheduler
- 🔧 **Config Creator** - CIDR-to-YAML configuration builder for quick setup
- 🔍 **Network Discovery** - Fast port scanning with masscan (all 65535 ports, TCP/UDP)
- 🎯 **Service Detection** - Nmap-based service enumeration with version detection
- 🔒 **SSL/TLS Analysis** - Certificate extraction, TLS version testing, cipher suite analysis
- 📸 **Screenshot Capture** - Automated webpage screenshots for all discovered web services
- 📈 **Drift Detection** - Expected vs. actual infrastructure comparison
- 📋 **Multi-Format Reports** - JSON, HTML, and ZIP archives with visual reports
- 🔐 **Authentication** - Session-based login for single-user deployments
- 🔔 **Alerts** *(Phase 5 - Coming Soon)* - Email and webhook notifications for misconfigurations
---
@@ -30,764 +28,148 @@ A comprehensive network scanning and infrastructure monitoring platform with bot
### Web Application (Recommended)
The web application provides a complete interface for managing scans, viewing history, and analyzing results.
1. **Configure environment:**
```bash
# Copy example environment file
# 1. Clone repository
git clone <repository-url>
cd SneakyScan
# 2. Configure environment
cp .env.example .env
# Edit .env and set SECRET_KEY and SNEAKYSCANNER_ENCRYPTION_KEY
# Generate secure keys (Linux/Mac)
export SECRET_KEY=$(python3 -c 'import secrets; print(secrets.token_hex(32))')
export ENCRYPTION_KEY=$(python3 -c 'import secrets; print(secrets.token_urlsafe(32))')
# 3. Build and start
docker compose build
docker compose up -d
# Update .env file with generated keys
sed -i "s/your-secret-key-here/$SECRET_KEY/" .env
sed -i "s/your-encryption-key-here/$ENCRYPTION_KEY/" .env
# 4. Initialize database
docker compose run --rm init-db --password "YourSecurePassword"
# 5. Access web interface
# Open http://localhost:5000
```
2. **Start the web application:**
```bash
docker-compose -f docker-compose-web.yml up -d
```
3. **Access the web interface:**
- Open http://localhost:5000 in your browser
- Default password: `admin` (change immediately after first login)
4. **Trigger your first scan:**
- Click "Run Scan Now" on the dashboard
- Or use the API:
```bash
curl -X POST http://localhost:5000/api/scans \
-H "Content-Type: application/json" \
-d '{"config_file":"/app/configs/example-site.yaml"}' \
-b cookies.txt
```
See [Deployment Guide](docs/ai/DEPLOYMENT.md) for detailed setup instructions.
**See [Deployment Guide](docs/DEPLOYMENT.md) for detailed setup instructions.**
### CLI Scanner (Standalone)
For quick one-off scans or scripting, use the standalone CLI scanner:
For quick one-off scans without the web interface:
```bash
# Build the image
docker-compose build
# Build and run
docker compose -f docker-compose-standalone.yml build
docker compose -f docker-compose-standalone.yml up
# Run a scan
docker-compose up
# Or run directly
docker run --rm --privileged --network host \
-v $(pwd)/configs:/app/configs:ro \
-v $(pwd)/output:/app/output \
sneakyscanner /app/configs/example-site.yaml
# Results saved to ./output/
```
Results are saved to the `output/` directory as JSON, HTML, and ZIP files.
**See [CLI Scanning Guide](docs/CLI_SCANNING.md) for detailed usage.**
---
## Features
## Documentation
### Web Application (Phase 2)
### User Guides
- **[Deployment Guide](docs/DEPLOYMENT.md)** - Installation, configuration, and production deployment
- **[CLI Scanning Guide](docs/CLI_SCANNING.md)** - Standalone scanner usage, configuration, and output formats
- **[API Reference](docs/API_REFERENCE.md)** - Complete REST API documentation
- **Dashboard** - View scan history, statistics, and recent activity
- **REST API** - Programmatic access to all scan management functions
- **Background Jobs** - Scans execute asynchronously without blocking
- **Database Storage** - Complete scan history with queryable data
- **Authentication** - Secure session-based login system
- **Pagination** - Efficiently browse large scan datasets
- **Status Tracking** - Real-time scan progress monitoring
- **Error Handling** - Comprehensive error logging and reporting
### Network Discovery & Port Scanning
- **YAML-based configuration** for defining scan targets and expectations
- **Comprehensive scanning using masscan**:
- Ping/ICMP echo detection (masscan --ping)
- TCP port scanning (all 65535 ports at 10,000 pps)
- UDP port scanning (all 65535 ports at 10,000 pps)
- Fast network-wide discovery in seconds
### Service Detection & Enumeration
- **Service detection using nmap**:
- Identifies services running on discovered TCP ports
- Extracts product names and versions (e.g., "OpenSSH 8.2p1", "nginx 1.18.0")
- Provides detailed service information including extra attributes
- Balanced intensity level (5) for accuracy and speed
### Security Assessment
- **HTTP/HTTPS analysis and SSL/TLS security assessment**:
- Detects HTTP vs HTTPS on web services
- Extracts SSL certificate details (subject, issuer, expiration, SANs)
- Calculates days until certificate expiration for monitoring
- Tests TLS version support (TLS 1.0, 1.1, 1.2, 1.3)
- Lists all accepted cipher suites for each supported TLS version
- Identifies weak cryptographic configurations
### Visual Documentation
- **Webpage screenshot capture** (NEW):
- Automatically captures screenshots of all discovered web services (HTTP/HTTPS)
- Uses Playwright with headless Chromium browser
- Viewport screenshots (1280x720) for consistent sizing
- 15-second timeout per page with graceful error handling
- Handles self-signed certificates without errors
- Saves screenshots as PNG files with references in JSON reports
- Screenshots organized in timestamped directories
- Browser reuse for optimal performance
### Reporting & Output
- **Automatic multi-format output** after each scan:
- Machine-readable JSON reports for post-processing
- Human-readable HTML reports with dark theme
- ZIP archives containing all outputs for easy sharing
- **HTML report features**:
- Comprehensive reports with dark theme for easy reading
- Summary dashboard with scan statistics, drift alerts, and security warnings
- Site-by-site breakdown with expandable service details
- Visual badges for expected vs. unexpected services
- SSL/TLS certificate details with expiration warnings
- Automatically generated after every scan
- **Dockerized** for consistent execution environment and root privilege isolation
- **Expected vs. Actual comparison** to identify infrastructure drift
- Timestamped reports with complete scan duration metrics
### Developer Resources
- **[Roadmap](docs/ROADMAP.md)** - Project roadmap, architecture, and planned features
---
## Web Application
## Current Status
### Overview
**Latest Version**: Phase 4 Complete ✅
**Last Updated**: 2025-11-17
The SneakyScanner web application provides a Flask-based interface for managing network scans. All scans are stored in a SQLite database, enabling historical analysis and trending.
### Completed Phases
### Key Features
- ✅ **Phase 1**: Database schema, SQLAlchemy models, settings system
- ✅ **Phase 2**: REST API, background jobs, authentication, web UI
- ✅ **Phase 3**: Dashboard, scheduling, trend charts
- ✅ **Phase 4**: Config creator, YAML editor, config management UI
**Scan Management:**
- Trigger scans via web UI or REST API
- View complete scan history with pagination
- Monitor real-time scan status
- Delete scans and associated files
### Next Up: Phase 5 - Email, Webhooks & Comparisons
**REST API:**
- Full CRUD operations for scans
- Session-based authentication
- JSON responses for all endpoints
- Comprehensive error handling
**Core Use Case**: Monitor infrastructure for misconfigurations that expose unexpected ports/services. When a scan detects an open port not in the config's `expected_ports` list, trigger immediate notifications.
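A minimal sketch of that check, assuming a parsed scan result shaped like the JSON report shown later in this README (the function name is illustrative, not the application's actual code):
```python
# Illustrative drift check: flag open ports that are absent from the expected lists.
def find_unexpected_ports(ip_result: dict) -> dict:
    expected = ip_result.get("expected", {})
    actual = ip_result.get("actual", {})
    return {
        "tcp": sorted(set(actual.get("tcp_ports", [])) - set(expected.get("tcp_ports", []))),
        "udp": sorted(set(actual.get("udp_ports", [])) - set(expected.get("udp_ports", []))),
    }

# Port 3000 is open but not expected, so it would trigger a Phase 5 notification.
result = {
    "expected": {"tcp_ports": [22, 80, 443], "udp_ports": [53]},
    "actual": {"tcp_ports": [22, 80, 443, 3000], "udp_ports": [53]},
}
print(find_unexpected_ports(result))  # {'tcp': [3000], 'udp': []}
```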
**Background Processing:**
- APScheduler for async scan execution
- Up to 3 concurrent scans (configurable)
- Status tracking: `running` → `completed`/`failed`
- Error capture and logging
**Planned Features**:
- Email notifications for infrastructure changes
- Webhook integrations (Slack, PagerDuty, custom SIEM)
- Alert rule engine (unexpected ports, cert expiry, weak TLS)
- Scan comparison reports for drift detection
**Database Schema:**
- 11 normalized tables for scan data
- Relationships: Scans → Sites → IPs → Ports → Services → Certificates → TLS Versions
- Efficient queries with indexes
- SQLite WAL mode for better concurrency
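A simplified sketch of the first links in that chain, using SQLAlchemy 2.x declarative models (table and column names here are illustrative, not the application's actual schema):
```python
# Simplified sketch of the Scans -> Sites relationship; not the real schema.
from sqlalchemy import ForeignKey, String, create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship

class Base(DeclarativeBase):
    pass

class Scan(Base):
    __tablename__ = "scans"
    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str] = mapped_column(String(255))
    status: Mapped[str] = mapped_column(String(20), default="running")
    sites: Mapped[list["Site"]] = relationship(back_populates="scan")

class Site(Base):
    __tablename__ = "sites"
    id: Mapped[int] = mapped_column(primary_key=True)
    scan_id: Mapped[int] = mapped_column(ForeignKey("scans.id"))
    name: Mapped[str] = mapped_column(String(255))
    scan: Mapped[Scan] = relationship(back_populates="sites")

# WAL mode is a per-database PRAGMA on SQLite:
engine = create_engine("sqlite:///sneakyscanner.db")
with engine.connect() as conn:
    conn.exec_driver_sql("PRAGMA journal_mode=WAL")
```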
### Web UI Routes
| Route | Description |
|-------|-------------|
| `/` | Redirects to dashboard |
| `/login` | Login page |
| `/logout` | Logout and destroy session |
| `/dashboard` | Main dashboard with stats and recent scans |
| `/scans` | Browse scan history (paginated) |
| `/scans/<id>` | View detailed scan results |
### API Endpoints
See [API_REFERENCE.md](docs/ai/API_REFERENCE.md) for complete API documentation.
**Core Endpoints:**
- `POST /api/scans` - Trigger new scan
- `GET /api/scans` - List scans (paginated, filterable)
- `GET /api/scans/{id}` - Get scan details
- `GET /api/scans/{id}/status` - Poll scan status
- `DELETE /api/scans/{id}` - Delete scan and files
**Settings Endpoints:**
- `GET /api/settings` - Get all settings
- `PUT /api/settings/{key}` - Update setting
- `GET /api/settings/health` - Health check
### Authentication
**Login:**
```bash
curl -X POST http://localhost:5000/auth/login \
-H "Content-Type: application/json" \
-d '{"password":"yourpassword"}' \
-c cookies.txt
```
**Use session for API calls:**
```bash
curl -X GET http://localhost:5000/api/scans \
-b cookies.txt
```
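The same flow works from Python with the `requests` library, reusing the session cookie for subsequent calls (a small sketch against the endpoints documented above):
```python
# Log in once, then reuse the authenticated session for API calls.
import requests

BASE = "http://localhost:5000"
session = requests.Session()
session.post(f"{BASE}/auth/login", json={"password": "yourpassword"}).raise_for_status()

# The session now carries the auth cookie, so no cookie-file juggling is needed.
print(session.get(f"{BASE}/api/scans").json())
```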
**Change password:**
1. Login to web UI
2. Navigate to Settings
3. Update app password
4. Or use CLI: `python3 web/utils/change_password.py`
See [Roadmap](docs/ROADMAP.md) for complete feature timeline.
---
## CLI Scanner
## Architecture
### Requirements
- Docker
- Docker Compose (optional, for easier usage)
### Using Docker Compose
1. Create or modify a configuration file in `configs/`:
```yaml
title: "My Infrastructure Scan"
sites:
- name: "Web Servers"
ips:
- address: "192.168.1.10"
expected:
ping: true
tcp_ports: [22, 80, 443]
udp_ports: []
```
```
┌─────────────────────────────────────────────────────────────┐
│ Flask Web Application │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐ │
│ │ Web UI │ │ REST API │ │ Scheduler │ │
│ │ (Dashboard) │ │ (JSON/CRUD) │ │ (APScheduler) │ │
│ └──────┬───────┘ └──────┬───────┘ └────────┬─────────┘ │
│ │ │ │ │
│ └─────────────────┴────────────────────┘ │
│ │ │
│ ┌────────▼────────┐ │
│ │ SQLAlchemy │ │
│ │ (ORM Layer) │ │
│ └────────┬────────┘ │
│ │
┌────────▼────────┐ │
│ SQLite3 DB │ │
│ (scan history) │ │
└─────────────────┘ │
└───────────────────────────┬─────────────────────────────────┘
┌──────────▼──────────┐
│ Scanner Engine │
│ (scanner.py) │
│ ┌────────────────┐ │
│ │ Masscan/Nmap │ │
│ │ Playwright │ │
│ │ sslyze │ │
│ └────────────────┘ │
└─────────────────────┘
```
2. Build and run:
```bash
docker-compose build
docker-compose up
```
3. Check results in the `output/` directory:
- `scan_report_YYYYMMDD_HHMMSS.json` - JSON report
- `scan_report_YYYYMMDD_HHMMSS.html` - HTML report
- `scan_report_YYYYMMDD_HHMMSS.zip` - ZIP archive
- `scan_report_YYYYMMDD_HHMMSS_screenshots/` - Screenshots directory
## Scan Performance
SneakyScanner uses a five-phase approach for comprehensive scanning:
1. **Ping Scan** (masscan): ICMP echo detection - ~1-2 seconds
2. **TCP Port Discovery** (masscan): Scans all 65535 TCP ports at 10,000 packets/second - ~13 seconds per 2 IPs
3. **UDP Port Discovery** (masscan): Scans all 65535 UDP ports at 10,000 packets/second - ~13 seconds per 2 IPs
4. **Service Detection** (nmap): Identifies services on discovered TCP ports - ~20-60 seconds per IP with open ports
5. **HTTP/HTTPS Analysis** (Playwright, SSL/TLS): Detects web protocols, captures screenshots, and analyzes certificates - ~10-20 seconds per web service
**Example**: Scanning 2 IPs with 10 open ports each (including 2-3 web services) typically takes 2-3 minutes total.
### Using Docker Directly
1. Build the image:
```bash
docker build -t sneakyscanner .
```
2. Run a scan:
```bash
docker run --rm --privileged --network host \
-v $(pwd)/configs:/app/configs:ro \
-v $(pwd)/output:/app/output \
sneakyscanner /app/configs/your-config.yaml
```
**Technology Stack**:
- **Backend**: Flask 3.x, SQLAlchemy 2.x, SQLite3, APScheduler 3.x
- **Frontend**: Jinja2, Bootstrap 5, Chart.js, Vanilla JavaScript
- **Scanner**: Masscan, Nmap, Playwright (Chromium), sslyze
- **Deployment**: Docker Compose, Gunicorn
---
## Configuration
The YAML configuration file defines the scan parameters:
```yaml
title: "Scan Title" # Required: Report title
sites: # Required: List of sites to scan
- name: "Site Name"
ips:
- address: "192.168.1.10"
expected:
ping: true # Expected ping response
tcp_ports: [22, 80] # Expected TCP ports
udp_ports: [53] # Expected UDP ports
```
See `configs/example-site.yaml` for a complete example.
---
## Output Formats
After each scan completes, SneakyScanner automatically generates three output formats:
1. **JSON Report** (`scan_report_YYYYMMDD_HHMMSS.json`): Machine-readable scan data with all discovered services, ports, and SSL/TLS information
2. **HTML Report** (`scan_report_YYYYMMDD_HHMMSS.html`): Human-readable report with dark theme, summary dashboard, and detailed service breakdown
3. **ZIP Archive** (`scan_report_YYYYMMDD_HHMMSS.zip`): Contains JSON report, HTML report, and all screenshots for easy sharing and archival
All files share the same timestamp for easy correlation. Screenshots are saved in a subdirectory (`scan_report_YYYYMMDD_HHMMSS_screenshots/`) and included in the ZIP archive. The report includes the total scan duration (in seconds) covering all phases: ping scan, TCP/UDP port discovery, service detection, screenshot capture, and report generation.
```json
{
"title": "Sneaky Infra Scan",
"scan_time": "2024-01-15T10:30:00Z",
"scan_duration": 95.3,
"config_file": "/app/configs/example-site.yaml",
"sites": [
{
"name": "Production Web Servers",
"ips": [
{
"address": "192.168.1.10",
"expected": {
"ping": true,
"tcp_ports": [22, 80, 443],
"udp_ports": [53]
},
"actual": {
"ping": true,
"tcp_ports": [22, 80, 443, 3000],
"udp_ports": [53],
"services": [
{
"port": 22,
"protocol": "tcp",
"service": "ssh",
"product": "OpenSSH",
"version": "8.2p1"
},
{
"port": 80,
"protocol": "tcp",
"service": "http",
"product": "nginx",
"version": "1.18.0",
"http_info": {
"protocol": "http",
"screenshot": "scan_report_20250115_103000_screenshots/192_168_1_10_80.png"
}
},
{
"port": 443,
"protocol": "tcp",
"service": "https",
"product": "nginx",
"http_info": {
"protocol": "https",
"screenshot": "scan_report_20250115_103000_screenshots/192_168_1_10_443.png",
"ssl_tls": {
"certificate": {
"subject": "CN=example.com",
"issuer": "CN=Let's Encrypt Authority X3,O=Let's Encrypt,C=US",
"serial_number": "123456789012345678901234567890",
"not_valid_before": "2025-01-01T00:00:00+00:00",
"not_valid_after": "2025-04-01T23:59:59+00:00",
"days_until_expiry": 89,
"sans": ["example.com", "www.example.com"]
},
"tls_versions": {
"TLS 1.0": {
"supported": false,
"cipher_suites": []
},
"TLS 1.1": {
"supported": false,
"cipher_suites": []
},
"TLS 1.2": {
"supported": true,
"cipher_suites": [
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
]
},
"TLS 1.3": {
"supported": true,
"cipher_suites": [
"TLS_AES_256_GCM_SHA384",
"TLS_AES_128_GCM_SHA256"
]
}
}
}
}
},
{
"port": 3000,
"protocol": "tcp",
"service": "http",
"product": "Node.js",
"http_info": {
"protocol": "http"
}
}
]
}
}
]
}
]
}
```
## Screenshot Capture Details
SneakyScanner automatically captures webpage screenshots for all discovered HTTP and HTTPS services, providing visual documentation of your infrastructure.
### How It Works
1. **Automatic Detection**: During the HTTP/HTTPS analysis phase, SneakyScanner identifies web services based on:
- Nmap service detection results (http, https, ssl, http-proxy)
- Common web ports (80, 443, 8000, 8006, 8080, 8081, 8443, 8888, 9443)
2. **Screenshot Capture**: For each web service:
- Launches headless Chromium browser (once per scan, reused for all screenshots)
- Navigates to the service URL (HTTP or HTTPS)
- Waits for network to be idle (up to 15 seconds)
- Captures viewport screenshot (1280x720 pixels)
- Handles SSL certificate errors gracefully (e.g., self-signed certificates)
3. **Storage**: Screenshots are saved as PNG files:
- Directory: `output/scan_report_YYYYMMDD_HHMMSS_screenshots/`
- Filename format: `{ip}_{port}.png` (e.g., `192_168_1_10_443.png`)
- Referenced in JSON report under `http_info.screenshot`
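A rough sketch of that capture loop using Playwright's sync API and the documented defaults (1280x720 viewport, 15-second timeout, ignored HTTPS errors); this is illustrative, not the actual `src/screenshot_capture.py`:
```python
# Illustrative capture loop; the real implementation lives in src/screenshot_capture.py.
from pathlib import Path
from playwright.sync_api import sync_playwright

def capture_screenshots(targets, out_dir):
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)  # one browser reused for the whole scan
        context = browser.new_context(
            viewport={"width": 1280, "height": 720},
            ignore_https_errors=True,  # tolerate self-signed certificates
        )
        page = context.new_page()
        for url, filename in targets:
            try:
                page.goto(url, wait_until="networkidle", timeout=15_000)
                page.screenshot(path=str(Path(out_dir) / filename))
            except Exception:
                continue  # best effort: log/skip failures, keep scanning
        browser.close()

capture_screenshots(
    [("https://192.168.1.10:443", "192_168_1_10_443.png")],
    "output/scan_report_20250115_103000_screenshots",
)
```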
### Screenshot Configuration
Default settings (configured in `src/screenshot_capture.py`):
- **Viewport size**: 1280x720 (captures visible area only, not full page)
- **Timeout**: 15 seconds per page load
- **Browser**: Chromium (headless mode)
- **SSL handling**: Ignores HTTPS errors (works with self-signed certificates)
- **User agent**: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
### Error Handling
Screenshots are captured on a best-effort basis:
- If a screenshot fails (timeout, connection error, etc.), the scan continues
- Failed screenshots are logged but don't stop the scan
- Services without screenshots simply omit the `screenshot` field in JSON output
## HTML Report Generation
SneakyScanner automatically generates comprehensive HTML reports after each scan, providing an easy-to-read visual interface for analyzing scan results.
### Automatic Generation
HTML reports are automatically created after every scan completes, along with JSON reports and ZIP archives. All three outputs share the same timestamp and are saved to the `output/` directory.
### Manual Generation (Optional)
You can also manually generate HTML reports from existing JSON scan data:
```bash
# Generate HTML report (creates report in same directory as JSON)
python3 src/report_generator.py output/scan_report_20251113_175235.json
# Specify custom output path
python3 src/report_generator.py output/scan_report.json /path/to/custom_report.html
```
### Report Features
The generated HTML report includes:
**Summary Dashboard**:
- **Scan Statistics**: Total IPs scanned, TCP/UDP ports found, services identified, web services, screenshots captured
- **Drift Alerts**: Unexpected TCP/UDP ports, missing expected services, new services detected
- **Security Warnings**: Expiring certificates (<30 days), weak TLS versions (1.0/1.1), self-signed certificates, high port services (>10000)
**Site-by-Site Breakdown**:
- Organized by logical site grouping from configuration
- Per-IP sections with status badges (ping, port drift summary)
- Service tables with expandable details (click any row to expand)
- Visual badges: green (expected), red (unexpected), yellow (missing/warning)
**Service Details** (click to expand):
- Product name, version, extra information, OS type
- HTTP/HTTPS protocol detection
- Screenshot links for web services
- SSL/TLS certificate details (expandable):
- Subject, issuer, validity dates, serial number
- Days until expiration with color-coded warnings
- Subject Alternative Names (SANs)
- TLS version support (1.0, 1.1, 1.2, 1.3) with cipher suites
- Weak TLS and self-signed certificate warnings
**UDP Port Handling**:
- Expected UDP ports shown with green "Expected" badge
- Unexpected UDP ports shown with red "Unexpected" badge
- Missing expected UDP ports shown with yellow "Missing" badge
- Note: Service detection not available for UDP (nmap limitation)
**Design**:
- Dark theme with slate/grey color scheme for comfortable reading
- Responsive layout works on different screen sizes
- No external dependencies - single HTML file
- Minimal JavaScript for expand/collapse functionality
- Optimized hover effects for table rows
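As a rough illustration, the security warnings above can be derived from fields already present in the JSON report (thresholds follow the documented <30 days / TLS 1.0-1.1 / >10000 rules; the helper name is hypothetical):
```python
# Hypothetical helper: derive report-style security warnings from one service entry.
def service_warnings(ip: str, svc: dict) -> list[str]:
    warnings = []
    if svc.get("port", 0) > 10000:
        warnings.append(f"{ip}:{svc['port']} high-port service")
    ssl_tls = svc.get("http_info", {}).get("ssl_tls", {})
    cert = ssl_tls.get("certificate", {})
    if cert and cert.get("days_until_expiry", 9999) < 30:
        warnings.append(f"{ip}:{svc['port']} certificate expires in {cert['days_until_expiry']} days")
    for version in ("TLS 1.0", "TLS 1.1"):
        if ssl_tls.get("tls_versions", {}).get(version, {}).get("supported"):
            warnings.append(f"{ip}:{svc['port']} accepts weak {version}")
    return warnings
```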
### Report Output
The HTML report is a standalone file that can be:
- Opened directly in any web browser (Chrome, Firefox, Safari, Edge)
- Shared via email or file transfer
- Archived for compliance or historical comparison
- Viewed without an internet connection or web server
Screenshot links in the report are relative paths, so keep the report and screenshot directory together.
---
## API Documentation
Complete API reference available at [docs/ai/API_REFERENCE.md](docs/ai/API_REFERENCE.md).
**Quick Reference:**
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/scans` | POST | Trigger new scan |
| `/api/scans` | GET | List all scans (paginated) |
| `/api/scans/{id}` | GET | Get scan details |
| `/api/scans/{id}/status` | GET | Get scan status |
| `/api/scans/{id}` | DELETE | Delete scan |
| `/api/settings` | GET | Get all settings |
| `/api/settings/{key}` | PUT | Update setting |
| `/api/settings/health` | GET | Health check |
**Authentication:** All endpoints (except `/api/settings/health`) require session authentication via `/auth/login`.
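For example, triggering a scan and polling it to completion from Python (endpoints and states match the table above and the documented `running`/`completed`/`failed` values; the exact response field names, such as `id` and `status`, are assumptions here):
```python
# Trigger a scan, then poll its status until it is no longer running.
import time
import requests

BASE = "http://localhost:5000"
session = requests.Session()
session.post(f"{BASE}/auth/login", json={"password": "yourpassword"}).raise_for_status()

scan = session.post(
    f"{BASE}/api/scans",
    json={"config_file": "/app/configs/example-site.yaml"},
).json()
scan_id = scan["id"]  # assumed response field

while True:
    status = session.get(f"{BASE}/api/scans/{scan_id}/status").json().get("status")
    print("scan status:", status)
    if status in ("completed", "failed"):
        break
    time.sleep(10)
```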
---
## Deployment
### Production Deployment
See [DEPLOYMENT.md](docs/ai/DEPLOYMENT.md) for comprehensive deployment guide.
**Quick Steps:**
1. **Configure environment variables:**
```bash
cp .env.example .env
# Edit .env and set secure keys
```
2. **Initialize database:**
```bash
docker-compose -f docker-compose-web.yml run --rm web python3 init_db.py
```
3. **Start services:**
```bash
docker-compose -f docker-compose-web.yml up -d
```
4. **Verify health:**
```bash
curl http://localhost:5000/api/settings/health
```
### Docker Volumes
The web application uses persistent volumes:
| Volume | Path | Description |
|--------|------|-------------|
| `data` | `/app/data` | SQLite database |
| `output` | `/app/output` | Scan results (JSON, HTML, ZIP, screenshots) |
| `logs` | `/app/logs` | Application logs |
| `configs` | `/app/configs` | YAML scan configurations |
**Backup:**
```bash
# Backup database
docker cp sneakyscanner_web:/app/data/sneakyscanner.db ./backup/
# Backup all scan results
docker cp sneakyscanner_web:/app/output ./backup/
# Or use docker-compose volumes
docker run --rm -v sneakyscanner_data:/data -v $(pwd)/backup:/backup alpine tar czf /backup/data.tar.gz /data
```
### Environment Variables
See `.env.example` for complete configuration options:
**Flask Configuration:**
- `FLASK_ENV` - Environment mode (production/development)
- `FLASK_DEBUG` - Debug mode (true/false)
- `SECRET_KEY` - Flask secret key for sessions (generate with `secrets.token_hex(32)`)
**Database:**
- `DATABASE_URL` - Database connection string (default: SQLite)
**Security:**
- `SNEAKYSCANNER_ENCRYPTION_KEY` - Encryption key for sensitive settings (generate with `secrets.token_urlsafe(32)`)
**Scheduler:**
- `SCHEDULER_EXECUTORS` - Number of concurrent scan workers (default: 2)
- `SCHEDULER_JOB_DEFAULTS_MAX_INSTANCES` - Max concurrent jobs (default: 3)
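Both keys can be generated with the calls referenced above and pasted into `.env`:
```python
# Generate values for SECRET_KEY and SNEAKYSCANNER_ENCRYPTION_KEY.
import secrets

print(f"SECRET_KEY={secrets.token_hex(32)}")
print(f"SNEAKYSCANNER_ENCRYPTION_KEY={secrets.token_urlsafe(32)}")
```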
---
## Development
### Project Structure
```
SneakyScanner/
├── src/ # Scanner engine (CLI)
│ ├── scanner.py # Main scanner application
│ ├── screenshot_capture.py # Webpage screenshot capture
│ └── report_generator.py # HTML report generation
├── web/ # Web application (Flask)
│ ├── app.py # Flask app factory
│ ├── models.py # SQLAlchemy models (11 tables)
│ ├── api/ # API blueprints
│ │ ├── scans.py # Scan management endpoints
│ │ ├── settings.py # Settings endpoints
│ │ └── ...
│ ├── auth/ # Authentication
│ │ ├── routes.py # Login/logout routes
│ │ ├── decorators.py # Auth decorators
│ │ └── models.py # User model
│ ├── routes/ # Web UI routes
│ │ └── main.py # Dashboard, scans pages
│ ├── services/ # Business logic
│ │ ├── scan_service.py # Scan CRUD operations
│ │ └── scheduler_service.py # APScheduler integration
│ ├── jobs/ # Background jobs
│ │ └── scan_job.py # Async scan execution
│ ├── utils/ # Utilities
│ │ ├── settings.py # Settings manager
│ │ ├── pagination.py # Pagination helper
│ │ └── validators.py # Input validation
│ ├── templates/ # Jinja2 templates
│ │ ├── base.html # Base layout
│ │ ├── login.html # Login page
│ │ ├── dashboard.html # Dashboard
│ │ └── errors/ # Error templates
│ └── static/ # Static assets
│ ├── css/
│ ├── js/
│ └── images/
├── templates/ # Report templates (CLI)
│ └── report_template.html # HTML report template
├── tests/ # Test suite
│ ├── conftest.py # Pytest fixtures
│ ├── test_scan_service.py # Service tests
│ ├── test_scan_api.py # API tests
│ ├── test_authentication.py # Auth tests
│ ├── test_background_jobs.py # Scheduler tests
│ └── test_error_handling.py # Error handling tests
├── migrations/ # Alembic database migrations
│ └── versions/
│ ├── 001_initial_schema.py
│ ├── 002_add_scan_indexes.py
│ └── 003_add_scan_timing_fields.py
├── configs/ # Scan configurations
│ └── example-site.yaml
├── output/ # Scan results
├── docs/ # Documentation
│ ├── ai/ # Development docs
│ │ ├── API_REFERENCE.md
│ │ ├── DEPLOYMENT.md
│ │ ├── PHASE2.md
│ │ ├── PHASE2_COMPLETE.md
│ │ └── ROADMAP.md
│ └── human/
├── Dockerfile # Scanner + web app image
├── docker-compose.yml # CLI scanner compose
├── docker-compose-web.yml # Web app compose
├── requirements.txt # Scanner dependencies
├── requirements-web.txt # Web app dependencies
├── alembic.ini # Alembic configuration
├── init_db.py # Database initialization
├── .env.example # Environment template
├── CLAUDE.md # Developer guide
└── README.md # This file
```
### Running Tests
**In Docker:**
```bash
docker-compose -f docker-compose-web.yml run --rm web pytest tests/ -v
```
**Locally (requires Python 3.12+):**
```bash
pip install -r requirements-web.txt
pytest tests/ -v
# With coverage
pytest tests/ --cov=web --cov-report=html
```
**Test Coverage:**
- 100 test functions across 6 test files
- 1,825 lines of test code
- Coverage: Service layer, API endpoints, authentication, error handling, background jobs
### Database Migrations
**Create new migration:**
```bash
docker-compose -f docker-compose-web.yml run --rm web alembic revision --autogenerate -m "Description"
```
**Apply migrations:**
```bash
docker-compose -f docker-compose-web.yml run --rm web alembic upgrade head
```
**Rollback:**
```bash
docker-compose -f docker-compose-web.yml run --rm web alembic downgrade -1
```
## Security Notice
This tool requires:
- `--privileged` flag or `CAP_NET_RAW` capability for masscan and nmap raw socket access
⚠️ **Important**: This tool requires:
- `--privileged` flag or `CAP_NET_RAW` capability for raw socket access (masscan/nmap)
- `--network host` for direct network access
Only use this tool on networks you own or have explicit authorization to scan. Unauthorized network scanning may be illegal in your jurisdiction.
**Only use this tool on networks you own or have explicit authorization to scan.** Unauthorized network scanning may be illegal in your jurisdiction.
---
### Security Best Practices
## Roadmap
1. Run on dedicated scan server (not production systems)
2. Restrict network access with firewall rules
3. Use strong passwords and encryption keys
4. Enable HTTPS in production (reverse proxy recommended)
5. Regularly update Docker images and dependencies
**Current Phase:** Phase 2 Complete ✅
**Completed Phases:**
- ✅ **Phase 1** - Database foundation, Flask app structure, settings system
- ✅ **Phase 2** - REST API, background jobs, authentication, basic UI
**Upcoming Phases:**
- 📋 **Phase 3** - Enhanced dashboard, trend charts, scheduled scans (Weeks 5-6)
- 📋 **Phase 4** - Email notifications, scan comparison, alert rules (Weeks 7-8)
- 📋 **Phase 5** - CLI as API client, token authentication (Week 9)
- 📋 **Phase 6** - Advanced features (vulnerability detection, PDF export, timeline view)
See [ROADMAP.md](docs/ai/ROADMAP.md) for detailed feature planning.
See [Deployment Guide](docs/DEPLOYMENT.md) for production security checklist.
---
## Contributing
This is a personal/small team project. For bugs or feature requests:
1. Check existing issues
2. Create detailed bug reports with reproduction steps
3. Submit pull requests with tests
@@ -800,27 +182,17 @@ MIT License - See LICENSE file for details
---
## Security Notice
This tool requires:
- `--privileged` flag or `CAP_NET_RAW` capability for masscan and nmap raw socket access
- `--network host` for direct network access
**⚠️ Important:** Only use this tool on networks you own or have explicit authorization to scan. Unauthorized network scanning may be illegal in your jurisdiction.
---
## Support
**Documentation:**
- [API Reference](docs/ai/API_REFERENCE.md)
- [Deployment Guide](docs/ai/DEPLOYMENT.md)
- [Developer Guide](CLAUDE.md)
- [Roadmap](docs/ai/ROADMAP.md)
**Documentation**:
- [Deployment Guide](docs/DEPLOYMENT.md)
- [CLI Scanning Guide](docs/CLI_SCANNING.md)
- [API Reference](docs/API_REFERENCE.md)
- [Roadmap](docs/ROADMAP.md)
**Issues:** https://github.com/anthropics/sneakyscanner/issues
**Issues**: email me ptarrant at gmail dot com
---
**Version:** 2.0 (Phase 2 Complete)
**Last Updated:** 2025-11-14
**Version**: Phase 4 Complete
**Last Updated**: 2025-11-17

docker-compose-standalone.yml

@@ -0,0 +1,13 @@
version: '3.8'
services:
scanner:
build: .
image: sneakyscanner:latest
container_name: sneakyscanner
privileged: true # Required for masscan raw socket access
network_mode: host # Required for network scanning
volumes:
- ./configs:/app/configs:ro
- ./output:/app/output
command: /app/configs/example-site.yaml

docker-compose-web.yml

@@ -1,64 +0,0 @@
version: '3.8'
services:
web:
build: .
image: sneakyscanner:latest
container_name: sneakyscanner-web
# Override entrypoint to run Flask app instead of scanner
entrypoint: ["python3", "-u"]
command: ["-m", "web.app"]
# Note: Using host network mode for scanner capabilities, so no port mapping needed
# The Flask app will be accessible at http://localhost:5000
volumes:
# Mount configs directory for scan configurations (read-write for web UI management)
- ./configs:/app/configs
# Mount output directory for scan results
- ./output:/app/output
# Mount database file for persistence
- ./data:/app/data
# Mount logs directory
- ./logs:/app/logs
environment:
# Flask configuration
- FLASK_APP=web.app
- FLASK_ENV=${FLASK_ENV:-production}
- FLASK_DEBUG=${FLASK_DEBUG:-false}
- FLASK_HOST=0.0.0.0
- FLASK_PORT=5000
# Database configuration (SQLite in mounted volume for persistence)
- DATABASE_URL=sqlite:////app/data/sneakyscanner.db
# Security settings
- SECRET_KEY=${SECRET_KEY:-dev-secret-key-change-in-production}
- SNEAKYSCANNER_ENCRYPTION_KEY=${SNEAKYSCANNER_ENCRYPTION_KEY:-}
# Optional: CORS origins (comma-separated)
- CORS_ORIGINS=${CORS_ORIGINS:-*}
# Optional: Logging level
- LOG_LEVEL=${LOG_LEVEL:-INFO}
# Scheduler configuration (APScheduler)
- SCHEDULER_EXECUTORS=${SCHEDULER_EXECUTORS:-2}
- SCHEDULER_JOB_DEFAULTS_MAX_INSTANCES=${SCHEDULER_JOB_DEFAULTS_MAX_INSTANCES:-3}
# Scanner functionality requires privileged mode and host network for masscan/nmap
privileged: true
network_mode: host
# Health check to ensure web service is running
healthcheck:
test: ["CMD", "python3", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:5000/api/settings/health').read()"]
interval: 60s
timeout: 10s
retries: 3
start_period: 40s
restart: unless-stopped
# Optional: Initialize database on first run
# Run with: docker-compose -f docker-compose-web.yml run --rm init-db
init-db:
build: .
image: sneakyscanner:latest
container_name: sneakyscanner-init-db
entrypoint: ["python3"]
command: ["init_db.py", "--db-url", "sqlite:////app/data/sneakyscanner.db"]
volumes:
- ./data:/app/data
profiles:
- tools

docker-compose.yml

@@ -1,13 +1,64 @@
version: '3.8'
services:
scanner:
web:
build: .
image: sneakyscanner:latest
container_name: sneakyscanner
privileged: true # Required for masscan raw socket access
network_mode: host # Required for network scanning
container_name: sneakyscanner-web
# Override entrypoint to run Flask app instead of scanner
entrypoint: ["python3", "-u"]
command: ["-m", "web.app"]
# Note: Using host network mode for scanner capabilities, so no port mapping needed
# The Flask app will be accessible at http://localhost:5000
volumes:
- ./configs:/app/configs:ro
# Mount configs directory for scan configurations (read-write for web UI management)
- ./configs:/app/configs
# Mount output directory for scan results
- ./output:/app/output
command: /app/configs/example-site.yaml
# Mount database file for persistence
- ./data:/app/data
# Mount logs directory
- ./logs:/app/logs
environment:
# Flask configuration
- FLASK_APP=web.app
- FLASK_ENV=${FLASK_ENV:-production}
- FLASK_DEBUG=${FLASK_DEBUG:-false}
- FLASK_HOST=0.0.0.0
- FLASK_PORT=5000
# Database configuration (SQLite in mounted volume for persistence)
- DATABASE_URL=sqlite:////app/data/sneakyscanner.db
# Security settings
- SECRET_KEY=${SECRET_KEY:-dev-secret-key-change-in-production}
- SNEAKYSCANNER_ENCRYPTION_KEY=${SNEAKYSCANNER_ENCRYPTION_KEY:-}
# Optional: CORS origins (comma-separated)
- CORS_ORIGINS=${CORS_ORIGINS:-*}
# Optional: Logging level
- LOG_LEVEL=${LOG_LEVEL:-INFO}
# Scheduler configuration (APScheduler)
- SCHEDULER_EXECUTORS=${SCHEDULER_EXECUTORS:-2}
- SCHEDULER_JOB_DEFAULTS_MAX_INSTANCES=${SCHEDULER_JOB_DEFAULTS_MAX_INSTANCES:-3}
# Scanner functionality requires privileged mode and host network for masscan/nmap
privileged: true
network_mode: host
# Health check to ensure web service is running
healthcheck:
test: ["CMD", "python3", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:5000/api/settings/health').read()"]
interval: 60s
timeout: 10s
retries: 3
start_period: 40s
restart: unless-stopped
# Optional: Initialize database on first run
# Run with: docker-compose -f docker-compose-web.yml run --rm init-db
init-db:
build: .
image: sneakyscanner:latest
container_name: sneakyscanner-init-db
entrypoint: ["python3"]
command: ["init_db.py", "--db-url", "sqlite:////app/data/sneakyscanner.db"]
volumes:
- ./data:/app/data
profiles:
- tools

File diff suppressed because it is too large.

docs/CLI_SCANNING.md

@@ -0,0 +1,502 @@
# CLI Scanner Guide
The SneakyScanner CLI provides a standalone scanning tool for quick one-off scans, testing, or CI/CD pipelines without requiring the web application.
## Table of Contents
1. [Quick Start](#quick-start)
2. [Configuration](#configuration)
3. [Scan Performance](#scan-performance)
4. [Output Formats](#output-formats)
5. [Screenshot Capture](#screenshot-capture)
6. [HTML Reports](#html-reports)
7. [Advanced Usage](#advanced-usage)
---
## Quick Start
### Using Docker Compose (Recommended)
```bash
# Build the image
docker compose -f docker-compose-standalone.yml build
# Run a scan
docker compose -f docker-compose-standalone.yml up
# Results saved to ./output/ directory
```
### Using Docker Directly
```bash
# Build the image
docker build -t sneakyscanner .
# Run a scan
docker run --rm --privileged --network host \
-v $(pwd)/configs:/app/configs:ro \
-v $(pwd)/output:/app/output \
sneakyscanner /app/configs/your-config.yaml
```
### Requirements
- Docker
- Linux host (required for `--privileged` and `--network host`)
- Configuration file in `configs/` directory
---
## Configuration
The YAML configuration file defines scan parameters and expectations.
### Basic Configuration
```yaml
title: "My Infrastructure Scan"
sites:
- name: "Web Servers"
ips:
- address: "192.168.1.10"
expected:
ping: true
tcp_ports: [22, 80, 443]
udp_ports: []
services: ["ssh", "http", "https"]
```
### CIDR Range Configuration
```yaml
title: "Network Scan"
sites:
- name: "Production Network"
cidr: "192.168.1.0/24"
expected_ports:
- port: 22
protocol: tcp
service: "ssh"
- port: 80
protocol: tcp
service: "http"
- port: 443
protocol: tcp
service: "https"
ping_expected: true
```
### Configuration Reference
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `title` | string | Yes | Report title |
| `sites` | array | Yes | List of sites to scan |
| `sites[].name` | string | Yes | Site name |
| `sites[].ips` | array | Conditional | List of IP addresses (if not using CIDR) |
| `sites[].cidr` | string | Conditional | CIDR range (if not using IPs) |
| `sites[].ips[].address` | string | Yes | IP address |
| `sites[].ips[].expected.ping` | boolean | No | Expected ping response |
| `sites[].ips[].expected.tcp_ports` | array | No | Expected TCP ports |
| `sites[].ips[].expected.udp_ports` | array | No | Expected UDP ports |
| `sites[].ips[].expected.services` | array | No | Expected service names |
| `sites[].expected_ports` | array | No | Expected ports for CIDR range |
| `sites[].ping_expected` | boolean | No | Expected ping for CIDR range |
See `configs/example-site.yaml` for a complete example.
---
## Scan Performance
SneakyScanner uses a five-phase approach for comprehensive scanning:
1. **Ping Scan** (masscan): ICMP echo detection
- Duration: ~1-2 seconds
- Tests network reachability
2. **TCP Port Discovery** (masscan): Scans all 65535 TCP ports
- Rate: 10,000 packets/second
- Duration: ~13 seconds per 2 IPs
3. **UDP Port Discovery** (masscan): Scans all 65535 UDP ports
- Rate: 10,000 packets/second
- Duration: ~13 seconds per 2 IPs
4. **Service Detection** (nmap): Identifies services on discovered TCP ports
- Intensity level: 5 (balanced)
- Duration: ~20-60 seconds per IP with open ports
- Extracts product names and versions
5. **HTTP/HTTPS Analysis**: Web protocol detection and SSL/TLS analysis
- Screenshot capture (Playwright)
- Certificate extraction (sslyze)
- TLS version testing
- Duration: ~10-20 seconds per web service
**Example**: Scanning 2 IPs with 10 open ports each (including 2-3 web services) typically takes 2-3 minutes total.
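A back-of-the-envelope check of that figure, summing midpoints of the per-phase durations above for 2 IPs and ~3 web services:
```python
# Rough sum of the documented per-phase durations (midpoints where ranges are given).
ping = 2                    # ~1-2 s ICMP sweep
tcp = 13                    # ~13 s TCP discovery for 2 IPs
udp = 13                    # ~13 s UDP discovery for 2 IPs
service_detection = 2 * 40  # ~20-60 s per IP with open ports
web_analysis = 3 * 15       # ~10-20 s per web service

total = ping + tcp + udp + service_detection + web_analysis
print(f"~{total} s (~{total / 60:.1f} min)")  # ~153 s, within the 2-3 minute estimate
```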
### Performance Tuning
Adjust scan rate in the scanner code if needed:
- Default: 10,000 pps (packets per second)
- Increase for faster scans (may cause network congestion)
- Decrease for slower, more reliable scans
---
## Output Formats
After each scan completes, SneakyScanner automatically generates three output formats:
### 1. JSON Report
**Filename**: `scan_report_YYYYMMDD_HHMMSS.json`
Machine-readable scan data with all discovered services, ports, and SSL/TLS information.
**Structure**:
```json
{
"title": "Scan Title",
"scan_time": "2025-01-15T10:30:00Z",
"scan_duration": 95.3,
"config_file": "/app/configs/example-site.yaml",
"sites": [
{
"name": "Site Name",
"ips": [
{
"address": "192.168.1.10",
"expected": {...},
"actual": {
"ping": true,
"tcp_ports": [22, 80, 443],
"udp_ports": [],
"services": [...]
}
}
]
}
]
}
```
### 2. HTML Report
**Filename**: `scan_report_YYYYMMDD_HHMMSS.html`
Human-readable report with dark theme, summary dashboard, and detailed service breakdown.
Features:
- Summary statistics
- Drift alerts (unexpected ports/services)
- Security warnings (expiring certs, weak TLS)
- Site-by-site breakdown
- Expandable service details
- SSL/TLS certificate information
See [HTML Reports](#html-reports) section for details.
### 3. ZIP Archive
**Filename**: `scan_report_YYYYMMDD_HHMMSS.zip`
Contains:
- JSON report
- HTML report
- All screenshots (if web services were found)
Useful for:
- Easy sharing
- Archival
- Compliance documentation
### 4. Screenshots Directory
**Directory**: `scan_report_YYYYMMDD_HHMMSS_screenshots/`
PNG screenshots of all discovered web services:
- Filename format: `{ip}_{port}.png` (e.g., `192_168_1_10_443.png`)
- Viewport size: 1280x720
- Referenced in JSON report under `http_info.screenshot`
---
## Screenshot Capture
SneakyScanner automatically captures webpage screenshots for all discovered HTTP and HTTPS services.
### Automatic Detection
Screenshots are captured for services identified as web services based on:
- Nmap service detection results (http, https, ssl, http-proxy)
- Common web ports (80, 443, 8000, 8006, 8080, 8081, 8443, 8888, 9443)
### Capture Process
For each web service:
1. **Launch Browser**: Headless Chromium (once per scan, reused)
2. **Navigate**: To service URL (HTTP or HTTPS)
3. **Wait**: For network to be idle (up to 15 seconds)
4. **Capture**: Viewport screenshot (1280x720 pixels)
5. **Save**: As PNG file in screenshots directory
### Configuration
Default settings (configured in `src/screenshot_capture.py`):
| Setting | Value |
|---------|-------|
| Viewport size | 1280x720 |
| Timeout | 15 seconds |
| Browser | Chromium (headless) |
| SSL handling | Ignores HTTPS errors |
| User agent | Mozilla/5.0 (Windows NT 10.0; Win64; x64) |
### Error Handling
Screenshots are captured on a best-effort basis:
- Failed screenshots are logged but don't stop the scan
- Services without screenshots omit the `screenshot` field in JSON
- Common errors: timeout, connection refused, invalid SSL
### Disabling Screenshots
To disable screenshot capture, modify `src/screenshot_capture.py` or comment out the screenshot phase in `src/scanner.py`.
---
## HTML Reports
SneakyScanner automatically generates comprehensive HTML reports after each scan.
### Automatic Generation
HTML reports are created after every scan, along with JSON reports and ZIP archives. All outputs share the same timestamp.
### Manual Generation
Generate HTML reports from existing JSON scan data:
```bash
# Generate HTML report (creates report in same directory as JSON)
cd app/
python3 src/report_generator.py ../output/scan_report_20250115_103000.json
# Specify custom output path
python3 src/report_generator.py ../output/scan_report.json /path/to/custom_report.html
```
### Report Features
**Summary Dashboard**:
- **Scan Statistics**: Total IPs, TCP/UDP ports, services, web services, screenshots
- **Drift Alerts**: Unexpected ports, missing services, new services
- **Security Warnings**: Expiring certificates (<30 days), weak TLS (1.0/1.1), self-signed certs, high ports (>10000)
**Site-by-Site Breakdown**:
- Organized by logical site grouping from configuration
- Per-IP sections with status badges (ping, port drift)
- Service tables with expandable details (click to expand)
- Visual badges: green (expected), red (unexpected), yellow (missing/warning)
**Service Details** (expandable):
- Product name, version, extra information, OS type
- HTTP/HTTPS protocol detection
- Screenshot links for web services
- SSL/TLS certificate details:
- Subject, issuer, validity dates, serial number
- Days until expiration (color-coded warnings)
- Subject Alternative Names (SANs)
- TLS version support (1.0, 1.1, 1.2, 1.3) with cipher suites
- Weak TLS and self-signed certificate warnings
**UDP Port Handling**:
- Expected UDP ports: green "Expected" badge
- Unexpected UDP ports: red "Unexpected" badge
- Missing UDP ports: yellow "Missing" badge
- Note: Service detection not available for UDP (nmap limitation)
**Design**:
- Dark theme with slate/grey color scheme
- Responsive layout
- No external dependencies (single HTML file)
- Minimal JavaScript for expand/collapse
- Optimized hover effects
### Report Portability
The HTML report is a standalone file that can be:
- Opened in any web browser (Chrome, Firefox, Safari, Edge)
- Shared via email or file transfer
- Archived for compliance or historical comparison
- Viewed without internet connection
**Note**: Screenshot links use relative paths, so keep the report and screenshot directory together.
---
## Advanced Usage
### Running on Remote Targets
```bash
# Scan remote network via Docker host
docker run --rm --privileged --network host \
-v $(pwd)/configs:/app/configs:ro \
-v $(pwd)/output:/app/output \
sneakyscanner /app/configs/remote-network.yaml
```
**Note**: The Docker host must have network access to the target network.
### CI/CD Integration
```yaml
# Example GitLab CI pipeline
scan-infrastructure:
stage: test
image: docker:latest
services:
- docker:dind
script:
- docker build -t sneakyscanner .
- docker run --rm --privileged --network host \
-v $PWD/configs:/app/configs:ro \
-v $PWD/output:/app/output \
sneakyscanner /app/configs/production.yaml
artifacts:
paths:
- output/
expire_in: 30 days
```
### Batch Scanning
```bash
# Scan multiple configs sequentially
for config in configs/*.yaml; do
docker run --rm --privileged --network host \
-v $(pwd)/configs:/app/configs:ro \
-v $(pwd)/output:/app/output \
sneakyscanner "/app/configs/$(basename $config)"
done
```
### Custom Output Directory
```bash
# Use custom output directory
mkdir -p /path/to/custom/output
docker run --rm --privileged --network host \
-v $(pwd)/configs:/app/configs:ro \
-v /path/to/custom/output:/app/output \
sneakyscanner /app/configs/config.yaml
```
---
## Troubleshooting
### Permission Denied Errors
**Problem**: masscan or nmap fails with permission denied
**Solution**: Ensure the container is run with the `--privileged` flag:
```bash
docker run --rm --privileged --network host ...
```
### No Ports Found
**Problem**: Scan completes but finds no open ports
**Possible Causes**:
- Firewall blocking scans
- Wrong network (ensure `--network host`)
- Target hosts are down
- Incorrect IP addresses in config
**Debug**:
```bash
# Test ping manually
ping 192.168.1.10
# Check Docker network mode
docker inspect <container-id> | grep NetworkMode
```
### Screenshots Failing
**Problem**: Screenshots not being captured
**Possible Causes**:
- Chromium not installed (check Dockerfile)
- Timeout too short (increase in screenshot_capture.py)
- Web service requires authentication
- SSL certificate errors
**Debug**: Check scan logs for screenshot errors
### Scan Takes Too Long
**Problem**: Scan runs for 30+ minutes
**Solutions**:
- Reduce scan rate (edit scanner.py)
- Limit port range (edit scanner.py to scan specific ports)
- Reduce number of IPs in config
- Disable UDP scanning if not needed
---
## Security Considerations
### Privileged Mode
The CLI scanner requires `--privileged` flag for:
- Raw socket access (masscan, nmap)
- ICMP echo requests (ping)
**Security implications**:
- Container has extensive host capabilities
- Only run on trusted networks
- Don't expose to public networks
### Network Mode: Host
The scanner uses `--network host` for:
- Direct network access without NAT
- Raw packet sending
- Accurate service detection
**Security implications**:
- Container shares host network namespace
- Can access all host network interfaces
- Bypasses Docker network isolation
### Best Practices
1. **Only scan authorized networks**
2. **Run on dedicated scan server** (not production)
3. **Limit network access** with firewall rules
4. **Review scan configs** before running
5. **Store results securely** (may contain sensitive data)
---
## Support
- **Deployment Guide**: [docs/DEPLOYMENT.md](DEPLOYMENT.md)
- **API Reference**: [docs/API_REFERENCE.md](API_REFERENCE.md)
- **Roadmap**: [docs/ROADMAP.md](ROADMAP.md)
---
**Last Updated**: 2025-11-17
**Version**: Phase 4 Complete

docs/DEPLOYMENT.md

@@ -8,12 +8,13 @@
4. [Configuration](#configuration)
5. [First-Time Setup](#first-time-setup)
6. [Running the Application](#running-the-application)
7. [Volume Management](#volume-management)
8. [Health Monitoring](#health-monitoring)
9. [Troubleshooting](#troubleshooting)
10. [Security Considerations](#security-considerations)
11. [Upgrading](#upgrading)
12. [Backup and Restore](#backup-and-restore)
7. [Using the Web Interface](#using-the-web-interface)
8. [Volume Management](#volume-management)
9. [Health Monitoring](#health-monitoring)
10. [Troubleshooting](#troubleshooting)
11. [Security Considerations](#security-considerations)
12. [Upgrading](#upgrading)
13. [Backup and Restore](#backup-and-restore)
---
@@ -22,10 +23,12 @@
SneakyScanner is deployed as a Docker container running a Flask web application with an integrated network scanner. The application requires privileged mode and host networking to perform network scans using masscan and nmap.
**Architecture:**
- **Web Application**: Flask app on port 5000
- **Web Application**: Flask app on port 5000 with modern web UI
- **Database**: SQLite (persisted to volume)
- **Background Jobs**: APScheduler for async scan execution
- **Scanner**: masscan, nmap, sslyze, Playwright
- **Config Creator**: Web-based CIDR-to-YAML configuration builder
- **Scheduling**: Cron-based scheduled scans with dashboard management
---
@@ -69,7 +72,7 @@ docker compose version
## Quick Start
For users who want to get started immediately:
For users who want to get started immediately with the web application:
```bash
# 1. Clone the repository
@@ -82,18 +85,32 @@ cp .env.example .env
nano .env
# 3. Build the Docker image
docker compose -f docker-compose-web.yml build
docker compose build
# 4. Initialize the database and set password
docker compose -f docker-compose-web.yml run --rm init-db --password "YourSecurePassword"
docker compose run --rm init-db --password "YourSecurePassword"
# 5. Start the application
docker compose -f docker-compose-web.yml up -d
docker compose up -d
# 6. Access the web interface
# Open browser to: http://localhost:5000
```
**Alternative: Standalone CLI Scanner**
For quick one-off scans without the web interface:
```bash
# Build and run with standalone compose file
docker compose -f docker-compose-standalone.yml build
docker compose -f docker-compose-standalone.yml up
# Results saved to ./output/ directory
```
**Note**: `docker-compose.yml` (web application) is now the default. Use `docker-compose-standalone.yml` for CLI-only scans.
---
## Configuration
@@ -153,7 +170,23 @@ mkdir -p configs data output logs
### Step 2: Configure Scan Targets
Create YAML configuration files for your scan targets:
You can create scan configurations in two ways:
**Option A: Using the Web UI (Recommended - Phase 4 Feature)**
1. Navigate to **Configs** in the web interface
2. Click **"Create New Config"**
3. Use the CIDR-based config creator for quick setup:
- Enter site name
- Enter CIDR range (e.g., `192.168.1.0/24`)
- Select expected ports from dropdowns
- Click **"Generate Config"**
4. Or use the **YAML Editor** for advanced configurations
5. Save and use immediately in scans or schedules
**Option B: Manual YAML Files**
Create YAML configuration files manually in the `configs/` directory:
```bash
# Example configuration
@@ -161,21 +194,28 @@ cat > configs/my-network.yaml <<EOF
title: "My Network Infrastructure"
sites:
- name: "Web Servers"
ips:
- address: "192.168.1.10"
expected:
ping: true
tcp_ports: [80, 443]
udp_ports: []
services: ["http", "https"]
cidr: "192.168.1.0/24" # Scan entire subnet
expected_ports:
- port: 80
protocol: tcp
service: "http"
- port: 443
protocol: tcp
service: "https"
- port: 22
protocol: tcp
service: "ssh"
ping_expected: true
EOF
```
**Note**: Phase 4 introduced a powerful config creator in the web UI that makes it easy to generate configs from CIDR ranges without manually editing YAML.
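As an illustration of the transformation the config creator performs, here is a standalone sketch that writes the same documented config shape from a CIDR and a port list (this is not the application's own code):
```python
# Standalone sketch: emit a CIDR-based scan config in the documented YAML shape.
import yaml

def build_cidr_config(title, site_name, cidr, expected_ports, ping_expected=True):
    return {
        "title": title,
        "sites": [{
            "name": site_name,
            "cidr": cidr,
            "expected_ports": [
                {"port": port, "protocol": proto, "service": svc}
                for port, proto, svc in expected_ports
            ],
            "ping_expected": ping_expected,
        }],
    }

config = build_cidr_config(
    "My Network Infrastructure",
    "Web Servers",
    "192.168.1.0/24",
    [(80, "tcp", "http"), (443, "tcp", "https"), (22, "tcp", "ssh")],
)
with open("configs/my-network.yaml", "w") as fh:
    yaml.safe_dump(config, fh, sort_keys=False)
```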
### Step 3: Build Docker Image
```bash
# Build the image (takes 5-10 minutes on first run)
docker compose -f docker-compose-web.yml build
docker compose -f docker-compose.yml build
# Verify image was created
docker images | grep sneakyscanner
@@ -183,17 +223,20 @@ docker images | grep sneakyscanner
### Step 4: Initialize Database
The database must be initialized before first use:
The database must be initialized before first use. The init-db service uses a profile, so you need to explicitly run it:
```bash
# Initialize database and set application password
docker compose -f docker-compose-web.yml run --rm init-db --password "YourSecurePassword"
docker compose -f docker-compose.yml run --rm init-db --password "YourSecurePassword"
# The init-db command will:
# - Create database schema
# - Run all Alembic migrations
# - Set the application password
# - Create default settings
# - Set the application password (bcrypt hashed)
# - Create default settings with encryption
# Verify database was created
ls -lh data/sneakyscanner.db
```
**Password Requirements:**
@@ -201,6 +244,8 @@ docker compose -f docker-compose-web.yml run --rm init-db --password "YourSecure
- Use a strong, unique password
- Store securely (password manager)
**Note**: The init-db service is defined with `profiles: [tools]` in docker-compose.yml, which means it won't start automatically with `docker compose up`.
### Step 5: Verify Configuration
```bash
@@ -208,7 +253,7 @@ docker compose -f docker-compose-web.yml run --rm init-db --password "YourSecure
ls -lh data/sneakyscanner.db
# Verify Docker Compose configuration
docker compose -f docker-compose-web.yml config
docker compose -f docker-compose.yml config
```
---
@@ -219,10 +264,10 @@ docker compose -f docker-compose-web.yml config
```bash
# Start in detached mode (background)
docker compose -f docker-compose-web.yml up -d
docker compose -f docker-compose.yml up -d
# View logs during startup
docker compose -f docker-compose-web.yml logs -f web
docker compose -f docker-compose.yml logs -f web
# Expected output:
# web_1 | INFO:werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
@@ -231,47 +276,143 @@ docker compose -f docker-compose-web.yml logs -f web
### Accessing the Web Interface
1. Open browser to: **http://localhost:5000**
2. Login with the password you set during database initialization
3. Dashboard will display recent scans and statistics
2. Login with the password you set during database initialization (username is not required - single-user mode)
3. Dashboard will display:
- Recent scans with status indicators
- Summary statistics (total scans, IPs, ports, services)
- Trend charts showing infrastructure changes over time
- Quick actions (run scan, create config, view schedules)
### Stopping the Application
```bash
# Stop containers (preserves data)
docker compose -f docker-compose-web.yml down
docker compose -f docker-compose.yml down
# Stop and remove volumes (WARNING: deletes all data!)
docker compose -f docker-compose-web.yml down -v
docker compose -f docker-compose.yml down -v
```
### Restarting the Application
```bash
# Restart all services
docker compose -f docker-compose-web.yml restart
docker compose -f docker-compose.yml restart
# Restart only the web service
docker compose -f docker-compose-web.yml restart web
docker compose -f docker-compose.yml restart web
```
### Viewing Logs
```bash
# View all logs
docker compose -f docker-compose-web.yml logs
docker compose -f docker-compose.yml logs
# Follow logs in real-time
docker compose -f docker-compose.yml logs -f
# View last 100 lines
docker compose -f docker-compose.yml logs --tail=100
# View logs for specific service
docker compose -f docker-compose.yml logs web
```
---
## Using the Web Interface
### Dashboard Overview
The dashboard provides a central view of your scanning activity:
**Key Sections:**
- **Summary Statistics**: Total scans, IPs discovered, open ports, services detected
- **Recent Scans**: Last 10 scans with status, timestamp, and quick actions
- **Trend Charts**: Port count trends over time using Chart.js
- **Quick Actions**: Buttons to run scans, create configs, manage schedules
### Managing Scan Configurations (Phase 4)
**Creating Configs:**
1. Navigate to **Configs** → **Create New Config**
2. **CIDR Creator Mode**:
- Enter site name (e.g., "Production Servers")
- Enter CIDR range (e.g., `192.168.1.0/24`)
- Select expected TCP/UDP ports from dropdowns
- Click **"Generate Config"** to create YAML
3. **YAML Editor Mode**:
- Switch to editor tab for advanced configurations
- Syntax highlighting with line numbers
- Validate YAML before saving
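For reference, a config generated this way uses the same YAML structure shown in the API examples later in this guide. A minimal sketch (the exact field names for expected ports are assumptions and may differ from the generated output):

```yaml
title: Production Servers
sites:
  - name: Production Servers
    cidr: 192.168.1.0/24
    expected_ports:
      tcp: [22, 443]
      udp: [53]
```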
**Editing Configs:**
1. Navigate to **Configs** → Select config
2. Click **"Edit"** button
3. Make changes in YAML editor
4. Save changes (validates YAML automatically)
**Uploading Configs:**
1. Navigate to **Configs** → **Upload**
2. Select YAML file from your computer
3. File is validated and saved to `configs/` directory
**Downloading Configs:**
- Click **"Download"** button next to any config
- Saves YAML file to your local machine
**Deleting Configs:**
- Click **"Delete"** button
- **Warning**: Cannot delete configs used by active schedules
### Running Scans
**Manual Scans:**
1. Navigate to **Dashboard** or **Scans**
2. Click **"Run Scan Now"**
3. Select configuration file from dropdown
4. Click **"Start Scan"**
5. Scan executes in background (APScheduler)
6. Monitor progress on **Scans** page
**Scheduled Scans:**
1. Navigate to **Schedules** → **Create Schedule**
2. Enter schedule name (e.g., "Daily production scan")
3. Select config file
4. Enter cron expression (e.g., `0 2 * * *` for 2 AM daily)
5. Enable schedule
6. Scans run automatically in background
**Cron Expression Examples:**
- `0 2 * * *` - Daily at 2 AM
- `0 */6 * * *` - Every 6 hours
- `0 0 * * 0` - Weekly on Sunday at midnight
- `0 0 1 * *` - Monthly on 1st at midnight
### Viewing Scan Results
**Scan List:**
- Navigate to **Scans** page
- View all historical scans with filters
- Click scan ID to view details
**Scan Details:**
- Full scan results organized by site
- Discovered IPs, ports, services
- SSL/TLS certificate information
- TLS version support and cipher suites
- Service version detection
- Screenshots of web services
- Download buttons (JSON, HTML, ZIP)
**Trend Analysis:**
- Charts showing port count changes over time
- Identify infrastructure drift
- Track service version updates
---
## Volume Management
### Understanding Volumes
SneakyScanner uses several mounted volumes for data persistence:
| Volume | Container Path | Purpose | Important? |
|--------|----------------|---------|------------|
| `./configs` | `/app/configs` | Scan configuration files (managed via web UI) | Yes |
| `./data` | `/app/data` | SQLite database (contains all scan history) | **Critical** |
| `./output` | `/app/output` | Scan results (JSON, HTML, ZIP, screenshots) | Yes |
| `./logs` | `/app/logs` | Application logs (rotating file handler) | No |
**Note**: As of Phase 4, the `./configs` volume is read-write to support the web-based config creator and editor. The web UI can now create, edit, and delete configuration files directly.
### Backing Up and Restoring Data

To restore from a backup:
```bash
# Stop application
docker compose -f docker-compose.yml down
# Restore database
cp backups/YYYYMMDD/sneakyscanner.db data/
tar -xzf backups/YYYYMMDD/output.tar.gz
# Restart application
docker compose -f docker-compose.yml up -d
```
### Cleaning Up Old Scan Results
**Option A: Using the Web UI (Recommended)**
1. Navigate to **Scans** page
2. Select scans you want to delete
3. Click **"Delete"** button
4. Confirm deletion (removes database records and all associated files)
**Option B: Manual Cleanup**
```bash
# Find old scan results (older than 30 days)
find output/ -type f -name "scan_report_*.json" -mtime +30
# Delete old scan results and screenshots
find output/ -type f -mtime +30 -delete
find output/ -type d -empty -delete
# Note: Manual deletion doesn't remove database records
# Use the web UI or API for complete cleanup
```
**Option C: Using the API**
```bash
# Delete a specific scan (removes DB records + files)
curl -X DELETE http://localhost:5000/api/scans/{scan_id} \
-b cookies.txt
```
---
## API Usage Examples
SneakyScanner provides a comprehensive REST API for automation and integration. All API endpoints except the health check require authentication via session cookies.
### Authentication
```bash
# Login and save session cookie
curl -X POST http://localhost:5000/api/auth/login \
-H "Content-Type: application/json" \
-d '{"password": "YourPassword"}' \
-c cookies.txt
# Logout
curl -X POST http://localhost:5000/api/auth/logout \
-b cookies.txt
```
### Config Management (Phase 4)
```bash
# List all configs
curl http://localhost:5000/api/configs \
-b cookies.txt
# Get specific config
curl http://localhost:5000/api/configs/prod-network.yaml \
-b cookies.txt
# Create new config
curl -X POST http://localhost:5000/api/configs \
-H "Content-Type: application/json" \
-d '{
"filename": "test-network.yaml",
"content": "title: Test Network\nsites:\n - name: Test\n cidr: 10.0.0.0/24"
}' \
-b cookies.txt
# Update config
curl -X PUT http://localhost:5000/api/configs/test-network.yaml \
-H "Content-Type: application/json" \
-d '{
"content": "title: Updated Test Network\nsites:\n - name: Test Site\n cidr: 10.0.0.0/24"
}' \
-b cookies.txt
# Download config
curl http://localhost:5000/api/configs/test-network.yaml/download \
-b cookies.txt -o test-network.yaml
# Delete config
curl -X DELETE http://localhost:5000/api/configs/test-network.yaml \
-b cookies.txt
```
### Scan Management
```bash
# Trigger a scan
curl -X POST http://localhost:5000/api/scans \
-H "Content-Type: application/json" \
-d '{"config_file": "/app/configs/prod-network.yaml"}' \
-b cookies.txt
# List all scans
curl "http://localhost:5000/api/scans?page=1&per_page=20" \
-b cookies.txt
# Get scan details
curl http://localhost:5000/api/scans/123 \
-b cookies.txt
# Check scan status
curl http://localhost:5000/api/scans/123/status \
-b cookies.txt
# Delete scan
curl -X DELETE http://localhost:5000/api/scans/123 \
-b cookies.txt
```
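The same endpoints can be scripted for automation. A minimal Python sketch using the login, scan trigger, and status endpoints above (the key names in the JSON responses are assumptions, so the raw responses are printed):

```python
# Log in, trigger a scan, and poll its status (sketch only).
import time
import requests

BASE = "http://localhost:5000"
session = requests.Session()

# Authenticate; the session cookie is kept on the Session object
session.post(f"{BASE}/api/auth/login", json={"password": "YourPassword"}).raise_for_status()

# Trigger a scan with a config path inside the container
resp = session.post(f"{BASE}/api/scans", json={"config_file": "/app/configs/prod-network.yaml"})
resp.raise_for_status()
scan = resp.json()
print("Scan created:", scan)

# Poll the status endpoint until the scan finishes (field names assumed)
scan_id = scan.get("scan_id") or scan.get("id")
for _ in range(120):
    status = session.get(f"{BASE}/api/scans/{scan_id}/status").json()
    print(status)
    if status.get("status") in ("completed", "failed"):
        break
    time.sleep(10)
```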
### Schedule Management
```bash
# List schedules
curl http://localhost:5000/api/schedules \
-b cookies.txt
# Create schedule
curl -X POST http://localhost:5000/api/schedules \
-H "Content-Type: application/json" \
-d '{
"name": "Daily Production Scan",
"config_file": "/app/configs/prod-network.yaml",
"cron_expression": "0 2 * * *",
"enabled": true
}' \
-b cookies.txt
# Update schedule
curl -X PUT http://localhost:5000/api/schedules/1 \
-H "Content-Type: application/json" \
-d '{"enabled": false}' \
-b cookies.txt
# Manually trigger scheduled scan
curl -X POST http://localhost:5000/api/schedules/1/trigger \
-b cookies.txt
# Delete schedule
curl -X DELETE http://localhost:5000/api/schedules/1 \
-b cookies.txt
```
### Settings Management
```bash
# Get all settings (sanitized - passwords hidden)
curl http://localhost:5000/api/settings \
-b cookies.txt
# Update settings
curl -X PUT http://localhost:5000/api/settings \
-H "Content-Type: application/json" \
-d '{
"retention_days": 90,
"smtp_server": "smtp.gmail.com"
}' \
-b cookies.txt
# Test email configuration
curl -X POST http://localhost:5000/api/settings/test-email \
-b cookies.txt
# Health check (no auth required)
curl http://localhost:5000/api/settings/health
```
### Statistics
```bash
# Get dashboard summary
curl http://localhost:5000/api/stats/summary \
-b cookies.txt
# Get trend data
curl "http://localhost:5000/api/stats/trends?days=30&metric=port_count" \
-b cookies.txt
# Get certificate expiry overview
curl http://localhost:5000/api/stats/certificates \
-b cookies.txt
```
For complete API documentation, see `docs/API_REFERENCE.md`.
---
## Health Monitoring
### Health Check Endpoint
SneakyScanner includes a built-in health check endpoint used by Docker's healthcheck:
```bash
# Check application health
curl http://localhost:5000/api/settings/health
# Expected response (200 OK):
# {"status": "healthy"}
# This endpoint is also used by Docker Compose healthcheck
# Defined in docker-compose.yml:
# - Interval: 60s (check every minute)
# - Timeout: 10s
# - Retries: 3
# - Start period: 40s (grace period for app startup)
```
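The parameters above correspond to a Compose `healthcheck` block along these lines (illustrative only; the exact `test` command in `docker-compose.yml` may differ):

```yaml
healthcheck:
  test: ["CMD-SHELL", "curl -f http://localhost:5000/api/settings/health || exit 1"]
  interval: 60s
  timeout: 10s
  retries: 3
  start_period: 40s
```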
### Docker Health Status and Log Monitoring

Check container health with `docker ps` (the STATUS column shows `healthy` or `unhealthy`) or inspect the details with `docker inspect sneakyscanner-web | grep -A 10 Health`. To monitor the application logs:
```bash
# Watch for errors in logs
docker compose -f docker-compose.yml logs -f | grep ERROR
# Check application log file
tail -f logs/sneakyscanner.log
```

## Troubleshooting

### Web Application Won't Start
```bash
# Check logs for errors
docker compose -f docker-compose.yml logs web
# Common issues:
# 1. Database not initialized - run init-db first
# 2. Database file corrupted - check with:
sqlite3 data/sneakyscanner.db "SELECT 1;" 2>&1
# Remove corrupted database and reinitialize
rm data/sneakyscanner.db
docker compose -f docker-compose.yml run --rm init-db --password "YourPassword"
```
### Scans Fail with "Permission Denied"
```bash
# Verify the container is running privileged
docker inspect sneakyscanner-web | grep Privileged

# Verify host networking is enabled
docker inspect sneakyscanner-web | grep NetworkMode
# Should show: "NetworkMode": "host"
# If not, verify docker-compose.yml has:
# privileged: true
# network_mode: host
```
### Cannot Access Web Interface

```bash
# Check the container is running
docker ps | grep sneakyscanner-web
# Check if Flask is listening
docker compose -f docker-compose.yml exec web netstat -tlnp | grep 5000
# Check firewall rules
sudo ufw status | grep 5000
# Test the health endpoint locally
curl http://localhost:5000/api/settings/health
# Check logs for binding errors
docker compose -f docker-compose.yml logs web | grep -i bind
```
### Background Scans Not Running
```bash
# Check scheduler is initialized
docker compose -f docker-compose.yml logs web | grep -i scheduler
# Check for job execution errors
docker compose -f docker-compose.yml logs web | grep -i "execute_scan"
# Verify APScheduler environment variables
docker compose -f docker-compose.yml exec web env | grep SCHEDULER
# Check for scan job errors
docker compose -f docker-compose.yml logs web | grep -E "(ERROR|Exception|Traceback)"
# Verify scanner executables are available
docker compose -f docker-compose.yml exec web which masscan nmap
```
### Config Files Not Appearing in Web UI
**Problem**: Manually created configs don't show up in web interface
```bash
# Check file permissions (must be readable by web container)
ls -la configs/
# Fix permissions if needed
sudo chown -R 1000:1000 configs/
chmod 644 configs/*.yaml
# Verify YAML syntax is valid
docker compose -f docker-compose.yml exec web python3 -c \
"import yaml; yaml.safe_load(open('/app/configs/your-config.yaml'))"
# Check web logs for parsing errors
docker compose -f docker-compose.yml logs web | grep -i "config"
```
### Health Check Failing
```bash
# Run health check manually
docker compose -f docker-compose.yml exec web \
python3 -c "import urllib.request; print(urllib.request.urlopen('http://localhost:5000/api/settings/health').read())"
# Check if health endpoint exists
curl -v http://localhost:5000/api/settings/health
```
### Production Deployment Checklist
- [ ] Changed `SECRET_KEY` to random value (64+ character hex string)
- [ ] Changed `SNEAKYSCANNER_ENCRYPTION_KEY` to random Fernet key
- [ ] Set strong application password via init-db
- [ ] Set `FLASK_ENV=production`
- [ ] Set `FLASK_DEBUG=false`
- [ ] Configured proper `CORS_ORIGINS` (not `*`)
- [ ] Using HTTPS/TLS (reverse proxy recommended)
- [ ] Restricted network access (firewall rules)
- [ ] Regular backups configured (database + configs)
- [ ] Log monitoring enabled
- [ ] Scheduled scans configured with appropriate frequency
- [ ] Alert rules configured (Phase 5 - coming soon)
- [ ] Webhook/email notifications configured (Phase 5 - coming soon)
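One way to generate the two secret values on this checklist, assuming the encryption key is a standard Fernet key from the `cryptography` package (illustrative snippet, not part of the project):

```python
# Generate candidate values for SECRET_KEY and SNEAKYSCANNER_ENCRYPTION_KEY.
import secrets
from cryptography.fernet import Fernet

print("SECRET_KEY=" + secrets.token_hex(32))                              # 64-character hex string
print("SNEAKYSCANNER_ENCRYPTION_KEY=" + Fernet.generate_key().decode())   # urlsafe base64 Fernet key
```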
### Network Security
### Upgrading the Application
```bash
# 1. Stop the application
docker compose -f docker-compose.yml down
# 2. Backup database
cp data/sneakyscanner.db data/sneakyscanner.db.backup
# 3. Pull the latest code
git pull origin master
# 4. Rebuild Docker image
docker compose -f docker-compose.yml build
# 5. Run database migrations
docker compose -f docker-compose.yml run --rm web alembic upgrade head
# 6. Start application
docker compose -f docker-compose.yml up -d
# 7. Verify upgrade
docker compose -f docker-compose.yml logs -f
curl http://localhost:5000/api/settings/health
```
If upgrade fails:
```bash
# Stop new version
docker compose -f docker-compose.yml down
# Restore database backup
cp data/sneakyscanner.db.backup data/sneakyscanner.db
# Check out the previous version
git checkout <previous-version-tag>
# Rebuild and start
docker compose -f docker-compose.yml build
docker compose -f docker-compose.yml up -d
```
---
### Automated Backups

The following example script backs up the database, recent scan output, and configs:

```bash
#!/bin/bash
BACKUP_DIR="backups/$(date +%Y%m%d_%H%M%S)"
mkdir -p "$BACKUP_DIR"
# Stop application for consistent backup
docker compose -f docker-compose.yml stop web
# Backup database
cp data/sneakyscanner.db "$BACKUP_DIR/"
# Backup recent scan output (last 30 days)
find output/ -type f -mtime -30 -exec cp --parents {} "$BACKUP_DIR/" \;

# Backup configs
cp -r configs/ "$BACKUP_DIR/"
# Restart application
docker compose -f docker-compose.yml start web
echo "Backup complete: $BACKUP_DIR"
```
Schedule the script with cron by editing the crontab (`crontab -e`) and adding an entry such as `0 3 * * 0 /path/to/backup.sh` (weekly, Sunday at 03:00).

### Restoring from a Backup
```bash
# Stop application
docker compose -f docker-compose.yml down
# Restore files
cp backups/YYYYMMDD_HHMMSS/sneakyscanner.db data/
cp -r backups/YYYYMMDD_HHMMSS/configs/* configs/
cp -r backups/YYYYMMDD_HHMMSS/output/* output/
# Start application
docker compose -f docker-compose.yml up -d
```
---
## Support and Further Reading
- **Project README**: `README.md` - General project information
- **API Documentation**: `docs/API_REFERENCE.md` - Complete REST API reference
- **Roadmap**: `docs/ROADMAP.md` - Project roadmap, feature plans, and architecture
- **Issue Tracker**: File bugs and feature requests on GitHub
---
## What's New
### Phase 4 (2025-11-17) - Config Creator ✅
- **CIDR-based Config Creator**: Web UI for generating scan configs from CIDR ranges
- **YAML Editor**: Built-in editor with syntax highlighting (CodeMirror)
- **Config Management UI**: List, view, edit, download, and delete configs via web interface
- **Config Upload**: Direct YAML file upload for advanced users
- **REST API**: 7 new config management endpoints
- **Schedule Protection**: Prevents deleting configs used by active schedules
### Phase 3 (2025-11-14) - Dashboard & Scheduling ✅
- **Dashboard**: Summary stats, recent scans, trend charts
- **Scheduled Scans**: Cron-based scheduling with web UI management
- **Scan History**: Detailed scan results with full data display
- **Chart.js Integration**: Port count trends over time
### Phase 2 (2025-11-14) - Web Application Core ✅
- **REST API**: Complete API for scan management
- **Background Jobs**: APScheduler-based async execution
- **Authentication**: Session-based login system
- **Database Integration**: SQLite with SQLAlchemy ORM
### Coming Soon: Phase 5 - Email, Webhooks & Comparisons
- Email notifications for infrastructure changes
- Webhook integrations (Slack, PagerDuty, custom)
- Scan comparison reports
- Alert rule configuration
---
**Last Updated**: 2025-11-17
**Version**: Phase 4 - Config Creator Complete

# SneakyScanner Roadmap
## Vision & Goals
SneakyScanner is a comprehensive **Flask web application** for infrastructure monitoring and security auditing. The primary interface is the web GUI, with a CLI API client planned for scripting and automation needs.
**Status:** Phase 4 Complete ✅ | Phase 5 Next Up
## Progress Overview
- Dashboard, scan history, scheduled scans, trend charts
- ✅ **Phase 4: Config Creator** - Complete (2025-11-17)
- CIDR-based config creation, YAML editor, config management UI
- 📋 **Phase 5: Email, Webhooks & Comparisons** - Next Up
- Email notifications, alert rules, scan comparison
- 📋 **Phase 6: CLI as API Client** - Planned
- CLI for scripting and automation via API
- 📋 **Phase 7: Advanced Features** - Future
- CVE integration, timeline view, PDF export, enhanced reports
## Recent Bug Fixes
### 2025-11-17: Chart.js Infinite Canvas Growth Fix
**Issue:** Scan detail page (`scan_detail.html`) was experiencing infinite scrolling and page lock-up due to Chart.js canvas growing infinitely (height reaching 22302px+).
**Root Causes:**
1. Duplicate initialization - `loadScan()` was being called twice on page load
2. Multiple Chart.js instances created on the same canvas without destroying previous ones
3. Canvas element without fixed-height container caused infinite resize loop with `responsive: true` and `maintainAspectRatio: false`
**Fixes Applied:**
1. **Consolidated initialization** (`scan_detail.html:172-175`) - Moved `findPreviousScan()` and `loadHistoricalChart()` into `DOMContentLoaded` event listener, removed duplicate call
2. **Chart instance tracking** (`scan_detail.html:169`) - Added `let historyChart = null;` to store chart reference
3. **Destroy old charts** (`scan_detail.html:501-504`) - Added `historyChart.destroy()` before creating new chart instance
4. **Fixed-height container** (`scan_detail.html:136-138`) - Wrapped canvas in `<div style="position: relative; height: 300px;">` to prevent infinite resize loop
**Files Modified:**
- `web/templates/scan_detail.html`
**Status:** ✅ Fixed and tested
**Core Features:**
- **Centralized dashboard** for viewing scan history and trends
---
### Phase 5: Email, Webhooks & Comparisons
**Status:** Next Up
**Priority:** MEDIUM
**Goals:**
- Implement email notification system for infrastructure misconfigurations
- Implement webhook notification system for real-time alerting
- Create scan comparison reports to detect drift
- Add alert rule configuration for unexpected exposure detection
**Core Use Case:**
Monitor infrastructure for misconfigurations that expose unexpected ports/services to the world. When a scan detects an open port that wasn't defined in the YAML config's `expected_ports` list, trigger immediate notifications via email and/or webhooks.
**Planned Features:**
#### 1. Alert Rule Engine
**Purpose:** Automatically detect and classify infrastructure anomalies after each scan.
**Alert Types:**
- `unexpected_port` - Port open but not in config's `expected_ports` list
- `unexpected_service` - Service detected that doesn't match expected service name
- `cert_expiry` - SSL/TLS certificate expiring soon (configurable threshold)
- `ping_failed` - Expected host not responding to ping
- `service_down` - Previously detected service no longer responding
- `service_change` - Service version/product changed between scans
- `weak_tls` - TLS 1.0/1.1 detected or weak cipher suites
- `new_host` - New IP address responding in CIDR range
- `host_disappeared` - Previously seen IP no longer responding
**Alert Severity Levels:**
- `critical` - Unexpected internet-facing service (ports 80/443/22/3389/etc.)
- `warning` - Minor configuration drift or upcoming cert expiry
- `info` - Informational alerts (new host discovered, service version change)
**Alert Rule Configuration:**
```yaml
# Example alert rule configuration (stored in DB)
alert_rules:
- id: 1
rule_type: unexpected_port
enabled: true
severity: critical
email_enabled: true
webhook_enabled: true
filter_conditions:
ports: [22, 80, 443, 3389, 3306, 5432, 27017] # High-risk ports
- id: 2
rule_type: cert_expiry
enabled: true
severity: warning
threshold: 30 # Days before expiry
email_enabled: true
webhook_enabled: false
```
**Implementation:**
- Evaluate alert rules after each scan completes
- Compare current scan results to expected configuration
- Generate alerts and store in `alerts` table
- Trigger notifications based on rule configuration
- Alert deduplication (don't spam for same issue)
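As a rough illustration of how the `unexpected_port` rule could be evaluated (data shapes and function name are assumptions, not the final implementation):

```python
# Sketch: flag open ports that are missing from the config's expected_ports list.
HIGH_RISK_PORTS = {22, 80, 443, 3389, 3306, 5432, 27017}

def evaluate_unexpected_ports(hosts, expected_ports):
    """hosts: [{"ip": "192.168.1.100", "open_tcp": [443, 3306]}]
    expected_ports: {"192.168.1.100": [443]}  (both shapes assumed for illustration)."""
    alerts = []
    for host in hosts:
        allowed = set(expected_ports.get(host["ip"], []))
        for port in host["open_tcp"]:
            if port in allowed:
                continue
            alerts.append({
                "rule_type": "unexpected_port",
                "severity": "critical" if port in HIGH_RISK_PORTS else "warning",
                "ip_address": host["ip"],
                "port": port,
                "message": f"Unexpected port {port}/tcp open on {host['ip']}",
            })
    return alerts
```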
#### 2. Email Notifications
**Purpose:** Send detailed email alerts when infrastructure misconfigurations are detected.
**SMTP Configuration (via Settings API):**
```json
{
"smtp_server": "smtp.gmail.com",
"smtp_port": 587,
"smtp_use_tls": true,
"smtp_username": "alerts@example.com",
"smtp_password": "encrypted_password",
"smtp_from_email": "SneakyScanner <alerts@example.com>",
"smtp_to_emails": ["admin@example.com", "security@example.com"],
"email_alerts_enabled": true
}
```
**Email Template (Jinja2 HTML):**
```html
Subject: [SneakyScanner Alert] Unexpected Port Detected - {{ ip_address }}:{{ port }}
Body:
===============================================
SneakyScanner Security Alert
===============================================
Alert Type: {{ alert_type }}
Severity: {{ severity }}
Scan ID: {{ scan_id }}
Timestamp: {{ timestamp }}
Issue Detected:
{{ message }}
Details:
- IP Address: {{ ip_address }}
- Port: {{ port }}/{{ protocol }}
- Service: {{ service_name }} ({{ product }} {{ version }})
- Expected: No (not in expected_ports list)
Recommended Actions:
1. Verify if this service should be exposed
2. Check firewall rules for {{ ip_address }}
3. Review service configuration
4. Update scan config if this is intentional
View Full Scan Results:
{{ web_url }}/scans/{{ scan_id }}
===============================================
```
**Email Features:**
- HTML email with styled alert box (Bootstrap-based)
- Plain-text fallback for compatibility
- Alert summary with actionable recommendations
- Direct link to scan detail page
- Configurable recipients (multiple emails)
- Test email functionality in Settings UI
- Email delivery tracking (email_sent, email_sent_at in alerts table)
- Rate limiting to prevent email flood
**Email API Endpoints:**
- `POST /api/settings/email` - Configure SMTP settings
- `POST /api/settings/email/test` - Send test email
- `GET /api/alerts?email_sent=true` - Get alerts with email status
#### 3. Webhook Notifications
**Purpose:** Real-time HTTP POST notifications for integration with external systems (Slack, PagerDuty, custom dashboards, SIEM tools).
**Webhook Configuration (via Settings API):**
```json
{
"webhook_enabled": true,
"webhook_urls": [
{
"id": 1,
"name": "Slack Security Channel",
"url": "https://hooks.slack.com/services/XXX/YYY/ZZZ",
"enabled": true,
"auth_type": "none",
"custom_headers": {},
"alert_types": ["unexpected_port", "unexpected_service", "weak_tls"],
"severity_filter": ["critical", "warning"]
},
{
"id": 2,
"name": "PagerDuty",
"url": "https://events.pagerduty.com/v2/enqueue",
"enabled": true,
"auth_type": "bearer",
"auth_token": "encrypted_token",
"custom_headers": {
"Content-Type": "application/json"
},
"alert_types": ["unexpected_port"],
"severity_filter": ["critical"]
}
]
}
```
**Webhook Payload Format (JSON):**
```json
{
"event_type": "scan_alert",
"alert_id": 42,
"alert_type": "unexpected_port",
"severity": "critical",
"timestamp": "2025-11-17T14:23:45Z",
"scan": {
"scan_id": 123,
"title": "Production Network Scan",
"timestamp": "2025-11-17T14:15:00Z",
"config_file": "prod_config.yaml",
"triggered_by": "scheduled"
},
"alert_details": {
"message": "Unexpected port 3306 (MySQL) exposed on 192.168.1.100",
"ip_address": "192.168.1.100",
"port": 3306,
"protocol": "tcp",
"state": "open",
"service": {
"name": "mysql",
"product": "MySQL",
"version": "8.0.32"
},
"expected": false,
"site_name": "Production Servers"
},
"recommended_actions": [
"Verify if MySQL should be exposed externally",
"Check firewall rules for 192.168.1.100",
"Review MySQL bind-address configuration"
],
"web_url": "https://sneakyscanner.local/scans/123"
}
```
**Webhook Features:**
- Multiple webhook URLs with independent configuration
- Per-webhook filtering by alert type and severity
- Custom headers support (for API keys, auth tokens)
- Authentication methods:
- `none` - No authentication
- `bearer` - Bearer token in Authorization header
- `basic` - Basic authentication
- `custom` - Custom header-based auth
- Retry logic with exponential backoff (3 attempts)
- Webhook delivery tracking (webhook_sent, webhook_sent_at, webhook_response_code)
- Test webhook functionality in Settings UI
- Timeout configuration (default 10 seconds)
- Webhook delivery history and logs
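Delivery with retry might look roughly like this (function name and structure are illustrative, not the project's code):

```python
# Sketch: POST a webhook payload with a 10s timeout and exponential backoff over 3 attempts.
import time
import requests

def deliver_webhook(url, payload, headers=None, attempts=3, timeout=10):
    delay = 1
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.post(url, json=payload, headers=headers or {}, timeout=timeout)
            if resp.status_code < 400:
                return resp.status_code  # record as webhook_response_code
        except requests.RequestException:
            pass  # connection error or timeout; fall through and retry
        if attempt < attempts:
            time.sleep(delay)
            delay *= 2  # back off: 1s, then 2s
    return None  # delivery failed after all attempts
```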
**Webhook API Endpoints:**
- `POST /api/webhooks` - Create webhook configuration
- `GET /api/webhooks` - List all webhooks
- `PUT /api/webhooks/{id}` - Update webhook configuration
- `DELETE /api/webhooks/{id}` - Delete webhook
- `POST /api/webhooks/{id}/test` - Send test webhook
- `GET /api/webhooks/{id}/history` - Get delivery history
**Slack Integration Example:**
Transform webhook payload to Slack message format:
```json
{
"text": "SneakyScanner Alert: Unexpected Port Detected",
"attachments": [
{
"color": "danger",
"fields": [
{"title": "IP Address", "value": "192.168.1.100", "short": true},
{"title": "Port", "value": "3306/tcp", "short": true},
{"title": "Service", "value": "MySQL 8.0.32", "short": true},
{"title": "Severity", "value": "CRITICAL", "short": true}
],
"footer": "SneakyScanner",
"ts": 1700234625
}
]
}
```
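A small transformer could map the generic payload shown earlier to this Slack format; the field names come from the two JSON examples above, everything else is illustrative:

```python
# Sketch: convert the generic webhook payload into a Slack message.
def to_slack_message(payload):
    details = payload["alert_details"]
    service = details.get("service", {})
    return {
        "text": f"SneakyScanner Alert: {payload['alert_type']} ({payload['severity']})",
        "attachments": [{
            "color": "danger" if payload["severity"] == "critical" else "warning",
            "fields": [
                {"title": "IP Address", "value": details["ip_address"], "short": True},
                {"title": "Port", "value": f"{details['port']}/{details['protocol']}", "short": True},
                {"title": "Service", "value": f"{service.get('product', '')} {service.get('version', '')}".strip(), "short": True},
                {"title": "Severity", "value": payload["severity"].upper(), "short": True},
            ],
            "footer": "SneakyScanner",
        }],
    }
```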
#### 4. Alert Management UI
**Purpose:** Web interface for configuring alert rules, viewing alert history, and managing notifications.
**Pages:**
- `/alerts` - Alert history with filtering and search
- `/alerts/rules` - Alert rule configuration
- `/settings/email` - Email notification settings
- `/settings/webhooks` - Webhook configuration
**Alert History Features:**
- Filter by alert type, severity, date range, IP address
- Search by message content
- Bulk acknowledge/dismiss alerts
- Export alerts to CSV
- Alert detail modal with full context
**Alert Rule UI Features:**
- Enable/disable rules individually
- Configure severity levels
- Set thresholds (e.g., cert expiry days)
- Toggle email/webhook per rule
- Test rules against recent scans
#### 5. Scan Comparison
**Purpose:** Detect infrastructure drift by comparing two scans and highlighting changes.
**Comparison API:**
- `GET /api/scans/{id1}/compare/{id2}` - Compare two scans
**Comparison Response:**
```json
{
"scan1": {"id": 100, "timestamp": "2025-11-10T10:00:00Z"},
"scan2": {"id": 123, "timestamp": "2025-11-17T14:15:00Z"},
"summary": {
"new_ports": 2,
"removed_ports": 0,
"service_changes": 1,
"cert_changes": 0,
"new_hosts": 1,
"removed_hosts": 0
},
"differences": {
"new_ports": [
{"ip": "192.168.1.100", "port": 3306, "service": "mysql"}
],
"removed_ports": [],
"service_changes": [
{
"ip": "192.168.1.50",
"port": 22,
"old": "OpenSSH 8.2",
"new": "OpenSSH 8.9"
}
],
"new_hosts": [
{"ip": "192.168.1.200", "site": "Production Servers"}
]
}
}
```
**Comparison UI Features:**
- Side-by-side comparison view
- Color-coded differences (green=new, red=removed, yellow=changed)
- Filter by change type
- Export comparison report to PDF/HTML
- "Compare with previous scan" button on scan detail page
---
**Phase 5 Implementation Plan:**
1. **Week 1: Alert Rule Engine**
- Implement alert evaluation logic after scan completion
- Create `alerts` table population
- Add alert rule CRUD API
- Unit tests for alert detection
2. **Week 2: Email Notifications**
- SMTP integration with Flask-Mail
- Jinja2 email templates (HTML + plain text)
- Settings API for email configuration
- Test email functionality
- Email triggers for critical events
- Email delivery tracking
3. **Week 3: Webhook System**
- Webhook configuration API
- HTTP POST delivery with retry logic
- Webhook template system for different platforms
- Test webhook functionality
- Delivery tracking and logging
4. **Week 4: Alert UI & Scan Comparison**
- Alert history page with filtering
- Alert rule management UI
- Email/webhook settings pages
- Scan comparison API and UI
- Integration testing
**Success Criteria:**
- Alert triggered within 30 seconds of scan completion
- Email delivered successfully to configured recipients
- Webhook POST delivered with retry on failure
- Scan comparison highlights all infrastructure changes
- Zero false positives for expected ports/services
- Alert deduplication prevents notification spam
---
- Gunicorn WSGI server
- Optional Nginx reverse proxy
## Prioritized Feature List
### Completed ✅ (Phases 1-4)
1. **Database foundation** (SQLite3 + SQLAlchemy)
2. **Flask web app core** (REST API, authentication)
3. **Dashboard with scan history** (list, detail, delete)
4. **Trend charts** (Chart.js - port counts over time)
5. **Scheduled scans** (APScheduler + cron expressions)
6. **Config creator** (CIDR-based, YAML editor)
### Next Up (Phase 5)
7. **Email notifications** (SMTP integration)
8. **Alert rules** (cert expiry, unexpected ports, etc.)
9. **Scan comparison reports** (diff view)
### Planned (Phase 6-7)
10. **CLI as API client** (token auth, scripting)
11. **Sortable/filterable tables** (DataTables.js)
12. **PDF export** (WeasyPrint)
13. **Vulnerability detection** (CVE integration)
14. **Timeline view** (visual scan history)
### Future/Deferred
15. **Multi-user support** (if requirements change)
16. **Slack/webhook integrations**
17. **Prometheus metrics**
18. **Advanced charts** (heatmaps, forecasts)
## Development Workflow
### Iteration Cycle
- **CLAUDE.md** - Developer documentation (architecture, code references)
- **API.md** - API documentation (OpenAPI/Swagger in Phase 4)
## Success Metrics
### Phase 1 Success ✅ ACHIEVED
- [x] Database creates successfully with all 11 tables
- [x] Settings can be stored/retrieved with encryption
- [x] Flask app starts without errors
- [x] API blueprints load correctly
- [x] All Python modules have valid syntax
- [x] Docker deployment configured
### Phase 2 Success ✅ ACHIEVED
- [x] Database stores scan results correctly
- [x] REST API functional with all endpoints
- [x] Background scans execute asynchronously
- [x] Authentication protects all routes
- [x] Web UI is intuitive and responsive
- [x] 100 tests passing with comprehensive coverage
- [x] Docker deployment production-ready
### Phase 3 Success ✅ ACHIEVED
- [x] Dashboard displays scans and trends with charts
- [x] Scheduled scans execute automatically
- [x] Historical trend charts show scan history
- [x] Real-time progress updates for running scans
### Phase 4 Success ✅ ACHIEVED
- [x] Users can create configs from CIDR ranges via web UI
- [x] YAML editor with syntax highlighting works correctly
- [x] Config management UI provides list/view/edit/download/delete operations
- [x] Direct YAML upload works for advanced users
- [x] Configs immediately usable in scan triggers and schedules
- [x] Delete protection prevents removal of configs used by schedules
- [x] All tests passing (25+ unit and integration tests)
### Phase 5 Success (Email, Webhooks & Comparisons)
- [ ] Email notifications sent for critical alerts
- [ ] Webhook notifications delivered to configured endpoints
- [ ] Comparison reports show meaningful diffs
- [ ] Settings UI allows SMTP configuration without editing files
### Phase 6 Success (CLI as API Client)
- [ ] CLI can trigger scans via API
- [ ] API tokens work for authentication
- [ ] Standalone CLI mode still functional
### Phase 7 Success (Advanced Features)
- [ ] CVE integration provides actionable vulnerability data
- [ ] Timeline view helps track infrastructure changes
- [ ] PDF exports are shareable and professional
## Open Questions
### Technical Decisions
- **Flask vs. FastAPI?** - Sticking with Flask for simplicity, but FastAPI offers async and auto-docs
- **APScheduler vs. Celery?** - APScheduler for simplicity (no Redis/RabbitMQ needed), but Celery scales better
- **Bootstrap vs. Tailwind?** - Bootstrap for speed (pre-built components), Tailwind for customization
- **Chart.js vs. Plotly?** - Chart.js was chosen in Phase 3 for its light footprint; Plotly remains an option if more interactivity is needed
### Product Questions
- **Should we support multiple configs per schedule?** - Start with 1:1, add later if needed
- **How many scans to keep in DB?** - Add retention setting (default: keep all)
- **Support for multi-tenancy?** - Not in scope (single-user), but database schema allows future expansion
- **Mobile app?** - Out of scope, but responsive web UI covers basics
## Resources & References
### Documentation