Compare commits

69 Commits

Author SHA1 Message Date
4b197e0b3d Merge pull request 'beta' (#10) from beta into master
Reviewed-on: #10
2025-11-25 20:49:46 +00:00
30f0987a99 Merge pull request 'nightly' (#9) from nightly into beta
Reviewed-on: #9
2025-11-25 20:49:25 +00:00
9e2fc348b7 Merge branch 'bug/long-scans-break' into nightly 2025-11-25 14:48:00 -06:00
847e05abbe Changes Made
1. app/web/utils/validators.py - Added 'finalizing' to valid_statuses list
2. app/web/models.py - Updated status field comment to document all valid statuses
3. app/web/jobs/scan_job.py
   - Added transition to 'finalizing' status before output file generation
   - Sets current_phase = 'generating_outputs' during this phase
   - Wrapped output generation in try-except with proper error handling
   - If output generation fails, the scan is marked 'completed' with a warning message (scan data is still valid)

4. app/web/api/scans.py
   - Added _recover_orphaned_scan() helper function for smart recovery
   - Modified stop_running_scan() to:
     - Allow stopping scans with status 'running' OR 'finalizing'
     - When the scanner is not in the registry, perform smart recovery instead of returning 404
     - Smart recovery checks for output files and marks the scan 'completed' if found, 'cancelled' if not

5. app/web/services/scan_service.py
   - Enhanced cleanup_orphaned_scans() with smart recovery logic
   - Now finds scans in both 'running' and 'finalizing' status
   - Returns dict with stats: {'recovered': N, 'failed': N, 'total': N}

6. app/web/app.py - Updated caller to handle the new dict return type from cleanup_orphaned_scans()

Expected Behavior Now

1. Normal scan flow: running → finalizing → completed
2. Stop on active scan: Sends cancel signal, becomes 'cancelled'
3. Stop on orphaned scan with files: Smart recovery → 'completed'
4. Stop on orphaned scan without files: → 'cancelled'
5. App restart with orphans: Startup cleanup uses smart recovery (see the sketch below)
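A minimal sketch of the smart-recovery helper described above, based only on this commit message; the Scan fields (json_path, status, error_message) and the session handling are assumptions, not the actual implementation:

```python
from pathlib import Path

def _recover_orphaned_scan(scan, session):
    """Recover a scan whose worker process is gone (hypothetical sketch).

    If the output files were written, the scan effectively finished,
    so mark it 'completed'; otherwise mark it 'cancelled'.
    """
    # json_path is an assumed column holding the saved report path
    if scan.json_path and Path(scan.json_path).exists():
        scan.status = 'completed'
        scan.error_message = 'Recovered orphaned scan; output files found'
    else:
        scan.status = 'cancelled'
        scan.error_message = 'Recovered orphaned scan; no output files'
    session.commit()
    return scan.status
```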
2025-11-25 14:47:36 -06:00
07c2bcfd11 Merge branch 'beta' 2025-11-24 12:54:58 -06:00
a560bae800 Merge branch 'nightly' into beta 2025-11-24 12:54:33 -06:00
56828e4184 Merge branch 'feat/fix-cron-schedules' into nightly 2025-11-24 12:53:44 -06:00
5e3a70f837 Fix schedule management and update documentation for database-backed configs
This commit addresses multiple issues with schedule management and updates
  documentation to reflect the transition from YAML-based to database-backed
  configuration system.

  **Documentation Updates:**
  - Update DEPLOYMENT.md to remove all references to YAML config files
  - Document that all configurations are now stored in SQLite database
  - Update API examples to use config IDs instead of YAML filenames
  - Remove configs directory from backup/restore procedures
  - Update volume management section to reflect database-only storage

  **Cron Expression Handling:**
  - Add comprehensive documentation for APScheduler cron format conversion
  - Document that from_crontab() accepts standard format (Sunday=0) and converts automatically
  - Add validate_cron_expression() helper method with detailed error messages
  - Include helpful hints for day-of-week field errors in validation
  - Fix all deprecated datetime.utcnow() calls, replace with datetime.now(timezone.utc)

  **Timezone-Aware DateTime Fixes:**
  - Fix "can't subtract offset-naive and offset-aware datetimes" error
  - Add timezone awareness to croniter.get_next() return values
  - Make _get_relative_time() defensive to handle both naive and aware datetimes
  - Ensure all datetime comparisons use timezone-aware objects

  **Schedule Edit UI Fixes:**
  - Fix JavaScript error "Cannot set properties of null (setting 'value')"
  - Change reference from non-existent 'config-id' to correct 'config-file' element
  - Add config_name field to schedule API responses for better UX
  - Eagerly load Schedule.config relationship using joinedload()
  - Fix AttributeError: use schedule.config.title instead of .name
  - Display config title and ID in schedule edit form

  **Technical Details:**
  - app/web/services/schedule_service.py: 6 datetime.utcnow() fixes, validation enhancements
  - app/web/services/scheduler_service.py: Documentation, validation, timezone fixes
  - app/web/templates/schedule_edit.html: JavaScript element reference fix
  - docs/DEPLOYMENT.md: Complete rewrite of config management sections

  Fixes scheduling for Sunday at midnight (cron: 0 0 * * 0)
  Fixes schedule edit page JavaScript errors
  Improves user experience with config title display
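A sketch of the cron validation and timezone-aware next-run computation these fixes imply, assuming croniter as the underlying parser (the helper bodies are illustrative, not the project's code):

```python
from datetime import datetime, timezone
from croniter import croniter

def validate_cron_expression(expr: str) -> None:
    """Raise ValueError with a helpful hint for bad expressions (sketch)."""
    if not croniter.is_valid(expr):
        raise ValueError(
            f"Invalid cron expression {expr!r}. Hint: the day-of-week "
            "field uses standard numbering (Sunday=0), e.g. "
            "'0 0 * * 0' means Sunday at midnight."
        )

def next_run(expr: str) -> datetime:
    # Start from an aware datetime so later comparisons never mix
    # offset-naive and offset-aware values.
    now = datetime.now(timezone.utc)
    nxt = croniter(expr, now).get_next(datetime)
    if nxt.tzinfo is None:  # defensive: handle naive return values
        nxt = nxt.replace(tzinfo=timezone.utc)
    return nxt
```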
2025-11-24 12:53:06 -06:00
451c7e92ff Merge pull request 'Merging beta into master' (#8) from beta into master
Reviewed-on: #8
2025-11-21 22:07:06 +00:00
8b89fd506d Merge pull request 'nightly merge into beta' (#7) from nightly into beta
Reviewed-on: #7
2025-11-21 22:05:43 +00:00
f24bd11dfd Add unique IP count and duplicate detection to sites page
The sites page previously showed a total IP count that included duplicates
across multiple sites, inflating the numbers. It now displays the unique
IP count as the primary metric, with the duplicate count shown when present.

- Add get_global_ip_stats() method to SiteService for unique/duplicate counts
- Update /api/sites?all=true endpoint to include IP statistics
- Update sites.html to display unique IPs with optional duplicate indicator
- Update API documentation with new response fields
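A minimal sketch of what get_global_ip_stats() might compute; the SiteIP model name and import path are assumptions based on the surrounding commits:

```python
from sqlalchemy import func

from web.models import SiteIP  # assumed model for per-site IP rows

def get_global_ip_stats(session):
    """Count unique vs. duplicate IP addresses across all sites (sketch)."""
    total = session.query(func.count(SiteIP.address)).scalar() or 0
    unique = session.query(func.count(func.distinct(SiteIP.address))).scalar() or 0
    return {
        'total_ips': total,
        'unique_ips': unique,             # primary metric shown in the UI
        'duplicate_ips': total - unique,  # IPs that appear on more than one site
    }
```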

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-21 16:03:53 -06:00
9bd2f67150 Add quick button to mark unexpected ports as expected
Allow users to add ports to the expected list directly from the scan results
page instead of navigating through the site config pages. The button appears
next to unexpected ports and updates the site IP configuration via the API.

- Add site_id and site_ip_id to scan result data for linking to config
- Add "Mark Expected" button next to unexpected ports in scan detail view
- Implement markPortExpected() JS function to update site IP settings
2025-11-21 15:40:37 -06:00
3058c69c39 Add scan cancellation feature
- Replace subprocess.run() with Popen for cancellable processes
- Add cancel() method to SneakyScanner with process termination
- Track running scanners in registry for stop signal delivery
- Handle ScanCancelledError to set scan status to 'cancelled'
- Add POST /api/scans/<id>/stop endpoint
- Add 'cancelled' as valid scan status
- Add Stop button to scans list and detail views
- Show cancelled status with warning badge in UI
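A sketch of the cancellable-subprocess pattern this commit describes: keeping a Popen handle so a stop request can terminate masscan/nmap mid-run (class and registry details simplified):

```python
import subprocess

class ScanCancelledError(Exception):
    """Raised when a scan is stopped by the user."""

class CancellableScanner:
    def __init__(self):
        self.proc = None
        self.cancelled = False

    def run(self, cmd):
        # Popen (not subprocess.run) keeps a handle we can terminate.
        self.proc = subprocess.Popen(
            cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE
        )
        out, _ = self.proc.communicate()
        if self.cancelled:
            raise ScanCancelledError("scan cancelled by user")
        return out

    def cancel(self):
        # Called via the scanner registry when the stop endpoint fires.
        self.cancelled = True
        if self.proc and self.proc.poll() is None:
            self.proc.terminate()
```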
2025-11-21 14:17:26 -06:00
04dc238aea Add configurable UDP scanning and numeric IP sorting
- Add UDP_SCAN_ENABLED and UDP_PORTS environment variables to control UDP scanning
- UDP scanning disabled by default for faster scans
- Support port ranges (100-200), lists (53,67,68), or mixed formats
- Sort IPs numerically by octets in site management modal
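The supported UDP_PORTS formats suggest a parser along these lines (a sketch, not the project's code):

```python
def parse_port_spec(spec: str) -> list[int]:
    """Parse '53,67,68', '100-200', or mixed '53,100-110' into ports."""
    ports: set[int] = set()
    for part in spec.split(','):
        part = part.strip()
        if '-' in part:
            lo, hi = part.split('-', 1)
            ports.update(range(int(lo), int(hi) + 1))
        elif part:
            ports.add(int(part))
    return sorted(ports)

# parse_port_spec("53,67-68") -> [53, 67, 68]
```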
2025-11-21 13:33:38 -06:00
c592000c96 Add real-time scan progress tracking
- Add ScanProgress model and progress fields to Scan model
- Implement progress callback in scanner to report phase completion
- Update scan_job to write per-IP results to database during execution
- Add /api/scans/<id>/progress endpoint for progress polling
- Add progress section to scan detail page with live updates
- Progress table shows current phase, completion bar, and per-IP results
- Poll every 3 seconds during active scans
- Sort IPs numerically for proper ordering
- Add database migration for new tables/columns
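A sketch of the progress endpoint the UI polls; the route shape matches the commit, while the payload fields are assumptions:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/scans/<int:scan_id>/progress')
def scan_progress(scan_id):
    # The real handler would read the ScanProgress rows for this scan.
    return jsonify({
        'scan_id': scan_id,
        'current_phase': 'service_detection',  # illustrative value
        'completed_ips': 3,
        'total_ips': 10,
        'percent': 30,
    })

# The scan detail page polls this every 3 seconds while a scan is active.
```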
2025-11-21 12:49:27 -06:00
4c6b4bf35d Add IP address search feature with global search box
- Add API endpoint GET /api/scans/by-ip/{ip_address} to retrieve
  last 10 scans containing a specific IP
- Add ScanService.get_scans_by_ip() method with ScanIP join query
- Add search box to global navigation header
- Create dedicated search results page at /search/ip
- Update API documentation with new endpoint
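A sketch of the ScanIP join behind get_scans_by_ip(); the model and column names are assumed from the commit text:

```python
from web.models import Scan, ScanIP  # assumed import path

def get_scans_by_ip(session, ip_address, limit=10):
    """Return the last `limit` scans containing the given IP (sketch)."""
    return (
        session.query(Scan)
        .join(ScanIP, ScanIP.scan_id == Scan.id)
        .filter(ScanIP.address == ip_address)
        .order_by(Scan.created_at.desc())
        .limit(limit)
        .all()
    )
```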
2025-11-21 11:29:03 -06:00
3adb51ece2 Add configurable nmap host timeout setting
Move the nmap host timeout from a hardcoded 5m to a configurable setting
in app/web/config.py, with a default of 2m for faster scans.
2025-11-21 11:11:37 -06:00
c4cbbee280 Bump version to 1.0.0-beta 2025-11-20 14:43:04 -06:00
889e1eaac3 updating release.sh to use correct branch names 2025-11-20 14:42:44 -06:00
a682e5233c Reorganize roadmap with versioned planned features
Condensed completed phases into concise summaries and categorized
planned features into version milestones:
- v1.1.0: Communication & Automation (CLI, Email, CSV)
- v1.2.0: Reporting & Analysis (Scan Comparison, Enhanced Reports)
- v1.3.0: Visualization (Timeline View, Advanced Charts)
- v2.0.0: Security Intelligence (Vulnerability Detection)
2025-11-20 14:39:14 -06:00
7a14f1602b updating docs 2025-11-20 14:00:10 -06:00
949bccf644 updating readme to align with new config layout 2025-11-20 13:05:41 -06:00
801ddc8d81 removing standalone docker compose, no longer using that; API usage is fully implemented now 2025-11-20 12:59:27 -06:00
db5c828b5f adding release script 2025-11-20 12:34:15 -06:00
a044c19a46 Merge branch 'beta' 2025-11-20 11:40:27 -06:00
a5e2b43944 Merge branch 'master' into nightly 2025-11-20 11:39:39 -06:00
3219f8a861 Merge branch 'master' into beta 2025-11-20 11:39:07 -06:00
480065ed14 Fix screenshot directory deletion and update SSL dependencies
Save screenshot_dir to database when scans complete so the directory
is properly cleaned up on scan deletion. Previously the field was never
populated, causing screenshots to remain after deleting scans.

Update sslyze to 6.2.0 and cryptography to 46.0.0 to fix certificate
handling issues with negative serial numbers (RFC 5280 compliance).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 11:35:37 -06:00
73a3b95834 Add certificate details modal and fix SSL/TLS data processing
- Add certificate details modal to scan detail page with subject, issuer,
  validity dates, serial number, self-signed indicator, SANs, and TLS
  version support with expandable cipher suites
- Fix bug where certificate data was not being saved to database due to
  incorrect path lookup (was checking http_info['certificate'] instead of
  http_info['ssl_tls']['certificate'])
- Update requirements: add sslyze 6.0.0 and upgrade cryptography to >=42.0.0
  to fix 'No module named cryptography.x509.verification' error
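The path-lookup bug reduces to reading the certificate from the wrong key; a minimal illustration using the JSON report structure shown later in this diff:

```python
http_info = {'ssl_tls': {'certificate': {'subject': 'CN=example.com'}}}

# Buggy lookup: the certificate is nested under ssl_tls, not top-level
assert http_info.get('certificate') is None

# Fixed lookup matching what the scanner actually emits
cert = http_info.get('ssl_tls', {}).get('certificate')
assert cert['subject'] == 'CN=example.com'
```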
2025-11-20 11:35:37 -06:00
8d8e53c903 Add screenshot viewing button to scan detail page
Display screenshot button in port table when a service has a captured
screenshot. Button opens screenshot in new tab with correct path
including the screenshot directory.
2025-11-20 11:35:37 -06:00
12d5aff7a5 Add help page with user documentation
Create comprehensive help page covering:
- Getting started workflow
- Sites and IP management
- Scan configuration
- Running scans manually
- Scheduling automated scans
- Scan comparisons
- Alerts and alert rules
- Webhook configuration

Add Help link with icon to navigation bar.
2025-11-20 11:35:37 -06:00
cc3758f92d Add acknowledge all alerts feature
Add POST /api/alerts/acknowledge-all endpoint to bulk acknowledge all
unacknowledged alerts. Add "Ack All" button to alerts page header with
confirmation dialog for quick dismissal of all pending alerts.
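The bulk acknowledge likely reduces to a single UPDATE; a sketch assuming a Flask `app` and `session` in scope and an Alert model with an acknowledged flag:

```python
from flask import jsonify

from web.models import Alert  # assumed model with an `acknowledged` column

@app.route('/api/alerts/acknowledge-all', methods=['POST'])
def acknowledge_all_alerts():
    updated = (
        session.query(Alert)
        .filter(Alert.acknowledged.is_(False))
        .update({'acknowledged': True}, synchronize_session=False)
    )
    session.commit()
    return jsonify({'acknowledged': updated})
```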
2025-11-20 11:35:37 -06:00
9804f9c032 Add route to serve scan output files
Output files (JSON, HTML, ZIP) are stored outside the static directory,
so download links in scan_detail.html were broken. This adds a /output/
route that serves files from the output directory using send_from_directory
for secure file access. Route requires authentication.
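A sketch of that route; send_from_directory rejects paths that escape the base directory, which is what makes it safe for user-supplied filenames (the auth decorator is omitted here):

```python
from flask import Flask, send_from_directory

app = Flask(__name__)
OUTPUT_DIR = '/app/output'  # scan results volume

@app.route('/output/<path:filename>')
def serve_output(filename):
    # send_from_directory normalizes the path and refuses traversal
    # outside OUTPUT_DIR; the real route also requires a session login.
    return send_from_directory(OUTPUT_DIR, filename)
```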
2025-11-20 11:35:37 -06:00
e3b647521e Fix scan output file paths and improve notification system
- Save JSON/HTML/ZIP paths to database when scans complete
- Remove orphaned scan-config-id reference causing JS errors
- Add showAlert function to scan_detail.html and scans.html
- Increase notification z-index to 9999 for modal visibility
- Replace inline alert creation with consistent toast notifications
2025-11-20 11:35:37 -06:00
7460c9e23e Merge branch 'nightly' into beta 2025-11-20 11:34:34 -06:00
66b02edc84 Fix screenshot directory deletion and update SSL dependencies
Save screenshot_dir to database when scans complete so the directory
is properly cleaned up on scan deletion. Previously the field was never
populated, causing screenshots to remain after deleting scans.

Update sslyze to 6.2.0 and cryptography to 46.0.0 to fix certificate
handling issues with negative serial numbers (RFC 5280 compliance).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 11:33:12 -06:00
f8b89c46c2 Add certificate details modal and fix SSL/TLS data processing
- Add certificate details modal to scan detail page with subject, issuer,
  validity dates, serial number, self-signed indicator, SANs, and TLS
  version support with expandable cipher suites
- Fix bug where certificate data was not being saved to database due to
  incorrect path lookup (was checking http_info['certificate'] instead of
  http_info['ssl_tls']['certificate'])
- Update requirements: add sslyze 6.0.0 and upgrade cryptography to >=42.0.0
  to fix 'No module named cryptography.x509.verification' error
2025-11-20 10:38:02 -06:00
6d5005403c Add screenshot viewing button to scan detail page
Display screenshot button in port table when a service has a captured
screenshot. Button opens screenshot in new tab with correct path
including the screenshot directory.
2025-11-20 10:07:24 -06:00
05f846809e Add help page with user documentation
Create comprehensive help page covering:
- Getting started workflow
- Sites and IP management
- Scan configuration
- Running scans manually
- Scheduling automated scans
- Scan comparisons
- Alerts and alert rules
- Webhook configuration

Add Help link with icon to navigation bar.
2025-11-20 09:59:35 -06:00
7c26824aa1 Add acknowledge all alerts feature
Add POST /api/alerts/acknowledge-all endpoint to bulk acknowledge all
unacknowledged alerts. Add "Ack All" button to alerts page header with
confirmation dialog for quick dismissal of all pending alerts.
2025-11-20 09:35:13 -06:00
91507cc8f8 Add route to serve scan output files
Output files (JSON, HTML, ZIP) are stored outside the static directory,
so download links in scan_detail.html were broken. This adds a /output/
route that serves files from the output directory using send_from_directory
for secure file access. Route requires authentication.
2025-11-20 09:32:28 -06:00
7437716613 Fix scan output file paths and improve notification system
- Save JSON/HTML/ZIP paths to database when scans complete
- Remove orphaned scan-config-id reference causing JS errors
- Add showAlert function to scan_detail.html and scans.html
- Increase notification z-index to 9999 for modal visibility
- Replace inline alert creation with consistent toast notifications
2025-11-20 08:41:02 -06:00
657f4784bf Merge pull request 'Update API documentation for database-based configuration' (#5) from nightly into master
Reviewed-on: #5
2025-11-20 04:07:46 +00:00
73d04cae5e Update API documentation for database-based configuration
- Fix config_id references to use integers instead of file paths
- Update scan delete response format to include scan_id field
- Add missing read_only field to Settings API responses
- Add missing template fields to Webhook responses
- Correct endpoint count from 80+ to 65+

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 22:06:38 -06:00
b8c3e4e2d8 Merge pull request 'beta' (#4) from beta into master
Reviewed-on: #4
2025-11-20 03:47:16 +00:00
aa7c32381c Merge pull request 'nightly' (#3) from nightly into beta
Reviewed-on: #3
2025-11-20 03:46:49 +00:00
0fc51eb032 Improve UI design system and fix notification positioning
- Overhaul CSS with comprehensive design tokens (shadows, transitions, radii)
- Add hover effects and smooth transitions to cards, buttons, tables
- Improve typography hierarchy and color consistency
- Remove inline styles from 10 template files for better maintainability
- Add global notification container to ensure toasts appear above modals
- Update showNotification/showAlert functions to use centralized container
- Add accessibility improvements (focus states, reduced motion support)
- Improve responsive design and mobile styling
- Add print styles
2025-11-19 21:45:36 -06:00
fdf689316f code cleanup, UI change to menu to make it cleaner 2025-11-19 21:27:05 -06:00
41ba4c47b5 refactor to remove config_files in favor of db 2025-11-19 20:29:14 -06:00
b2e6efb4b3 config file remove 2025-11-19 20:01:35 -06:00
e7dd207a62 Fix AlertRule initialization to use config_id instead of config_file
Updated init_db.py to use config_id field after database migration,
fixing container startup error on new systems.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 19:56:28 -06:00
30a29142a0 Fix password not being set when regenerating .env in setup.sh
Remove the database init marker when regenerating .env file so that
the docker entrypoint will re-run password initialization with the
new INITIAL_PASSWORD value on next container start.
2025-11-19 19:53:40 -06:00
0ec338e252 Migrate from file-based configs to database with per-IP site configuration
Major architectural changes:
   - Replace YAML config files with database-stored ScanConfig model
   - Remove CIDR block support in favor of individual IP addresses per site
   - Each IP now has its own expected_ping, expected_tcp_ports, expected_udp_ports
   - AlertRule now uses config_id FK instead of config_file string

   API changes:
   - POST /api/scans now requires config_id instead of config_file
   - Alert rules API uses config_id with validation
   - All config dropdowns fetch from /api/configs dynamically

   Template updates:
   - scans.html, dashboard.html, alert_rules.html load configs via API
   - Display format: Config Title (X sites) in dropdowns
   - Removed Jinja2 config_files loops

   Migrations:
   - 008: Expand CIDRs to individual IPs with per-IP port configs
   - 009: Remove CIDR-related columns
   - 010: Add config_id to alert_rules, remove config_file
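A sketch of the per-IP expectations this migration moves to; the column names and types are assumptions based on the commit text:

```python
from sqlalchemy import Boolean, Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class SiteIP(Base):
    """One row per IP address, carrying its own expectations (hypothetical)."""
    __tablename__ = 'site_ips'
    id = Column(Integer, primary_key=True)
    site_id = Column(Integer, ForeignKey('sites.id'), nullable=False)
    address = Column(String(45), nullable=False)     # IPv4/IPv6 literal
    expected_ping = Column(Boolean, default=True)
    expected_tcp_ports = Column(String, default='')  # e.g. '22,80,443'
    expected_udp_ports = Column(String, default='')
```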
2025-11-19 19:40:34 -06:00
034f146fa1 stage 1 of new CIDRs/site setup 2025-11-19 13:39:27 -06:00
4a4c33a10b doc changes 2025-11-19 10:42:49 -06:00
21254c3522 added webhooks and templates to alerting, email is next 2025-11-18 19:26:12 -06:00
230094d7b2 webhook templates 2025-11-18 15:29:23 -06:00
28b32a2049 added webhooks, moved app name and version to simple config file 2025-11-18 15:05:57 -06:00
1d076a467a added webhooks, moved app name and version to simple config file 2025-11-18 15:05:39 -06:00
3c740268c4 updated API docs 2025-11-18 13:23:06 -06:00
131e1f5a61 adding Phase 5 init framework and deployment-ease scripts 2025-11-18 13:10:53 -06:00
b2a3fc7832 license 2025-11-17 16:32:02 -06:00
cd840cb8ca restructure of dirs, huge docs update 2025-11-17 16:29:14 -06:00
456e052389 updating docs 2025-11-17 15:50:15 -06:00
72c4f3d29b hot fixes for several UI and logic issues 2025-11-17 15:41:51 -06:00
5f2314a532 phase 4 complete 2025-11-17 14:54:31 -06:00
5301b07f37 Merge pull request 'phase3' (#2) from phase3 into master
Reviewed-on: #2
2025-11-17 18:06:56 +00:00
6fe24c3907 adding Phase4 2025-11-17 12:05:11 -06:00
489284bde1 updating Phase3.md 2025-11-14 16:31:35 -06:00
143 changed files with 24273 additions and 13913 deletions

File diff suppressed because one or more lines are too long

.gitignore (vendored): 8 lines changed

@@ -9,6 +9,11 @@ output/
data/
logs/
# Environment and secrets
.env
admin_password.txt
logs/admin_password.txt
# Python
__pycache__/
*.py[cod]
@@ -37,3 +42,6 @@ Thumbs.db
# Docker
.dockerignore
#mounted dirs
configs/

Dockerfile

@@ -23,8 +23,8 @@ RUN git clone https://github.com/robertdavidgraham/masscan /tmp/masscan && \
WORKDIR /app
# Copy requirements and install Python dependencies
COPY requirements.txt .
COPY requirements-web.txt .
COPY app/requirements.txt .
COPY app/requirements-web.txt .
RUN pip install --no-cache-dir -r requirements.txt && \
pip install --no-cache-dir -r requirements-web.txt
@@ -33,18 +33,19 @@ RUN pip install --no-cache-dir -r requirements.txt && \
RUN playwright install chromium
# Copy application code
COPY src/ ./src/
COPY templates/ ./templates/
COPY web/ ./web/
COPY migrations/ ./migrations/
COPY alembic.ini .
COPY init_db.py .
COPY app/src/ ./src/
COPY app/templates/ ./templates/
COPY app/web/ ./web/
COPY app/migrations/ ./migrations/
COPY app/alembic.ini .
COPY app/init_db.py .
COPY app/docker-entrypoint.sh /docker-entrypoint.sh
# Create required directories
RUN mkdir -p /app/output /app/logs
# Make scripts executable
RUN chmod +x /app/src/scanner.py /app/init_db.py
RUN chmod +x /app/src/scanner.py /app/init_db.py /docker-entrypoint.sh
# Force Python unbuffered output
ENV PYTHONUNBUFFERED=1

LICENSE (new file): 7 lines

@@ -0,0 +1,7 @@
Copyright 2025 Phillip Tarrant
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

README.md: 884 lines changed

@@ -1,793 +1,180 @@
# SneakyScanner
A comprehensive network scanning and infrastructure monitoring platform with both CLI and web interfaces. SneakyScanner uses masscan for fast port discovery, nmap for service detection, sslyze for SSL/TLS analysis, and Playwright for webpage screenshots to perform comprehensive infrastructure audits.
A comprehensive network scanning and infrastructure monitoring platform with web interface and CLI scanner. SneakyScanner uses masscan for fast port discovery, nmap for service detection, sslyze for SSL/TLS analysis, and Playwright for webpage screenshots to perform comprehensive infrastructure audits.
**Features:**
- 🔍 **CLI Scanner** - Standalone scanning tool with YAML-based configuration
- 🌐 **Web Application** - Flask-based web UI with REST API for scan management
- 📊 **Database Storage** - SQLite database for scan history and trend analysis
- ⏱️ **Background Jobs** - Asynchronous scan execution with APScheduler
- 🔐 **Authentication** - Secure session-based authentication system
- 📈 **Historical Data** - Track infrastructure changes over time
**Primary Interface**: Web Application (Flask-based GUI)
**Scripting/Automation**: REST API (see [API Reference](docs/API_REFERENCE.md))
## Table of Contents
---
1. [Quick Start](#quick-start)
- [Web Application (Recommended)](#web-application-recommended)
- [CLI Scanner (Standalone)](#cli-scanner-standalone)
2. [Features](#features)
3. [Web Application](#web-application)
4. [CLI Scanner](#cli-scanner)
5. [Configuration](#configuration)
6. [Output Formats](#output-formats)
7. [API Documentation](#api-documentation)
8. [Deployment](#deployment)
9. [Development](#development)
## Key Features
- 🌐 **Web Dashboard** - Modern web UI for scan management, scheduling, and historical analysis
- 📊 **Database Storage** - SQLite-based scan history with trend analysis and comparison
- ⚡ **Scheduled Scans** - Cron-based automated scanning with APScheduler
- 🔧 **Config Creator** - Web-based target configuration builder for quick setup
- 🔍 **Network Discovery** - Fast port scanning with masscan (all 65535 ports, TCP/UDP)
- 🎯 **Service Detection** - Nmap-based service enumeration with version detection
- 🔒 **SSL/TLS Analysis** - Certificate extraction, TLS version testing, cipher suite analysis
- 📸 **Screenshot Capture** - Automated webpage screenshots for all discovered web services
- 📈 **Drift Detection** - Expected vs. actual infrastructure comparison
- 📋 **Multi-Format Reports** - JSON, HTML, and ZIP archives with visual reports
- 🔐 **Authentication** - Session-based login for single-user deployments
- 🔔 **Webhook Alerts** - Real-time notifications via Slack, Discord, PagerDuty, and custom integrations
- ⚠️ **Alert Rules** - Automated detection of infrastructure misconfigurations and anomalies
---
## Quick Start
### Web Application (Recommended)
### Web Application
The web application provides a complete interface for managing scans, viewing history, and analyzing results.
**Easy Setup (One Command):**
1. **Configure environment:**
```bash
# Copy example environment file
# 1. Clone repository
git clone <repository-url>
cd SneakyScan
# 2. Run setup script
./setup.sh
# 3. Access web interface at http://localhost:5000
```
The setup script will:
- Generate secure keys automatically
- Create required directories
- Build and start the Docker containers
- Initialize the database on first run
- Display your login credentials
**Manual Setup (Alternative):**
```bash
# 1. Clone repository
git clone <repository-url>
cd SneakyScan
# 2. Configure environment
cp .env.example .env
# Edit .env and set SECRET_KEY, SNEAKYSCANNER_ENCRYPTION_KEY, and INITIAL_PASSWORD
# Generate secure keys (Linux/Mac)
export SECRET_KEY=$(python3 -c 'import secrets; print(secrets.token_hex(32))')
export ENCRYPTION_KEY=$(python3 -c 'import secrets; print(secrets.token_urlsafe(32))')
# 3. Build and start (database auto-initializes on first run)
docker compose up --build -d
# Update .env file with generated keys
sed -i "s/your-secret-key-here/$SECRET_KEY/" .env
sed -i "s/your-encryption-key-here/$ENCRYPTION_KEY/" .env
# 4. Access web interface
# Open http://localhost:5000
```
2. **Start the web application:**
```bash
docker-compose -f docker-compose-web.yml up -d
```
3. **Access the web interface:**
- Open http://localhost:5000 in your browser
- Default password: `admin` (change immediately after first login)
4. **Trigger your first scan:**
- Click "Run Scan Now" on the dashboard
- Or use the API:
```bash
curl -X POST http://localhost:5000/api/scans \
-H "Content-Type: application/json" \
-d '{"config_file":"/app/configs/example-site.yaml"}' \
-b cookies.txt
```
See [Deployment Guide](docs/ai/DEPLOYMENT.md) for detailed setup instructions.
### CLI Scanner (Standalone)
For quick one-off scans or scripting, use the standalone CLI scanner:
```bash
# Build the image
docker-compose build
# Run a scan
docker-compose up
# Or run directly
docker run --rm --privileged --network host \
-v $(pwd)/configs:/app/configs:ro \
-v $(pwd)/output:/app/output \
sneakyscanner /app/configs/example-site.yaml
```
Results are saved to the `output/` directory as JSON, HTML, and ZIP files.
**See [Deployment Guide](docs/DEPLOYMENT.md) for detailed setup instructions.**
---
## Features
## Documentation
### Web Application (Phase 2)
### User Guides
- **[Deployment Guide](docs/DEPLOYMENT.md)** - Installation, configuration, and production deployment
- **[API Reference](docs/API_REFERENCE.md)** - Complete REST API documentation for scripting and automation
- **Dashboard** - View scan history, statistics, and recent activity
- **REST API** - Programmatic access to all scan management functions
- **Background Jobs** - Scans execute asynchronously without blocking
- **Database Storage** - Complete scan history with queryable data
- **Authentication** - Secure session-based login system
- **Pagination** - Efficiently browse large scan datasets
- **Status Tracking** - Real-time scan progress monitoring
- **Error Handling** - Comprehensive error logging and reporting
### Network Discovery & Port Scanning
- **YAML-based configuration** for defining scan targets and expectations
- **Comprehensive scanning using masscan**:
- Ping/ICMP echo detection (masscan --ping)
- TCP port scanning (all 65535 ports at 10,000 pps)
- UDP port scanning (all 65535 ports at 10,000 pps)
- Fast network-wide discovery in seconds
### Service Detection & Enumeration
- **Service detection using nmap**:
- Identifies services running on discovered TCP ports
- Extracts product names and versions (e.g., "OpenSSH 8.2p1", "nginx 1.18.0")
- Provides detailed service information including extra attributes
- Balanced intensity level (5) for accuracy and speed
### Security Assessment
- **HTTP/HTTPS analysis and SSL/TLS security assessment**:
- Detects HTTP vs HTTPS on web services
- Extracts SSL certificate details (subject, issuer, expiration, SANs)
- Calculates days until certificate expiration for monitoring
- Tests TLS version support (TLS 1.0, 1.1, 1.2, 1.3)
- Lists all accepted cipher suites for each supported TLS version
- Identifies weak cryptographic configurations
### Visual Documentation
- **Webpage screenshot capture** (NEW):
- Automatically captures screenshots of all discovered web services (HTTP/HTTPS)
- Uses Playwright with headless Chromium browser
- Viewport screenshots (1280x720) for consistent sizing
- 15-second timeout per page with graceful error handling
- Handles self-signed certificates without errors
- Saves screenshots as PNG files with references in JSON reports
- Screenshots organized in timestamped directories
- Browser reuse for optimal performance
### Reporting & Output
- **Automatic multi-format output** after each scan:
- Machine-readable JSON reports for post-processing
- Human-readable HTML reports with dark theme
- ZIP archives containing all outputs for easy sharing
- **HTML report features**:
- Comprehensive reports with dark theme for easy reading
- Summary dashboard with scan statistics, drift alerts, and security warnings
- Site-by-site breakdown with expandable service details
- Visual badges for expected vs. unexpected services
- SSL/TLS certificate details with expiration warnings
- Automatically generated after every scan
- **Dockerized** for consistent execution environment and root privilege isolation
- **Expected vs. Actual comparison** to identify infrastructure drift
- Timestamped reports with complete scan duration metrics
### Developer Resources
- **[Roadmap](docs/ROADMAP.md)** - Project roadmap, architecture, and planned features
---
## Web Application
## Current Status
### Overview
**Latest Version**: Phase 5 Complete ✅
**Last Updated**: 2025-11-19
The SneakyScanner web application provides a Flask-based interface for managing network scans. All scans are stored in a SQLite database, enabling historical analysis and trending.
### Completed Phases
### Key Features
- ✅ **Phase 1**: Database schema, SQLAlchemy models, settings system
- ✅ **Phase 2**: REST API, background jobs, authentication, web UI
- ✅ **Phase 3**: Dashboard, scheduling, trend charts
- ✅ **Phase 4**: Config creator, target editor, config management UI
- ✅ **Phase 5**: Webhooks & alerting, notification templates, alert rules
**Scan Management:**
- Trigger scans via web UI or REST API
- View complete scan history with pagination
- Monitor real-time scan status
- Delete scans and associated files
### Next Up: Phase 6 - CLI as API Client
**REST API:**
- Full CRUD operations for scans
- Session-based authentication
- JSON responses for all endpoints
- Comprehensive error handling
**Goal**: Create a thin CLI client that calls the Flask API for scan operations, enabling scripting and automation workflows while leveraging centralized database storage and web dashboard features.
**Background Processing:**
- APScheduler for async scan execution
- Up to 3 concurrent scans (configurable)
- Status tracking: `running` → `completed`/`failed`
- Error capture and logging
**Planned Features**:
- API token authentication for CLI access
- Remote scan triggering and status polling
- Centralized scan history accessible via web dashboard
- Scriptable automation workflows
**Database Schema:**
- 11 normalized tables for scan data
- Relationships: Scans → Sites → IPs → Ports → Services → Certificates → TLS Versions
- Efficient queries with indexes
- SQLite WAL mode for better concurrency
### Web UI Routes
| Route | Description |
|-------|-------------|
| `/` | Redirects to dashboard |
| `/login` | Login page |
| `/logout` | Logout and destroy session |
| `/dashboard` | Main dashboard with stats and recent scans |
| `/scans` | Browse scan history (paginated) |
| `/scans/<id>` | View detailed scan results |
### API Endpoints
See [API_REFERENCE.md](docs/ai/API_REFERENCE.md) for complete API documentation.
**Core Endpoints:**
- `POST /api/scans` - Trigger new scan
- `GET /api/scans` - List scans (paginated, filterable)
- `GET /api/scans/{id}` - Get scan details
- `GET /api/scans/{id}/status` - Poll scan status
- `DELETE /api/scans/{id}` - Delete scan and files
**Settings Endpoints:**
- `GET /api/settings` - Get all settings
- `PUT /api/settings/{key}` - Update setting
- `GET /api/settings/health` - Health check
### Authentication
**Login:**
```bash
curl -X POST http://localhost:5000/auth/login \
-H "Content-Type: application/json" \
-d '{"password":"yourpassword"}' \
-c cookies.txt
```
**Use session for API calls:**
```bash
curl -X GET http://localhost:5000/api/scans \
-b cookies.txt
```
**Change password:**
1. Login to web UI
2. Navigate to Settings
3. Update app password
4. Or use CLI: `python3 web/utils/change_password.py`
See [Roadmap](docs/ROADMAP.md) for complete feature timeline and future phases.
---
## CLI Scanner
## Architecture
### Requirements
- Docker
- Docker Compose (optional, for easier usage)
### Using Docker Compose
1. Create or modify a configuration file in `configs/`:
```yaml
title: "My Infrastructure Scan"
sites:
- name: "Web Servers"
ips:
- address: "192.168.1.10"
expected:
ping: true
tcp_ports: [22, 80, 443]
udp_ports: []
```
┌─────────────────────────────────────────────────────────────┐
│ Flask Web Application │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐ │
│ │ Web UI │ │ REST API │ │ Scheduler │ │
│ │ (Dashboard) │ │ (JSON/CRUD) │ │ (APScheduler) │ │
│ └──────┬───────┘ └──────┬───────┘ └────────┬─────────┘ │
│ │ │ │ │
│ └─────────────────┴────────────────────┘ │
│ │ │
│ ┌────────▼────────┐ │
│ │ SQLAlchemy │ │
│ │ (ORM Layer) │ │
│ └────────┬────────┘ │
│ │
┌────────▼────────┐ │
│ SQLite3 DB │ │
│ (scan history) │ │
└─────────────────┘ │
└───────────────────────────┬─────────────────────────────────┘
┌──────────▼──────────┐
│ Scanner Engine │
│ (scanner.py) │
│ ┌────────────────┐ │
│ │ Masscan/Nmap │ │
│ │ Playwright │ │
│ │ sslyze │ │
│ └────────────────┘ │
└─────────────────────┘
```
2. Build and run:
```bash
docker-compose build
docker-compose up
```
3. Check results in the `output/` directory:
- `scan_report_YYYYMMDD_HHMMSS.json` - JSON report
- `scan_report_YYYYMMDD_HHMMSS.html` - HTML report
- `scan_report_YYYYMMDD_HHMMSS.zip` - ZIP archive
- `scan_report_YYYYMMDD_HHMMSS_screenshots/` - Screenshots directory
## Scan Performance
SneakyScanner uses a five-phase approach for comprehensive scanning:
1. **Ping Scan** (masscan): ICMP echo detection - ~1-2 seconds
2. **TCP Port Discovery** (masscan): Scans all 65535 TCP ports at 10,000 packets/second - ~13 seconds per 2 IPs
3. **UDP Port Discovery** (masscan): Scans all 65535 UDP ports at 10,000 packets/second - ~13 seconds per 2 IPs
4. **Service Detection** (nmap): Identifies services on discovered TCP ports - ~20-60 seconds per IP with open ports
5. **HTTP/HTTPS Analysis** (Playwright, SSL/TLS): Detects web protocols, captures screenshots, and analyzes certificates - ~10-20 seconds per web service
**Example**: Scanning 2 IPs with 10 open ports each (including 2-3 web services) typically takes 2-3 minutes total.
### Using Docker Directly
1. Build the image:
```bash
docker build -t sneakyscanner .
```
2. Run a scan:
```bash
docker run --rm --privileged --network host \
-v $(pwd)/configs:/app/configs:ro \
-v $(pwd)/output:/app/output \
sneakyscanner /app/configs/your-config.yaml
```
**Technology Stack**:
- **Backend**: Flask 3.x, SQLAlchemy 2.x, SQLite3, APScheduler 3.x
- **Frontend**: Jinja2, Bootstrap 5, Chart.js, Vanilla JavaScript
- **Scanner**: Masscan, Nmap, Playwright (Chromium), sslyze
- **Deployment**: Docker Compose, Gunicorn
---
## Configuration
The YAML configuration file defines the scan parameters:
```yaml
title: "Scan Title" # Required: Report title
sites: # Required: List of sites to scan
- name: "Site Name"
ips:
- address: "192.168.1.10"
expected:
ping: true # Expected ping response
tcp_ports: [22, 80] # Expected TCP ports
udp_ports: [53] # Expected UDP ports
```
See `configs/example-site.yaml` for a complete example.
---
## Output Formats
After each scan completes, SneakyScanner automatically generates three output formats:
1. **JSON Report** (`scan_report_YYYYMMDD_HHMMSS.json`): Machine-readable scan data with all discovered services, ports, and SSL/TLS information
2. **HTML Report** (`scan_report_YYYYMMDD_HHMMSS.html`): Human-readable report with dark theme, summary dashboard, and detailed service breakdown
3. **ZIP Archive** (`scan_report_YYYYMMDD_HHMMSS.zip`): Contains JSON report, HTML report, and all screenshots for easy sharing and archival
All files share the same timestamp for easy correlation. Screenshots are saved in a subdirectory (`scan_report_YYYYMMDD_HHMMSS_screenshots/`) and included in the ZIP archive. The report includes the total scan duration (in seconds) covering all phases: ping scan, TCP/UDP port discovery, service detection, screenshot capture, and report generation.
```json
{
"title": "Sneaky Infra Scan",
"scan_time": "2024-01-15T10:30:00Z",
"scan_duration": 95.3,
"config_file": "/app/configs/example-site.yaml",
"sites": [
{
"name": "Production Web Servers",
"ips": [
{
"address": "192.168.1.10",
"expected": {
"ping": true,
"tcp_ports": [22, 80, 443],
"udp_ports": [53]
},
"actual": {
"ping": true,
"tcp_ports": [22, 80, 443, 3000],
"udp_ports": [53],
"services": [
{
"port": 22,
"protocol": "tcp",
"service": "ssh",
"product": "OpenSSH",
"version": "8.2p1"
},
{
"port": 80,
"protocol": "tcp",
"service": "http",
"product": "nginx",
"version": "1.18.0",
"http_info": {
"protocol": "http",
"screenshot": "scan_report_20250115_103000_screenshots/192_168_1_10_80.png"
}
},
{
"port": 443,
"protocol": "tcp",
"service": "https",
"product": "nginx",
"http_info": {
"protocol": "https",
"screenshot": "scan_report_20250115_103000_screenshots/192_168_1_10_443.png",
"ssl_tls": {
"certificate": {
"subject": "CN=example.com",
"issuer": "CN=Let's Encrypt Authority X3,O=Let's Encrypt,C=US",
"serial_number": "123456789012345678901234567890",
"not_valid_before": "2025-01-01T00:00:00+00:00",
"not_valid_after": "2025-04-01T23:59:59+00:00",
"days_until_expiry": 89,
"sans": ["example.com", "www.example.com"]
},
"tls_versions": {
"TLS 1.0": {
"supported": false,
"cipher_suites": []
},
"TLS 1.1": {
"supported": false,
"cipher_suites": []
},
"TLS 1.2": {
"supported": true,
"cipher_suites": [
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
]
},
"TLS 1.3": {
"supported": true,
"cipher_suites": [
"TLS_AES_256_GCM_SHA384",
"TLS_AES_128_GCM_SHA256"
]
}
}
}
}
},
{
"port": 3000,
"protocol": "tcp",
"service": "http",
"product": "Node.js",
"http_info": {
"protocol": "http"
}
}
]
}
}
]
}
]
}
```
## Screenshot Capture Details
SneakyScanner automatically captures webpage screenshots for all discovered HTTP and HTTPS services, providing visual documentation of your infrastructure.
### How It Works
1. **Automatic Detection**: During the HTTP/HTTPS analysis phase, SneakyScanner identifies web services based on:
- Nmap service detection results (http, https, ssl, http-proxy)
- Common web ports (80, 443, 8000, 8006, 8080, 8081, 8443, 8888, 9443)
2. **Screenshot Capture**: For each web service:
- Launches headless Chromium browser (once per scan, reused for all screenshots)
- Navigates to the service URL (HTTP or HTTPS)
- Waits for network to be idle (up to 15 seconds)
- Captures viewport screenshot (1280x720 pixels)
- Handles SSL certificate errors gracefully (e.g., self-signed certificates)
3. **Storage**: Screenshots are saved as PNG files:
- Directory: `output/scan_report_YYYYMMDD_HHMMSS_screenshots/`
- Filename format: `{ip}_{port}.png` (e.g., `192_168_1_10_443.png`)
- Referenced in JSON report under `http_info.screenshot`
### Screenshot Configuration
Default settings (configured in `src/screenshot_capture.py`):
- **Viewport size**: 1280x720 (captures visible area only, not full page)
- **Timeout**: 15 seconds per page load
- **Browser**: Chromium (headless mode)
- **SSL handling**: Ignores HTTPS errors (works with self-signed certificates)
- **User agent**: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
### Error Handling
Screenshots are captured on a best-effort basis:
- If a screenshot fails (timeout, connection error, etc.), the scan continues
- Failed screenshots are logged but don't stop the scan
- Services without screenshots simply omit the `screenshot` field in JSON output
## HTML Report Generation
SneakyScanner automatically generates comprehensive HTML reports after each scan, providing an easy-to-read visual interface for analyzing scan results.
### Automatic Generation
HTML reports are automatically created after every scan completes, along with JSON reports and ZIP archives. All three outputs share the same timestamp and are saved to the `output/` directory.
### Manual Generation (Optional)
You can also manually generate HTML reports from existing JSON scan data:
```bash
# Generate HTML report (creates report in same directory as JSON)
python3 src/report_generator.py output/scan_report_20251113_175235.json
# Specify custom output path
python3 src/report_generator.py output/scan_report.json /path/to/custom_report.html
```
### Report Features
The generated HTML report includes:
**Summary Dashboard**:
- **Scan Statistics**: Total IPs scanned, TCP/UDP ports found, services identified, web services, screenshots captured
- **Drift Alerts**: Unexpected TCP/UDP ports, missing expected services, new services detected
- **Security Warnings**: Expiring certificates (<30 days), weak TLS versions (1.0/1.1), self-signed certificates, high port services (>10000)
**Site-by-Site Breakdown**:
- Organized by logical site grouping from configuration
- Per-IP sections with status badges (ping, port drift summary)
- Service tables with expandable details (click any row to expand)
- Visual badges: green (expected), red (unexpected), yellow (missing/warning)
**Service Details** (click to expand):
- Product name, version, extra information, OS type
- HTTP/HTTPS protocol detection
- Screenshot links for web services
- SSL/TLS certificate details (expandable):
- Subject, issuer, validity dates, serial number
- Days until expiration with color-coded warnings
- Subject Alternative Names (SANs)
- TLS version support (1.0, 1.1, 1.2, 1.3) with cipher suites
- Weak TLS and self-signed certificate warnings
**UDP Port Handling**:
- Expected UDP ports shown with green "Expected" badge
- Unexpected UDP ports shown with red "Unexpected" badge
- Missing expected UDP ports shown with yellow "Missing" badge
- Note: Service detection not available for UDP (nmap limitation)
**Design**:
- Dark theme with slate/grey color scheme for comfortable reading
- Responsive layout works on different screen sizes
- No external dependencies - single HTML file
- Minimal JavaScript for expand/collapse functionality
- Optimized hover effects for table rows
### Report Output
The HTML report is a standalone file that can be:
- Opened directly in any web browser (Chrome, Firefox, Safari, Edge)
- Shared via email or file transfer
- Archived for compliance or historical comparison
- Viewed without an internet connection or web server
Screenshot links in the report are relative paths, so keep the report and screenshot directory together.
---
## API Documentation
Complete API reference available at [docs/ai/API_REFERENCE.md](docs/ai/API_REFERENCE.md).
**Quick Reference:**
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/scans` | POST | Trigger new scan |
| `/api/scans` | GET | List all scans (paginated) |
| `/api/scans/{id}` | GET | Get scan details |
| `/api/scans/{id}/status` | GET | Get scan status |
| `/api/scans/{id}` | DELETE | Delete scan |
| `/api/settings` | GET | Get all settings |
| `/api/settings/{key}` | PUT | Update setting |
| `/api/settings/health` | GET | Health check |
**Authentication:** All endpoints (except `/api/settings/health`) require session authentication via `/auth/login`.
---
## Deployment
### Production Deployment
See [DEPLOYMENT.md](docs/ai/DEPLOYMENT.md) for comprehensive deployment guide.
**Quick Steps:**
1. **Configure environment variables:**
```bash
cp .env.example .env
# Edit .env and set secure keys
```
2. **Initialize database:**
```bash
docker-compose -f docker-compose-web.yml run --rm web python3 init_db.py
```
3. **Start services:**
```bash
docker-compose -f docker-compose-web.yml up -d
```
4. **Verify health:**
```bash
curl http://localhost:5000/api/settings/health
```
### Docker Volumes
The web application uses persistent volumes:
| Volume | Path | Description |
|--------|------|-------------|
| `data` | `/app/data` | SQLite database |
| `output` | `/app/output` | Scan results (JSON, HTML, ZIP, screenshots) |
| `logs` | `/app/logs` | Application logs |
| `configs` | `/app/configs` | YAML scan configurations |
**Backup:**
```bash
# Backup database
docker cp sneakyscanner_web:/app/data/sneakyscanner.db ./backup/
# Backup all scan results
docker cp sneakyscanner_web:/app/output ./backup/
# Or use docker-compose volumes
docker run --rm -v sneakyscanner_data:/data -v $(pwd)/backup:/backup alpine tar czf /backup/data.tar.gz /data
```
### Environment Variables
See `.env.example` for complete configuration options:
**Flask Configuration:**
- `FLASK_ENV` - Environment mode (production/development)
- `FLASK_DEBUG` - Debug mode (true/false)
- `SECRET_KEY` - Flask secret key for sessions (generate with `secrets.token_hex(32)`)
**Database:**
- `DATABASE_URL` - Database connection string (default: SQLite)
**Security:**
- `SNEAKYSCANNER_ENCRYPTION_KEY` - Encryption key for sensitive settings (generate with `secrets.token_urlsafe(32)`)
**Scheduler:**
- `SCHEDULER_EXECUTORS` - Number of concurrent scan workers (default: 2)
- `SCHEDULER_JOB_DEFAULTS_MAX_INSTANCES` - Max concurrent jobs (default: 3)
---
## Development
### Project Structure
```
SneakyScanner/
├── src/ # Scanner engine (CLI)
│ ├── scanner.py # Main scanner application
│ ├── screenshot_capture.py # Webpage screenshot capture
│ └── report_generator.py # HTML report generation
├── web/ # Web application (Flask)
│ ├── app.py # Flask app factory
│ ├── models.py # SQLAlchemy models (11 tables)
│ ├── api/ # API blueprints
│ │ ├── scans.py # Scan management endpoints
│ │ ├── settings.py # Settings endpoints
│ │ └── ...
│ ├── auth/ # Authentication
│ │ ├── routes.py # Login/logout routes
│ │ ├── decorators.py # Auth decorators
│ │ └── models.py # User model
│ ├── routes/ # Web UI routes
│ │ └── main.py # Dashboard, scans pages
│ ├── services/ # Business logic
│ │ ├── scan_service.py # Scan CRUD operations
│ │ └── scheduler_service.py # APScheduler integration
│ ├── jobs/ # Background jobs
│ │ └── scan_job.py # Async scan execution
│ ├── utils/ # Utilities
│ │ ├── settings.py # Settings manager
│ │ ├── pagination.py # Pagination helper
│ │ └── validators.py # Input validation
│ ├── templates/ # Jinja2 templates
│ │ ├── base.html # Base layout
│ │ ├── login.html # Login page
│ │ ├── dashboard.html # Dashboard
│ │ └── errors/ # Error templates
│ └── static/ # Static assets
│ ├── css/
│ ├── js/
│ └── images/
├── templates/ # Report templates (CLI)
│ └── report_template.html # HTML report template
├── tests/ # Test suite
│ ├── conftest.py # Pytest fixtures
│ ├── test_scan_service.py # Service tests
│ ├── test_scan_api.py # API tests
│ ├── test_authentication.py # Auth tests
│ ├── test_background_jobs.py # Scheduler tests
│ └── test_error_handling.py # Error handling tests
├── migrations/ # Alembic database migrations
│ └── versions/
│ ├── 001_initial_schema.py
│ ├── 002_add_scan_indexes.py
│ └── 003_add_scan_timing_fields.py
├── configs/ # Scan configurations
│ └── example-site.yaml
├── output/ # Scan results
├── docs/ # Documentation
│ ├── ai/ # Development docs
│ │ ├── API_REFERENCE.md
│ │ ├── DEPLOYMENT.md
│ │ ├── PHASE2.md
│ │ ├── PHASE2_COMPLETE.md
│ │ └── ROADMAP.md
│ └── human/
├── Dockerfile # Scanner + web app image
├── docker-compose.yml # CLI scanner compose
├── docker-compose-web.yml # Web app compose
├── requirements.txt # Scanner dependencies
├── requirements-web.txt # Web app dependencies
├── alembic.ini # Alembic configuration
├── init_db.py # Database initialization
├── .env.example # Environment template
├── CLAUDE.md # Developer guide
└── README.md # This file
```
### Running Tests
**In Docker:**
```bash
docker-compose -f docker-compose-web.yml run --rm web pytest tests/ -v
```
**Locally (requires Python 3.12+):**
```bash
pip install -r requirements-web.txt
pytest tests/ -v
# With coverage
pytest tests/ --cov=web --cov-report=html
```
**Test Coverage:**
- 100 test functions across 6 test files
- 1,825 lines of test code
- Coverage: Service layer, API endpoints, authentication, error handling, background jobs
### Database Migrations
**Create new migration:**
```bash
docker-compose -f docker-compose-web.yml run --rm web alembic revision --autogenerate -m "Description"
```
**Apply migrations:**
```bash
docker-compose -f docker-compose-web.yml run --rm web alembic upgrade head
```
**Rollback:**
```bash
docker-compose -f docker-compose-web.yml run --rm web alembic downgrade -1
```
## Security Notice
This tool requires:
- `--privileged` flag or `CAP_NET_RAW` capability for masscan and nmap raw socket access
⚠️ **Important**: This tool requires:
- `--privileged` flag or `CAP_NET_RAW` capability for raw socket access (masscan/nmap)
- `--network host` for direct network access
Only use this tool on networks you own or have explicit authorization to scan. Unauthorized network scanning may be illegal in your jurisdiction.
**Only use this tool on networks you own or have explicit authorization to scan.** Unauthorized network scanning may be illegal in your jurisdiction.
---
### Security Best Practices
## Roadmap
1. Run on dedicated scan server (not production systems)
2. Restrict network access with firewall rules
3. Use strong passwords and encryption keys
4. Enable HTTPS in production (reverse proxy recommended)
5. Regularly update Docker images and dependencies
**Current Phase:** Phase 2 Complete ✅
**Completed Phases:**
- ✅ **Phase 1** - Database foundation, Flask app structure, settings system
- ✅ **Phase 2** - REST API, background jobs, authentication, basic UI
**Upcoming Phases:**
- 📋 **Phase 3** - Enhanced dashboard, trend charts, scheduled scans (Weeks 5-6)
- 📋 **Phase 4** - Email notifications, scan comparison, alert rules (Weeks 7-8)
- 📋 **Phase 5** - CLI as API client, token authentication (Week 9)
- 📋 **Phase 6** - Advanced features (vulnerability detection, PDF export, timeline view)
See [ROADMAP.md](docs/ai/ROADMAP.md) for detailed feature planning.
See [Deployment Guide](docs/DEPLOYMENT.md) for production security checklist.
---
## Contributing
This is a personal/small team project. For bugs or feature requests:
This is a personal project. For bugs or feature requests:
1. Check existing issues
2. Create detailed bug reports with reproduction steps
3. Submit pull requests with tests
@@ -800,27 +187,16 @@ MIT License - See LICENSE file for details
---
## Security Notice
This tool requires:
- `--privileged` flag or `CAP_NET_RAW` capability for masscan and nmap raw socket access
- `--network host` for direct network access
**⚠️ Important:** Only use this tool on networks you own or have explicit authorization to scan. Unauthorized network scanning may be illegal in your jurisdiction.
---
## Support
**Documentation:**
- [API Reference](docs/ai/API_REFERENCE.md)
- [Deployment Guide](docs/ai/DEPLOYMENT.md)
- [Developer Guide](CLAUDE.md)
- [Roadmap](docs/ai/ROADMAP.md)
**Documentation**:
- [Deployment Guide](docs/DEPLOYMENT.md)
- [API Reference](docs/API_REFERENCE.md)
- [Roadmap](docs/ROADMAP.md)
**Issues:** https://github.com/anthropics/sneakyscanner/issues
**Issues**: email me ptarrant at gmail dot com
---
**Version:** 2.0 (Phase 2 Complete)
**Last Updated:** 2025-11-14
**Version**: 1.0.0-beta
**Last Updated**: 2025-11-19

app/docker-entrypoint.sh (new file): 80 lines

@@ -0,0 +1,80 @@
#!/bin/bash
set -e
# SneakyScanner Docker Entrypoint Script
# This script ensures the database is initialized before starting the Flask app
DB_PATH="${DATABASE_URL#sqlite:///}" # Extract path from sqlite:////app/data/sneakyscanner.db
DB_DIR=$(dirname "$DB_PATH")
INIT_MARKER="$DB_DIR/.db_initialized"
PASSWORD_FILE="/app/logs/admin_password.txt" # Save to logs dir (mounted, no permission issues)
echo "=== SneakyScanner Startup ==="
echo "Database path: $DB_PATH"
echo "Database directory: $DB_DIR"
# Ensure database directory exists
mkdir -p "$DB_DIR"
# Check if this is the first run (database doesn't exist or not initialized)
if [ ! -f "$DB_PATH" ] || [ ! -f "$INIT_MARKER" ]; then
echo ""
echo "=== First Run Detected ==="
echo "Initializing database..."
# Set default password from environment or generate a random one
if [ -z "$INITIAL_PASSWORD" ]; then
echo "INITIAL_PASSWORD not set, generating random password..."
# Generate a 32-character alphanumeric password
INITIAL_PASSWORD=$(cat /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 32)
# Ensure logs directory exists
mkdir -p /app/logs
echo "$INITIAL_PASSWORD" > "$PASSWORD_FILE"
echo "✓ Random password generated and saved to: ./logs/admin_password.txt"
SAVE_PASSWORD_MESSAGE=true
fi
# Run database initialization
python3 /app/init_db.py \
--db-url "$DATABASE_URL" \
--password "$INITIAL_PASSWORD" \
--no-migrations \
--force
# Create marker file to indicate successful initialization
if [ $? -eq 0 ]; then
touch "$INIT_MARKER"
echo "✓ Database initialized successfully"
echo ""
echo "=== IMPORTANT ==="
if [ "$SAVE_PASSWORD_MESSAGE" = "true" ]; then
echo "Login password saved to: ./logs/admin_password.txt"
echo "Password: $INITIAL_PASSWORD"
else
echo "Login password: $INITIAL_PASSWORD"
fi
echo "Please change this password after logging in!"
echo "=================="
echo ""
else
echo "✗ Database initialization failed!"
exit 1
fi
else
echo "Database already initialized, skipping init..."
fi
# Apply any pending migrations (if using migrations in future)
if [ -f "/app/alembic.ini" ]; then
echo "Checking for pending migrations..."
# Uncomment when ready to use migrations:
# alembic upgrade head
fi
echo ""
echo "=== Starting Flask Application ==="
echo "Flask will be available at http://localhost:5000"
echo ""
# Execute the main application
exec "$@"

init_db.py

@@ -23,11 +23,112 @@ from alembic import command
from alembic.config import Config
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from datetime import datetime, timezone
from web.models import Base
from web.models import Base, AlertRule
from web.utils.settings import PasswordManager, SettingsManager
def init_default_alert_rules(session):
"""
Create default alert rules for Phase 5.
Args:
session: Database session
"""
print("Initializing default alert rules...")
# Check if alert rules already exist
existing_rules = session.query(AlertRule).count()
if existing_rules > 0:
print(f" Alert rules already exist ({existing_rules} rules), skipping...")
return
default_rules = [
{
'name': 'Unexpected Port Detection',
'rule_type': 'unexpected_port',
'enabled': True,
'threshold': None,
'email_enabled': False,
'webhook_enabled': False,
'severity': 'warning',
'filter_conditions': None,
'config_id': None
},
{
'name': 'Drift Detection',
'rule_type': 'drift_detection',
'enabled': True,
'threshold': None, # No threshold means alert on any drift
'email_enabled': False,
'webhook_enabled': False,
'severity': 'info',
'filter_conditions': None,
'config_id': None
},
{
'name': 'Certificate Expiry Warning',
'rule_type': 'cert_expiry',
'enabled': True,
'threshold': 30, # Alert when certs expire in 30 days
'email_enabled': False,
'webhook_enabled': False,
'severity': 'warning',
'filter_conditions': None,
'config_id': None
},
{
'name': 'Weak TLS Detection',
'rule_type': 'weak_tls',
'enabled': True,
'threshold': None,
'email_enabled': False,
'webhook_enabled': False,
'severity': 'warning',
'filter_conditions': None,
'config_id': None
},
{
'name': 'Host Down Detection',
'rule_type': 'ping_failed',
'enabled': True,
'threshold': None,
'email_enabled': False,
'webhook_enabled': False,
'severity': 'critical',
'filter_conditions': None,
'config_id': None
}
]
try:
for rule_data in default_rules:
rule = AlertRule(
name=rule_data['name'],
rule_type=rule_data['rule_type'],
enabled=rule_data['enabled'],
threshold=rule_data['threshold'],
email_enabled=rule_data['email_enabled'],
webhook_enabled=rule_data['webhook_enabled'],
severity=rule_data['severity'],
filter_conditions=rule_data['filter_conditions'],
config_id=rule_data['config_id'],
created_at=datetime.now(timezone.utc),
updated_at=datetime.now(timezone.utc)
)
session.add(rule)
print(f" ✓ Created rule: {rule.name}")
session.commit()
print(f"✓ Created {len(default_rules)} default alert rules")
except Exception as e:
print(f"✗ Failed to create default alert rules: {e}")
session.rollback()
raise
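As a quick post-seed sanity check, the rules can be listed back (a sketch reusing the same session the caller provides):
for r in session.query(AlertRule).order_by(AlertRule.id):
    print(r.name, r.rule_type, r.severity, 'on' if r.enabled else 'off')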
def init_database(db_url: str = "sqlite:///./sneakyscanner.db", run_migrations: bool = True):
"""
Initialize the database schema and settings.
@@ -78,6 +179,10 @@ def init_database(db_url: str = "sqlite:///./sneakyscanner.db", run_migrations:
settings_manager = SettingsManager(session)
settings_manager.init_defaults()
print("✓ Default settings initialized")
# Initialize default alert rules
init_default_alert_rules(session)
except Exception as e:
print(f"✗ Failed to initialize settings: {e}")
session.rollback()
@@ -164,6 +269,9 @@ Examples:
# Use custom database URL
python3 init_db.py --db-url postgresql://user:pass@localhost/sneakyscanner
# Force initialization without prompting (for Docker/scripts)
python3 init_db.py --force --password mysecret
# Verify existing database
python3 init_db.py --verify-only
"""
@@ -192,6 +300,12 @@ Examples:
help='Create tables directly instead of using migrations'
)
parser.add_argument(
'--force',
action='store_true',
help='Force initialization without prompting (for non-interactive environments)'
)
args = parser.parse_args()
# Check if database already exists
@@ -200,7 +314,7 @@ Examples:
db_path = args.db_url.replace('sqlite:///', '')
db_exists = Path(db_path).exists()
if db_exists and not args.verify_only:
if db_exists and not args.verify_only and not args.force:
response = input(f"\nDatabase already exists at {db_path}. Reinitialize? (y/N): ")
if response.lower() != 'y':
print("Aborting.")

View File

@@ -0,0 +1,120 @@
"""Add enhanced alert features for Phase 5
Revision ID: 004
Revises: 003
Create Date: 2025-11-18
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic
revision = '004'
down_revision = '003'
branch_labels = None
depends_on = None
def upgrade():
"""
Add enhancements for Phase 5 Alert Rule Engine:
- Enhanced alert_rules fields
- Enhanced alerts fields
- New webhooks table
- New webhook_delivery_log table
"""
# Enhance alert_rules table
with op.batch_alter_table('alert_rules') as batch_op:
batch_op.add_column(sa.Column('name', sa.String(255), nullable=True, comment='User-friendly rule name'))
batch_op.add_column(sa.Column('webhook_enabled', sa.Boolean(), nullable=False, server_default='0', comment='Whether to send webhooks for this rule'))
batch_op.add_column(sa.Column('severity', sa.String(20), nullable=True, comment='Alert severity level (critical, warning, info)'))
batch_op.add_column(sa.Column('filter_conditions', sa.Text(), nullable=True, comment='JSON filter conditions for the rule'))
batch_op.add_column(sa.Column('config_file', sa.String(255), nullable=True, comment='Optional: specific config file this rule applies to'))
batch_op.add_column(sa.Column('updated_at', sa.DateTime(), nullable=True, comment='Last update timestamp'))
# Enhance alerts table
with op.batch_alter_table('alerts') as batch_op:
batch_op.add_column(sa.Column('rule_id', sa.Integer(), nullable=True, comment='Associated alert rule'))
batch_op.add_column(sa.Column('webhook_sent', sa.Boolean(), nullable=False, server_default='0', comment='Whether webhook was sent'))
batch_op.add_column(sa.Column('webhook_sent_at', sa.DateTime(), nullable=True, comment='When webhook was sent'))
batch_op.add_column(sa.Column('acknowledged', sa.Boolean(), nullable=False, server_default='0', comment='Whether alert was acknowledged'))
batch_op.add_column(sa.Column('acknowledged_at', sa.DateTime(), nullable=True, comment='When alert was acknowledged'))
batch_op.add_column(sa.Column('acknowledged_by', sa.String(255), nullable=True, comment='User who acknowledged the alert'))
batch_op.create_foreign_key('fk_alerts_rule_id', 'alert_rules', ['rule_id'], ['id'])
batch_op.create_index('idx_alerts_rule_id', ['rule_id'])
batch_op.create_index('idx_alerts_acknowledged', ['acknowledged'])
# Create webhooks table
op.create_table('webhooks',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('name', sa.String(255), nullable=False, comment='Webhook name'),
sa.Column('url', sa.Text(), nullable=False, comment='Webhook URL'),
sa.Column('enabled', sa.Boolean(), nullable=False, server_default='1', comment='Whether webhook is enabled'),
sa.Column('auth_type', sa.String(20), nullable=True, comment='Authentication type: none, bearer, basic, custom'),
sa.Column('auth_token', sa.Text(), nullable=True, comment='Encrypted authentication token'),
sa.Column('custom_headers', sa.Text(), nullable=True, comment='JSON custom headers'),
sa.Column('alert_types', sa.Text(), nullable=True, comment='JSON array of alert types to trigger on'),
sa.Column('severity_filter', sa.Text(), nullable=True, comment='JSON array of severities to trigger on'),
sa.Column('timeout', sa.Integer(), nullable=True, server_default='10', comment='Request timeout in seconds'),
sa.Column('retry_count', sa.Integer(), nullable=True, server_default='3', comment='Number of retry attempts'),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('updated_at', sa.DateTime(), nullable=False),
sa.PrimaryKeyConstraint('id')
)
# Create webhook_delivery_log table
op.create_table('webhook_delivery_log',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('webhook_id', sa.Integer(), nullable=False, comment='Associated webhook'),
sa.Column('alert_id', sa.Integer(), nullable=False, comment='Associated alert'),
sa.Column('status', sa.String(20), nullable=True, comment='Delivery status: success, failed, retrying'),
sa.Column('response_code', sa.Integer(), nullable=True, comment='HTTP response code'),
sa.Column('response_body', sa.Text(), nullable=True, comment='Response body from webhook'),
sa.Column('error_message', sa.Text(), nullable=True, comment='Error message if failed'),
sa.Column('attempt_number', sa.Integer(), nullable=True, comment='Which attempt this was'),
sa.Column('delivered_at', sa.DateTime(), nullable=False, comment='Delivery timestamp'),
sa.ForeignKeyConstraint(['webhook_id'], ['webhooks.id'], ),
sa.ForeignKeyConstraint(['alert_id'], ['alerts.id'], ),
sa.PrimaryKeyConstraint('id')
)
# Create indexes for webhook_delivery_log
op.create_index('idx_webhook_delivery_alert_id', 'webhook_delivery_log', ['alert_id'])
op.create_index('idx_webhook_delivery_webhook_id', 'webhook_delivery_log', ['webhook_id'])
op.create_index('idx_webhook_delivery_status', 'webhook_delivery_log', ['status'])
def downgrade():
"""Remove Phase 5 alert enhancements."""
# Drop webhook_delivery_log table and its indexes
op.drop_index('idx_webhook_delivery_status', table_name='webhook_delivery_log')
op.drop_index('idx_webhook_delivery_webhook_id', table_name='webhook_delivery_log')
op.drop_index('idx_webhook_delivery_alert_id', table_name='webhook_delivery_log')
op.drop_table('webhook_delivery_log')
# Drop webhooks table
op.drop_table('webhooks')
# Remove enhancements from alerts table
with op.batch_alter_table('alerts') as batch_op:
batch_op.drop_index('idx_alerts_acknowledged')
batch_op.drop_index('idx_alerts_rule_id')
batch_op.drop_constraint('fk_alerts_rule_id', type_='foreignkey')
batch_op.drop_column('acknowledged_by')
batch_op.drop_column('acknowledged_at')
batch_op.drop_column('acknowledged')
batch_op.drop_column('webhook_sent_at')
batch_op.drop_column('webhook_sent')
batch_op.drop_column('rule_id')
# Remove enhancements from alert_rules table
with op.batch_alter_table('alert_rules') as batch_op:
batch_op.drop_column('updated_at')
batch_op.drop_column('config_file')
batch_op.drop_column('filter_conditions')
batch_op.drop_column('severity')
batch_op.drop_column('webhook_enabled')
batch_op.drop_column('name')
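For orientation, a sketch of the kind of row the new webhooks table is meant to hold (values hypothetical; JSON-in-Text columns follow the column comments above):
webhook = {
    'name': 'Ops channel',
    'url': 'https://hooks.example.com/sneakyscanner',
    'enabled': True,
    'auth_type': 'bearer',
    'alert_types': '["cert_expiry", "unexpected_port"]',  # JSON array stored as text
    'severity_filter': '["critical", "warning"]',
    'timeout': 10,      # seconds, per server_default
    'retry_count': 3,   # attempts, per server_default
}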

View File

@@ -0,0 +1,83 @@
"""Add webhook template support
Revision ID: 005
Revises: 004
Create Date: 2025-11-18
"""
from alembic import op
import sqlalchemy as sa
import json
# revision identifiers, used by Alembic
revision = '005'
down_revision = '004'
branch_labels = None
depends_on = None
# Default template that matches the current JSON payload structure
DEFAULT_TEMPLATE = """{
"event": "alert.created",
"alert": {
"id": {{ alert.id }},
"type": "{{ alert.type }}",
"severity": "{{ alert.severity }}",
"message": "{{ alert.message }}",
{% if alert.ip_address %}"ip_address": "{{ alert.ip_address }}",{% endif %}
{% if alert.port %}"port": {{ alert.port }},{% endif %}
"acknowledged": {{ alert.acknowledged|lower }},
"created_at": "{{ alert.created_at.isoformat() }}"
},
"scan": {
"id": {{ scan.id }},
"title": "{{ scan.title }}",
"timestamp": "{{ scan.timestamp.isoformat() }}",
"status": "{{ scan.status }}"
},
"rule": {
"id": {{ rule.id }},
"name": "{{ rule.name }}",
"type": "{{ rule.type }}",
"threshold": {{ rule.threshold if rule.threshold else 'null' }}
}
}"""
def upgrade():
"""
Add webhook template fields:
- template: Jinja2 template for payload
- template_format: Output format (json, text)
- content_type_override: Optional custom Content-Type
"""
# Add new columns to webhooks table
with op.batch_alter_table('webhooks') as batch_op:
batch_op.add_column(sa.Column('template', sa.Text(), nullable=True, comment='Jinja2 template for webhook payload'))
batch_op.add_column(sa.Column('template_format', sa.String(20), nullable=True, server_default='json', comment='Template output format: json, text'))
batch_op.add_column(sa.Column('content_type_override', sa.String(100), nullable=True, comment='Optional custom Content-Type header'))
# Populate existing webhooks with default template
# This ensures backward compatibility by converting existing webhooks to use the
# same JSON structure they're currently sending
connection = op.get_bind()
connection.execute(
sa.text("""
UPDATE webhooks
SET template = :template,
template_format = 'json'
WHERE template IS NULL
"""),
{"template": DEFAULT_TEMPLATE}
)
def downgrade():
"""Remove webhook template fields."""
with op.batch_alter_table('webhooks') as batch_op:
batch_op.drop_column('content_type_override')
batch_op.drop_column('template_format')
batch_op.drop_column('template')

View File

@@ -0,0 +1,161 @@
"""Add reusable site definitions
Revision ID: 006
Revises: 005
Create Date: 2025-11-19
This migration introduces reusable site definitions that can be shared across
multiple scans. Sites are defined once with CIDR ranges and can be referenced
in multiple scan configurations.
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy import text
# revision identifiers, used by Alembic
revision = '006'
down_revision = '005'
branch_labels = None
depends_on = None
def upgrade():
"""
Create new site tables and migrate existing scan_sites data to the new structure.
"""
# Create sites table (master site definitions)
op.create_table('sites',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('name', sa.String(length=255), nullable=False, comment='Unique site name'),
sa.Column('description', sa.Text(), nullable=True, comment='Site description'),
sa.Column('created_at', sa.DateTime(), nullable=False, comment='Site creation time'),
sa.Column('updated_at', sa.DateTime(), nullable=False, comment='Last modification time'),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('name', name='uix_site_name')
)
op.create_index(op.f('ix_sites_name'), 'sites', ['name'], unique=True)
# Create site_cidrs table (CIDR ranges for each site)
op.create_table('site_cidrs',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('site_id', sa.Integer(), nullable=False, comment='FK to sites'),
sa.Column('cidr', sa.String(length=45), nullable=False, comment='CIDR notation (e.g., 10.0.0.0/24)'),
sa.Column('expected_ping', sa.Boolean(), nullable=True, comment='Expected ping response for this CIDR'),
sa.Column('expected_tcp_ports', sa.Text(), nullable=True, comment='JSON array of expected TCP ports'),
sa.Column('expected_udp_ports', sa.Text(), nullable=True, comment='JSON array of expected UDP ports'),
sa.Column('created_at', sa.DateTime(), nullable=False, comment='CIDR creation time'),
sa.ForeignKeyConstraint(['site_id'], ['sites.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('site_id', 'cidr', name='uix_site_cidr')
)
op.create_index(op.f('ix_site_cidrs_site_id'), 'site_cidrs', ['site_id'], unique=False)
# Create site_ips table (IP-level overrides within CIDRs)
op.create_table('site_ips',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('site_cidr_id', sa.Integer(), nullable=False, comment='FK to site_cidrs'),
sa.Column('ip_address', sa.String(length=45), nullable=False, comment='IPv4 or IPv6 address'),
sa.Column('expected_ping', sa.Boolean(), nullable=True, comment='Override ping expectation for this IP'),
sa.Column('expected_tcp_ports', sa.Text(), nullable=True, comment='JSON array of expected TCP ports (overrides CIDR)'),
sa.Column('expected_udp_ports', sa.Text(), nullable=True, comment='JSON array of expected UDP ports (overrides CIDR)'),
sa.Column('created_at', sa.DateTime(), nullable=False, comment='IP override creation time'),
sa.ForeignKeyConstraint(['site_cidr_id'], ['site_cidrs.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('site_cidr_id', 'ip_address', name='uix_site_cidr_ip')
)
op.create_index(op.f('ix_site_ips_site_cidr_id'), 'site_ips', ['site_cidr_id'], unique=False)
# Create scan_site_associations table (many-to-many between scans and sites)
op.create_table('scan_site_associations',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('scan_id', sa.Integer(), nullable=False, comment='FK to scans'),
sa.Column('site_id', sa.Integer(), nullable=False, comment='FK to sites'),
sa.Column('created_at', sa.DateTime(), nullable=False, comment='Association creation time'),
sa.ForeignKeyConstraint(['scan_id'], ['scans.id'], ),
sa.ForeignKeyConstraint(['site_id'], ['sites.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('scan_id', 'site_id', name='uix_scan_site')
)
op.create_index(op.f('ix_scan_site_associations_scan_id'), 'scan_site_associations', ['scan_id'], unique=False)
op.create_index(op.f('ix_scan_site_associations_site_id'), 'scan_site_associations', ['site_id'], unique=False)
# Migrate existing data
connection = op.get_bind()
# 1. Extract unique site names from existing scan_sites and create master Site records
# This groups all historical scan sites by name and creates one master site per unique name
connection.execute(text("""
INSERT INTO sites (name, description, created_at, updated_at)
SELECT DISTINCT
site_name,
'Migrated from scan_sites' as description,
datetime('now') as created_at,
datetime('now') as updated_at
FROM scan_sites
WHERE site_name NOT IN (SELECT name FROM sites)
"""))
# 2. Create scan_site_associations linking scans to their sites
# This maintains the historical relationship between scans and the sites they used
connection.execute(text("""
INSERT INTO scan_site_associations (scan_id, site_id, created_at)
SELECT DISTINCT
ss.scan_id,
s.id as site_id,
datetime('now') as created_at
FROM scan_sites ss
INNER JOIN sites s ON s.name = ss.site_name
WHERE NOT EXISTS (
SELECT 1 FROM scan_site_associations ssa
WHERE ssa.scan_id = ss.scan_id AND ssa.site_id = s.id
)
"""))
# 3. For each migrated site, create a CIDR entry from the IPs in scan_ips
# Since historical data has individual IPs, we'll create /32 CIDRs for each unique IP
# This preserves the exact IP addresses while fitting them into the new CIDR-based model
connection.execute(text("""
INSERT INTO site_cidrs (site_id, cidr, expected_ping, expected_tcp_ports, expected_udp_ports, created_at)
SELECT DISTINCT
s.id as site_id,
si.ip_address || '/32' as cidr,
si.ping_expected,
'[]' as expected_tcp_ports,
'[]' as expected_udp_ports,
datetime('now') as created_at
FROM scan_ips si
INNER JOIN scan_sites ss ON ss.id = si.site_id
INNER JOIN sites s ON s.name = ss.site_name
WHERE NOT EXISTS (
SELECT 1 FROM site_cidrs sc
WHERE sc.site_id = s.id AND sc.cidr = si.ip_address || '/32'
)
GROUP BY s.id, si.ip_address, si.ping_expected
"""))
print("✓ Migration complete: Reusable sites created from historical scan data")
print(f" - Created {connection.execute(text('SELECT COUNT(*) FROM sites')).scalar()} master site(s)")
print(f" - Created {connection.execute(text('SELECT COUNT(*) FROM site_cidrs')).scalar()} CIDR range(s)")
print(f" - Created {connection.execute(text('SELECT COUNT(*) FROM scan_site_associations')).scalar()} scan-site association(s)")
def downgrade():
"""Remove reusable site tables."""
# Drop tables in reverse order of creation (respecting foreign keys)
op.drop_index(op.f('ix_scan_site_associations_site_id'), table_name='scan_site_associations')
op.drop_index(op.f('ix_scan_site_associations_scan_id'), table_name='scan_site_associations')
op.drop_table('scan_site_associations')
op.drop_index(op.f('ix_site_ips_site_cidr_id'), table_name='site_ips')
op.drop_table('site_ips')
op.drop_index(op.f('ix_site_cidrs_site_id'), table_name='site_cidrs')
op.drop_table('site_cidrs')
op.drop_index(op.f('ix_sites_name'), table_name='sites')
op.drop_table('sites')
print("✓ Downgrade complete: Reusable site tables removed")

View File

@@ -0,0 +1,102 @@
"""Add database-stored scan configurations
Revision ID: 007
Revises: 006
Create Date: 2025-11-19
This migration introduces database-stored scan configurations to replace YAML
config files. Configs reference sites from the sites table, enabling visual
config builder and better data management.
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy import text
# revision identifiers, used by Alembic
revision = '007'
down_revision = '006'
branch_labels = None
depends_on = None
def upgrade():
"""
Create scan_configs and scan_config_sites tables.
Add config_id foreign keys to scans and schedules tables.
"""
# Create scan_configs table
op.create_table('scan_configs',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('title', sa.String(length=255), nullable=False, comment='Configuration title'),
sa.Column('description', sa.Text(), nullable=True, comment='Configuration description'),
sa.Column('created_at', sa.DateTime(), nullable=False, comment='Config creation time'),
sa.Column('updated_at', sa.DateTime(), nullable=False, comment='Last modification time'),
sa.PrimaryKeyConstraint('id')
)
# Create scan_config_sites table (many-to-many between configs and sites)
op.create_table('scan_config_sites',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('config_id', sa.Integer(), nullable=False, comment='FK to scan_configs'),
sa.Column('site_id', sa.Integer(), nullable=False, comment='FK to sites'),
sa.Column('created_at', sa.DateTime(), nullable=False, comment='Association creation time'),
sa.ForeignKeyConstraint(['config_id'], ['scan_configs.id'], ),
sa.ForeignKeyConstraint(['site_id'], ['sites.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('config_id', 'site_id', name='uix_config_site')
)
op.create_index(op.f('ix_scan_config_sites_config_id'), 'scan_config_sites', ['config_id'], unique=False)
op.create_index(op.f('ix_scan_config_sites_site_id'), 'scan_config_sites', ['site_id'], unique=False)
# Add config_id to scans table
with op.batch_alter_table('scans', schema=None) as batch_op:
batch_op.add_column(sa.Column('config_id', sa.Integer(), nullable=True, comment='FK to scan_configs table'))
batch_op.create_index('ix_scans_config_id', ['config_id'], unique=False)
batch_op.create_foreign_key('fk_scans_config_id', 'scan_configs', ['config_id'], ['id'])
# config_file stays in place for now but is deprecated (the column is already nullable)
# Add config_id to schedules table and make config_file nullable
with op.batch_alter_table('schedules', schema=None) as batch_op:
batch_op.add_column(sa.Column('config_id', sa.Integer(), nullable=True, comment='FK to scan_configs table'))
batch_op.create_index('ix_schedules_config_id', ['config_id'], unique=False)
batch_op.create_foreign_key('fk_schedules_config_id', 'scan_configs', ['config_id'], ['id'])
# Make config_file nullable (it was required before)
batch_op.alter_column('config_file', existing_type=sa.Text(), nullable=True)
connection = op.get_bind()
print("✓ Migration complete: Scan configs tables created")
print(" - Created scan_configs table for database-stored configurations")
print(" - Created scan_config_sites association table")
print(" - Added config_id to scans table")
print(" - Added config_id to schedules table")
print(" - Existing YAML configs remain in config_file column for backward compatibility")
def downgrade():
"""Remove scan config tables and columns."""
# Remove foreign keys and columns from schedules
with op.batch_alter_table('schedules', schema=None) as batch_op:
batch_op.drop_constraint('fk_schedules_config_id', type_='foreignkey')
batch_op.drop_index('ix_schedules_config_id')
batch_op.drop_column('config_id')
# Restore config_file as required
batch_op.alter_column('config_file', existing_type=sa.Text(), nullable=False)
# Remove foreign keys and columns from scans
with op.batch_alter_table('scans', schema=None) as batch_op:
batch_op.drop_constraint('fk_scans_config_id', type_='foreignkey')
batch_op.drop_index('ix_scans_config_id')
batch_op.drop_column('config_id')
# Drop tables in reverse order
op.drop_index(op.f('ix_scan_config_sites_site_id'), table_name='scan_config_sites')
op.drop_index(op.f('ix_scan_config_sites_config_id'), table_name='scan_config_sites')
op.drop_table('scan_config_sites')
op.drop_table('scan_configs')
print("✓ Downgrade complete: Scan config tables and columns removed")

View File

@@ -0,0 +1,270 @@
"""Expand CIDRs to individual IPs with per-IP settings
Revision ID: 008
Revises: 007
Create Date: 2025-11-19
This migration changes the site architecture to automatically expand CIDRs into
individual IPs in the database. Each IP has its own port and ping settings.
Changes:
- Add site_id to site_ips (direct link to sites, support standalone IPs)
- Make site_cidr_id nullable (IPs can exist without a CIDR parent)
- Remove settings from site_cidrs (settings now only at IP level)
- Add unique constraint: no duplicate IPs within a site
- Expand existing CIDRs to individual IPs
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy import text
import ipaddress
# revision identifiers, used by Alembic
revision = '008'
down_revision = '007'
branch_labels = None
depends_on = None
def upgrade():
"""
Modify schema to support per-IP settings and auto-expand CIDRs.
"""
connection = op.get_bind()
# Check if site_id column already exists
inspector = sa.inspect(connection)
site_ips_columns = [col['name'] for col in inspector.get_columns('site_ips')]
site_cidrs_columns = [col['name'] for col in inspector.get_columns('site_cidrs')]
# Step 1: Add site_id column to site_ips (will be populated from site_cidr_id)
if 'site_id' not in site_ips_columns:
print("Adding site_id column to site_ips...")
op.add_column('site_ips', sa.Column('site_id', sa.Integer(), nullable=True, comment='FK to sites (direct link)'))
else:
print("site_id column already exists in site_ips, skipping...")
# Step 2: Populate site_id from site_cidr_id (before we make it nullable)
print("Populating site_id from existing site_cidr relationships...")
connection.execute(text("""
UPDATE site_ips
SET site_id = (
SELECT site_id
FROM site_cidrs
WHERE site_cidrs.id = site_ips.site_cidr_id
)
WHERE site_cidr_id IS NOT NULL
"""))
# Step 3: Make site_id NOT NULL and add foreign key
# Check if foreign key exists before creating
try:
op.alter_column('site_ips', 'site_id', nullable=False)
print("Made site_id NOT NULL")
except Exception as e:
print(f"site_id already NOT NULL or error: {e}")
# Check if foreign key exists
try:
op.create_foreign_key('fk_site_ips_site_id', 'site_ips', 'sites', ['site_id'], ['id'])
print("Created foreign key fk_site_ips_site_id")
except Exception as e:
print(f"Foreign key already exists or error: {e}")
# Check if index exists
try:
op.create_index(op.f('ix_site_ips_site_id'), 'site_ips', ['site_id'], unique=False)
print("Created index ix_site_ips_site_id")
except Exception as e:
print(f"Index already exists or error: {e}")
# Step 4: Make site_cidr_id nullable (for standalone IPs)
try:
op.alter_column('site_ips', 'site_cidr_id', nullable=True)
print("Made site_cidr_id nullable")
except Exception as e:
print(f"site_cidr_id already nullable or error: {e}")
# Step 5: Drop old unique constraint and create new one (site_id, ip_address)
# This prevents duplicate IPs within a site (across all CIDRs and standalone)
try:
op.drop_constraint('uix_site_cidr_ip', 'site_ips', type_='unique')
print("Dropped old constraint uix_site_cidr_ip")
except Exception as e:
print(f"Constraint already dropped or doesn't exist: {e}")
try:
op.create_unique_constraint('uix_site_ip_address', 'site_ips', ['site_id', 'ip_address'])
print("Created new constraint uix_site_ip_address")
except Exception as e:
print(f"Constraint already exists or error: {e}")
# Step 6: Expand existing CIDRs to individual IPs
print("Expanding existing CIDRs to individual IPs...")
# Get all existing CIDRs
cidrs = connection.execute(text("""
SELECT id, site_id, cidr, expected_ping, expected_tcp_ports, expected_udp_ports
FROM site_cidrs
""")).fetchall()
expanded_count = 0
skipped_count = 0
for cidr_row in cidrs:
cidr_id, site_id, cidr_str, expected_ping, expected_tcp_ports, expected_udp_ports = cidr_row
try:
# Parse CIDR
network = ipaddress.ip_network(cidr_str, strict=False)
# Check size - skip if too large (> /24 for IPv4, > /64 for IPv6)
if isinstance(network, ipaddress.IPv4Network) and network.prefixlen < 24:
print(f" ⚠ Skipping large CIDR {cidr_str} (>{network.num_addresses} IPs)")
skipped_count += 1
continue
elif isinstance(network, ipaddress.IPv6Network) and network.prefixlen < 64:
print(f" ⚠ Skipping large CIDR {cidr_str} (>{network.num_addresses} IPs)")
skipped_count += 1
continue
# Expand to individual IPs
for ip in (network.hosts() if network.num_addresses > 2 else [network.network_address]):
ip_str = str(ip)
# Check if this IP already exists (from previous IP overrides)
existing = connection.execute(text("""
SELECT id FROM site_ips
WHERE site_cidr_id = :cidr_id AND ip_address = :ip_address
"""), {'cidr_id': cidr_id, 'ip_address': ip_str}).fetchone()
if not existing:
# Insert new IP with settings from CIDR
connection.execute(text("""
INSERT INTO site_ips (
site_id, site_cidr_id, ip_address,
expected_ping, expected_tcp_ports, expected_udp_ports,
created_at
)
VALUES (
:site_id, :cidr_id, :ip_address,
:expected_ping, :expected_tcp_ports, :expected_udp_ports,
datetime('now')
)
"""), {
'site_id': site_id,
'cidr_id': cidr_id,
'ip_address': ip_str,
'expected_ping': expected_ping,
'expected_tcp_ports': expected_tcp_ports,
'expected_udp_ports': expected_udp_ports
})
expanded_count += 1
except Exception as e:
print(f" ✗ Error expanding CIDR {cidr_str}: {e}")
skipped_count += 1
continue
print(f" ✓ Expanded {expanded_count} IPs from CIDRs")
if skipped_count > 0:
print(f" ⚠ Skipped {skipped_count} CIDRs (too large or errors)")
# Step 7: Remove settings columns from site_cidrs (now only at IP level)
print("Removing settings columns from site_cidrs...")
# Re-inspect with a fresh inspector (reflection results are cached per inspector)
inspector = sa.inspect(connection)
site_cidrs_columns = [col['name'] for col in inspector.get_columns('site_cidrs')]
if 'expected_ping' in site_cidrs_columns:
try:
op.drop_column('site_cidrs', 'expected_ping')
print("Dropped expected_ping from site_cidrs")
except Exception as e:
print(f"Error dropping expected_ping: {e}")
else:
print("expected_ping already dropped from site_cidrs")
if 'expected_tcp_ports' in site_cidrs_columns:
try:
op.drop_column('site_cidrs', 'expected_tcp_ports')
print("Dropped expected_tcp_ports from site_cidrs")
except Exception as e:
print(f"Error dropping expected_tcp_ports: {e}")
else:
print("expected_tcp_ports already dropped from site_cidrs")
if 'expected_udp_ports' in site_cidrs_columns:
try:
op.drop_column('site_cidrs', 'expected_udp_ports')
print("Dropped expected_udp_ports from site_cidrs")
except Exception as e:
print(f"Error dropping expected_udp_ports: {e}")
else:
print("expected_udp_ports already dropped from site_cidrs")
# Print summary
total_sites = connection.execute(text('SELECT COUNT(*) FROM sites')).scalar()
total_cidrs = connection.execute(text('SELECT COUNT(*) FROM site_cidrs')).scalar()
total_ips = connection.execute(text('SELECT COUNT(*) FROM site_ips')).scalar()
print("\n✓ Migration 008 complete: CIDRs expanded to individual IPs")
print(f" - Total sites: {total_sites}")
print(f" - Total CIDRs: {total_cidrs}")
print(f" - Total IPs: {total_ips}")
def downgrade():
"""
Revert schema changes (restore CIDR-level settings).
Note: This will lose per-IP granularity!
"""
connection = op.get_bind()
print("Rolling back to CIDR-level settings...")
# Step 1: Add settings columns back to site_cidrs
op.add_column('site_cidrs', sa.Column('expected_ping', sa.Boolean(), nullable=True))
op.add_column('site_cidrs', sa.Column('expected_tcp_ports', sa.Text(), nullable=True))
op.add_column('site_cidrs', sa.Column('expected_udp_ports', sa.Text(), nullable=True))
# Step 2: Populate CIDR settings from first IP in each CIDR (approximation)
connection.execute(text("""
UPDATE site_cidrs
SET
expected_ping = (
SELECT expected_ping FROM site_ips
WHERE site_ips.site_cidr_id = site_cidrs.id
LIMIT 1
),
expected_tcp_ports = (
SELECT expected_tcp_ports FROM site_ips
WHERE site_ips.site_cidr_id = site_cidrs.id
LIMIT 1
),
expected_udp_ports = (
SELECT expected_udp_ports FROM site_ips
WHERE site_ips.site_cidr_id = site_cidrs.id
LIMIT 1
)
"""))
# Step 3: Delete auto-expanded IPs (keep only original overrides)
# In practice, this is difficult to determine, so we'll keep all IPs
# and just remove the schema changes
# Step 4: Drop new unique constraint and restore old one
op.drop_constraint('uix_site_ip_address', 'site_ips', type_='unique')
op.create_unique_constraint('uix_site_cidr_ip', 'site_ips', ['site_cidr_id', 'ip_address'])
# Step 5: Make site_cidr_id NOT NULL again
op.alter_column('site_ips', 'site_cidr_id', nullable=False)
# Step 6: Drop site_id column and related constraints
op.drop_index(op.f('ix_site_ips_site_id'), table_name='site_ips')
op.drop_constraint('fk_site_ips_site_id', 'site_ips', type_='foreignkey')
op.drop_column('site_ips', 'site_id')
print("✓ Downgrade complete: Reverted to CIDR-level settings")

View File

@@ -0,0 +1,210 @@
"""Remove CIDR table - make sites IP-only
Revision ID: 009
Revises: 008
Create Date: 2025-11-19
This migration removes the SiteCIDR table entirely, making sites purely
IP-based. CIDRs are now only used as a convenience for bulk IP addition,
not stored as permanent entities.
Changes:
- Set all site_ips.site_cidr_id to NULL (preserve all IPs)
- Drop foreign key from site_ips to site_cidrs
- Drop site_cidrs table
- Remove site_cidr_id column from site_ips
All existing IPs are preserved. They become "standalone" IPs without
a CIDR parent.
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy import text
# revision identifiers, used by Alembic
revision = '009'
down_revision = '008'
branch_labels = None
depends_on = None
def upgrade():
"""
Remove CIDR table and make all IPs standalone.
"""
connection = op.get_bind()
inspector = sa.inspect(connection)
print("\n=== Migration 009: Remove CIDR Table ===\n")
# Get counts before migration
try:
total_cidrs = connection.execute(text('SELECT COUNT(*) FROM site_cidrs')).scalar()
total_ips = connection.execute(text('SELECT COUNT(*) FROM site_ips')).scalar()
ips_with_cidr = connection.execute(text(
'SELECT COUNT(*) FROM site_ips WHERE site_cidr_id IS NOT NULL'
)).scalar()
print(f"Before migration:")
print(f" - Total CIDRs: {total_cidrs}")
print(f" - Total IPs: {total_ips}")
print(f" - IPs linked to CIDRs: {ips_with_cidr}")
print(f" - Standalone IPs: {total_ips - ips_with_cidr}\n")
except Exception as e:
print(f"Could not get pre-migration stats: {e}\n")
# Step 1: Set all site_cidr_id to NULL (preserve all IPs as standalone)
print("Step 1: Converting all IPs to standalone (nulling CIDR associations)...")
try:
result = connection.execute(text("""
UPDATE site_ips
SET site_cidr_id = NULL
WHERE site_cidr_id IS NOT NULL
"""))
print(f" ✓ Converted {result.rowcount} IPs to standalone\n")
except Exception as e:
print(f" ⚠ Error or already done: {e}\n")
# Step 2: Drop foreign key constraint from site_ips to site_cidrs
print("Step 2: Dropping foreign key constraint from site_ips to site_cidrs...")
foreign_keys = inspector.get_foreign_keys('site_ips')
fk_to_drop = None
for fk in foreign_keys:
if fk['referred_table'] == 'site_cidrs':
fk_to_drop = fk['name']
break
if fk_to_drop:
try:
op.drop_constraint(fk_to_drop, 'site_ips', type_='foreignkey')
print(f" ✓ Dropped foreign key constraint: {fk_to_drop}\n")
except Exception as e:
print(f" ⚠ Could not drop foreign key: {e}\n")
else:
print(" ⚠ Foreign key constraint not found or already dropped\n")
# Step 3: Drop index on site_cidr_id (if exists)
print("Step 3: Dropping index on site_cidr_id...")
indexes = inspector.get_indexes('site_ips')
index_to_drop = None
for idx in indexes:
if 'site_cidr_id' in idx['column_names']:
index_to_drop = idx['name']
break
if index_to_drop:
try:
op.drop_index(index_to_drop, table_name='site_ips')
print(f" ✓ Dropped index: {index_to_drop}\n")
except Exception as e:
print(f" ⚠ Could not drop index: {e}\n")
else:
print(" ⚠ Index not found or already dropped\n")
# Step 4: Drop site_cidrs table
print("Step 4: Dropping site_cidrs table...")
tables = inspector.get_table_names()
if 'site_cidrs' in tables:
try:
op.drop_table('site_cidrs')
print(" ✓ Dropped site_cidrs table\n")
except Exception as e:
print(f" ⚠ Could not drop table: {e}\n")
else:
print(" ⚠ Table site_cidrs not found or already dropped\n")
# Step 5: Drop site_cidr_id column from site_ips
print("Step 5: Dropping site_cidr_id column from site_ips...")
site_ips_columns = [col['name'] for col in inspector.get_columns('site_ips')]
if 'site_cidr_id' in site_ips_columns:
try:
op.drop_column('site_ips', 'site_cidr_id')
print(" ✓ Dropped site_cidr_id column from site_ips\n")
except Exception as e:
print(f" ⚠ Could not drop column: {e}\n")
else:
print(" ⚠ Column site_cidr_id not found or already dropped\n")
# Get counts after migration
try:
final_ips = connection.execute(text('SELECT COUNT(*) FROM site_ips')).scalar()
total_sites = connection.execute(text('SELECT COUNT(*) FROM sites')).scalar()
print("After migration:")
print(f" - Total sites: {total_sites}")
print(f" - Total IPs (all standalone): {final_ips}")
print(f" - CIDRs: N/A (table removed)")
except Exception as e:
print(f"Could not get post-migration stats: {e}")
print("\n✓ Migration 009 complete: Sites are now IP-only")
print(" All IPs preserved as standalone. CIDRs can still be used")
print(" via the API/UI for bulk IP creation, but are not stored.\n")
def downgrade():
"""
Recreate site_cidrs table (CANNOT restore original CIDR associations).
WARNING: This downgrade creates an empty site_cidrs table structure but
cannot restore the original CIDR-to-IP associations since that data was
deleted. All IPs will remain standalone.
"""
connection = op.get_bind()
print("\n=== Downgrade 009: Recreate CIDR Table Structure ===\n")
print("⚠ WARNING: Cannot restore original CIDR associations!")
print(" The site_cidrs table structure will be recreated but will be empty.")
print(" All IPs will remain standalone. This is a PARTIAL downgrade.\n")
# Step 1: Recreate site_cidrs table (empty)
print("Step 1: Recreating site_cidrs table structure...")
try:
op.create_table(
'site_cidrs',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('site_id', sa.Integer(), nullable=False),
sa.Column('cidr', sa.String(length=45), nullable=False, comment='CIDR notation (e.g., 10.0.0.0/24)'),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.PrimaryKeyConstraint('id'),
sa.ForeignKeyConstraint(['site_id'], ['sites.id'], ),
sa.UniqueConstraint('site_id', 'cidr', name='uix_site_cidr')
)
print(" ✓ Recreated site_cidrs table (empty)\n")
except Exception as e:
print(f" ⚠ Could not create table: {e}\n")
# Step 2: Add site_cidr_id column back to site_ips (nullable)
print("Step 2: Adding site_cidr_id column back to site_ips...")
try:
op.add_column('site_ips', sa.Column('site_cidr_id', sa.Integer(), nullable=True, comment='FK to site_cidrs (optional, for grouping)'))
print(" ✓ Added site_cidr_id column (nullable)\n")
except Exception as e:
print(f" ⚠ Could not add column: {e}\n")
# Step 3: Add foreign key constraint
print("Step 3: Adding foreign key constraint...")
try:
op.create_foreign_key('fk_site_ips_site_cidr_id', 'site_ips', 'site_cidrs', ['site_cidr_id'], ['id'])
print(" ✓ Created foreign key constraint\n")
except Exception as e:
print(f" ⚠ Could not create foreign key: {e}\n")
# Step 4: Add index on site_cidr_id
print("Step 4: Adding index on site_cidr_id...")
try:
op.create_index('ix_site_ips_site_cidr_id', 'site_ips', ['site_cidr_id'], unique=False)
print(" ✓ Created index on site_cidr_id\n")
except Exception as e:
print(f" ⚠ Could not create index: {e}\n")
print("✓ Downgrade complete: CIDR table structure restored (but empty)")
print(" All IPs remain standalone. You would need to manually recreate")
print(" CIDR records and associate IPs with them.\n")

View File

@@ -0,0 +1,53 @@
"""Add config_id to alert_rules table
Revision ID: 010
Revises: 009
Create Date: 2025-11-19
This migration adds config_id foreign key to alert_rules table to replace
the config_file column, completing the migration from file-based to
database-based configurations.
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic
revision = '010'
down_revision = '009'
branch_labels = None
depends_on = None
def upgrade():
"""
Add config_id to alert_rules table and remove config_file.
"""
with op.batch_alter_table('alert_rules', schema=None) as batch_op:
# Add config_id column with foreign key
batch_op.add_column(sa.Column('config_id', sa.Integer(), nullable=True, comment='FK to scan_configs table'))
batch_op.create_index('ix_alert_rules_config_id', ['config_id'], unique=False)
batch_op.create_foreign_key('fk_alert_rules_config_id', 'scan_configs', ['config_id'], ['id'])
# Remove the old config_file column
batch_op.drop_column('config_file')
print("✓ Migration complete: AlertRule now uses config_id")
print(" - Added config_id foreign key to alert_rules table")
print(" - Removed deprecated config_file column")
def downgrade():
"""Remove config_id and restore config_file on alert_rules."""
with op.batch_alter_table('alert_rules', schema=None) as batch_op:
# Remove foreign key and config_id column
batch_op.drop_constraint('fk_alert_rules_config_id', type_='foreignkey')
batch_op.drop_index('ix_alert_rules_config_id')
batch_op.drop_column('config_id')
# Restore config_file column
batch_op.add_column(sa.Column('config_file', sa.String(255), nullable=True, comment='Optional: specific config file this rule applies to'))
print("✓ Downgrade complete: AlertRule config_id removed, config_file restored")

View File

@@ -0,0 +1,86 @@
"""Drop deprecated config_file columns
Revision ID: 011
Revises: 010
Create Date: 2025-11-19
This migration removes the deprecated config_file columns from scans and schedules
tables. All functionality now uses config_id to reference database-stored configs.
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic
revision = '011'
down_revision = '010'
branch_labels = None
depends_on = None
def upgrade():
"""
Drop config_file columns from scans and schedules tables.
Prerequisites:
- All scans must have config_id set
- All schedules must have config_id set
- Code must be updated to no longer reference config_file
"""
connection = op.get_bind()
# Check for any records missing config_id
result = connection.execute(sa.text(
"SELECT COUNT(*) FROM scans WHERE config_id IS NULL"
))
scans_without_config = result.scalar()
result = connection.execute(sa.text(
"SELECT COUNT(*) FROM schedules WHERE config_id IS NULL"
))
schedules_without_config = result.scalar()
if scans_without_config > 0:
print(f"WARNING: {scans_without_config} scans have NULL config_id")
print(" These scans will lose their config reference after migration")
if schedules_without_config > 0:
raise Exception(
f"Cannot proceed: {schedules_without_config} schedules have NULL config_id. "
"Please set config_id for all schedules before running this migration."
)
# Drop config_file from scans table
with op.batch_alter_table('scans', schema=None) as batch_op:
batch_op.drop_column('config_file')
# Drop config_file from schedules table
with op.batch_alter_table('schedules', schema=None) as batch_op:
batch_op.drop_column('config_file')
print("✓ Migration complete: Dropped config_file columns")
print(" - Removed config_file from scans table")
print(" - Removed config_file from schedules table")
print(" - All references should now use config_id")
def downgrade():
"""Re-add config_file columns (data will be lost)."""
# Add config_file back to scans
with op.batch_alter_table('scans', schema=None) as batch_op:
batch_op.add_column(
sa.Column('config_file', sa.Text(), nullable=True,
comment='Path to YAML config used (deprecated)')
)
# Add config_file back to schedules
with op.batch_alter_table('schedules', schema=None) as batch_op:
batch_op.add_column(
sa.Column('config_file', sa.Text(), nullable=True,
comment='Path to YAML config (deprecated)')
)
print("✓ Downgrade complete: Re-added config_file columns")
print(" WARNING: config_file values are lost and will be NULL")

View File

@@ -0,0 +1,58 @@
"""Add scan progress tracking
Revision ID: 012
Revises: 011
Create Date: 2024-01-01 00:00:00.000000
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '012'
down_revision = '011'
branch_labels = None
depends_on = None
def upgrade():
# Add progress tracking columns to scans table
op.add_column('scans', sa.Column('current_phase', sa.String(50), nullable=True,
comment='Current scan phase: ping, tcp_scan, udp_scan, service_detection, http_analysis'))
op.add_column('scans', sa.Column('total_ips', sa.Integer(), nullable=True,
comment='Total number of IPs to scan'))
op.add_column('scans', sa.Column('completed_ips', sa.Integer(), nullable=True, default=0,
comment='Number of IPs completed in current phase'))
# Create scan_progress table for per-IP progress tracking
op.create_table(
'scan_progress',
sa.Column('id', sa.Integer(), primary_key=True, autoincrement=True),
sa.Column('scan_id', sa.Integer(), sa.ForeignKey('scans.id'), nullable=False, index=True),
sa.Column('ip_address', sa.String(45), nullable=False, comment='IP address being scanned'),
sa.Column('site_name', sa.String(255), nullable=True, comment='Site name this IP belongs to'),
sa.Column('phase', sa.String(50), nullable=False,
comment='Phase: ping, tcp_scan, udp_scan, service_detection, http_analysis'),
sa.Column('status', sa.String(20), nullable=False, default='pending',
comment='pending, in_progress, completed, failed'),
sa.Column('ping_result', sa.Boolean(), nullable=True, comment='Ping response result'),
sa.Column('tcp_ports', sa.Text(), nullable=True, comment='JSON array of discovered TCP ports'),
sa.Column('udp_ports', sa.Text(), nullable=True, comment='JSON array of discovered UDP ports'),
sa.Column('services', sa.Text(), nullable=True, comment='JSON array of detected services'),
sa.Column('created_at', sa.DateTime(), nullable=False, server_default=sa.func.now(),
comment='Entry creation time'),
sa.Column('updated_at', sa.DateTime(), nullable=False, server_default=sa.func.now(),
onupdate=sa.func.now(), comment='Last update time'),
sa.UniqueConstraint('scan_id', 'ip_address', name='uix_scan_progress_ip')
)
def downgrade():
# Drop scan_progress table
op.drop_table('scan_progress')
# Remove progress tracking columns from scans table
op.drop_column('scans', 'completed_ips')
op.drop_column('scans', 'total_ips')
op.drop_column('scans', 'current_phase')
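For context, a sketch of the per-IP writes this table is designed for (the writer below is hypothetical; only the schema above is defined by this migration):
from sqlalchemy import create_engine, text

engine = create_engine('sqlite:////app/data/sneakyscanner.db')  # assumed DB location
with engine.begin() as conn:
    conn.execute(text(
        "UPDATE scan_progress "
        "SET status = 'completed', tcp_ports = :ports, updated_at = datetime('now') "
        "WHERE scan_id = :scan_id AND ip_address = :ip"
    ), {'ports': '[22, 443]', 'scan_id': 42, 'ip': '10.0.0.5'})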

View File

@@ -12,7 +12,7 @@ alembic==1.13.0
# Authentication & Security
Flask-Login==0.6.3
bcrypt==4.1.2
cryptography==41.0.7
cryptography>=46.0.0
# API & Serialization
Flask-CORS==4.0.0
@@ -26,6 +26,9 @@ croniter==2.0.1
# Email Support (Phase 4)
Flask-Mail==0.9.1
# Webhook Support (Phase 5)
requests==2.31.0
# Configuration Management
python-dotenv==1.0.0

View File

@@ -1,5 +1,5 @@
PyYAML==6.0.1
python-libnmap==0.7.3
sslyze==6.0.0
sslyze==6.2.0
playwright==1.40.0
Jinja2==3.1.2

View File

@@ -78,7 +78,7 @@ class HTMLReportGenerator:
'title': self.report_data.get('title', 'SneakyScanner Report'),
'scan_time': self.report_data.get('scan_time'),
'scan_duration': self.report_data.get('scan_duration'),
'config_file': self.report_data.get('config_file'),
'config_id': self.report_data.get('config_id'),
'sites': self.report_data.get('sites', []),
'summary_stats': summary_stats,
'drift_alerts': drift_alerts,

View File

@@ -6,14 +6,17 @@ SneakyScanner - Masscan-based network scanner with YAML configuration
import argparse
import json
import logging
import os
import signal
import subprocess
import sys
import tempfile
import threading
import time
import zipfile
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Any
from typing import Dict, List, Any, Callable, Optional
import xml.etree.ElementTree as ET
import yaml
@@ -22,24 +25,93 @@ from libnmap.parser import NmapParser
from src.screenshot_capture import ScreenshotCapture
from src.report_generator import HTMLReportGenerator
from web.config import NMAP_HOST_TIMEOUT
# Force unbuffered output for Docker
sys.stdout.reconfigure(line_buffering=True)
sys.stderr.reconfigure(line_buffering=True)
class SneakyScanner:
"""Wrapper for masscan to perform network scans based on YAML config"""
class ScanCancelledError(Exception):
"""Raised when a scan is cancelled by the user."""
pass
def __init__(self, config_path: str, output_dir: str = "/app/output"):
self.config_path = Path(config_path)
class SneakyScanner:
"""Wrapper for masscan to perform network scans based on YAML config or database config"""
def __init__(self, config_path: str = None, config_id: int = None, config_dict: Dict = None, output_dir: str = "/app/output"):
"""
Initialize scanner with configuration.
Args:
config_path: Path to YAML config file (legacy)
config_id: Database config ID (preferred)
config_dict: Config dictionary (for direct use)
output_dir: Output directory for scan results
Note: Provide exactly one of config_path, config_id, or config_dict
"""
if sum([config_path is not None, config_id is not None, config_dict is not None]) != 1:
raise ValueError("Must provide exactly one of: config_path, config_id, or config_dict")
self.config_path = Path(config_path) if config_path else None
self.config_id = config_id
self.output_dir = Path(output_dir)
self.output_dir.mkdir(parents=True, exist_ok=True)
if config_dict:
self.config = config_dict
# Process sites: resolve references and expand CIDRs
if 'sites' in self.config:
self.config['sites'] = self._resolve_sites(self.config['sites'])
else:
self.config = self._load_config()
self.screenshot_capture = None
# Cancellation support
self._cancelled = False
self._cancel_lock = threading.Lock()
self._active_process = None
self._process_lock = threading.Lock()
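A usage sketch of the new one-of-three constructor contract (IDs and paths hypothetical):
SneakyScanner(config_id=3)                          # database config (preferred)
SneakyScanner(config_path='/app/configs/lab.yaml')  # legacy YAML file
SneakyScanner(config_dict={'title': 'Ad hoc', 'sites': []})  # pre-built dict
SneakyScanner(config_id=3, config_path='lab.yaml')  # raises ValueError: exactly one source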
def cancel(self):
"""
Cancel the running scan.
Terminates any active subprocess and sets cancellation flag.
"""
with self._cancel_lock:
self._cancelled = True
with self._process_lock:
if self._active_process and self._active_process.poll() is None:
try:
# Terminate the process group
os.killpg(os.getpgid(self._active_process.pid), signal.SIGTERM)
except (ProcessLookupError, OSError):
pass
def is_cancelled(self) -> bool:
"""Check if scan has been cancelled."""
with self._cancel_lock:
return self._cancelled
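Sketch of the intended cancellation flow from the web layer (a scan() entry point is assumed here; it is not part of this diff):
import threading

scanner = SneakyScanner(config_id=3)            # hypothetical config ID
worker = threading.Thread(target=scanner.scan)  # assumed entry point, not shown in this diff
worker.start()
# ... user clicks "Stop" ...
scanner.cancel()  # sets the flag and SIGTERMs the active process group
worker.join()     # phases check is_cancelled() and exit early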
def _load_config(self) -> Dict[str, Any]:
"""Load and validate YAML configuration"""
"""
Load and validate configuration from file or database.
Supports three formats:
1. Legacy: Sites with explicit IP lists
2. Site references: Sites referencing database-stored sites
3. Inline CIDRs: Sites with CIDR ranges
"""
# Load from database if config_id provided
if self.config_id:
return self._load_config_from_database(self.config_id)
# Load from YAML file
if not self.config_path.exists():
raise FileNotFoundError(f"Config file not found: {self.config_path}")
@@ -51,8 +123,256 @@ class SneakyScanner:
if not config.get('sites'):
raise ValueError("Config must include 'sites' field")
# Process sites: resolve references and expand CIDRs
config['sites'] = self._resolve_sites(config['sites'])
return config
def _load_config_from_database(self, config_id: int) -> Dict[str, Any]:
"""
Load configuration from database by ID.
Args:
config_id: Database config ID
Returns:
Config dictionary with expanded sites
Raises:
ValueError: If config not found or invalid
"""
try:
# Import here to avoid circular dependencies and allow scanner to work standalone
import os
import sys
# Add parent directory to path for imports
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from web.models import ScanConfig
# Create database session
db_url = os.environ.get('DATABASE_URL', 'sqlite:////app/data/sneakyscanner.db')
engine = create_engine(db_url)
Session = sessionmaker(bind=engine)
session = Session()
try:
# Load config from database
db_config = session.query(ScanConfig).filter_by(id=config_id).first()
if not db_config:
raise ValueError(f"Config with ID {config_id} not found in database")
# Build config dict with site references
config = {
'title': db_config.title,
'sites': []
}
# Add each site as a site_ref
for assoc in db_config.site_associations:
site = assoc.site
config['sites'].append({
'site_ref': site.name
})
# Process sites: resolve references and expand CIDRs
config['sites'] = self._resolve_sites(config['sites'])
return config
finally:
session.close()
except ImportError as e:
raise ValueError(f"Failed to load config from database (import error): {str(e)}")
except Exception as e:
raise ValueError(f"Failed to load config from database: {str(e)}")
def _resolve_sites(self, sites: List[Dict]) -> List[Dict]:
"""
Resolve site references and expand CIDRs to IP lists.
Converts all site formats into the legacy format (with explicit IPs)
for compatibility with the existing scan logic.
Args:
sites: List of site definitions from config
Returns:
List of sites with expanded IP lists
"""
import ipaddress
resolved_sites = []
for site_def in sites:
# Handle site references
if 'site_ref' in site_def:
site_ref = site_def['site_ref']
# Load site from database
site_data = self._load_site_from_database(site_ref)
if site_data:
resolved_sites.append(site_data)
else:
print(f"WARNING: Site reference '{site_ref}' not found in database", file=sys.stderr)
continue
# Handle inline CIDR definitions
if 'cidrs' in site_def:
site_name = site_def.get('name', 'Unknown Site')
expanded_ips = []
for cidr_def in site_def['cidrs']:
cidr = cidr_def['cidr']
expected_ping = cidr_def.get('expected_ping', False)
expected_tcp_ports = cidr_def.get('expected_tcp_ports', [])
expected_udp_ports = cidr_def.get('expected_udp_ports', [])
# Check if there are IP-level overrides (from database sites)
ip_overrides = cidr_def.get('ip_overrides', [])
override_map = {
override['ip_address']: override
for override in ip_overrides
}
# Expand CIDR to IP list
try:
network = ipaddress.ip_network(cidr, strict=False)
ip_list = [str(ip) for ip in network.hosts()]
# On older Python versions hosts() is empty for single-address networks (e.g. /32); fall back to the address itself
if not ip_list:
ip_list = [str(network.network_address)]
# Create IP config for each IP in the CIDR
for ip_address in ip_list:
# Check if this IP has an override
if ip_address in override_map:
override = override_map[ip_address]
ip_config = {
'address': ip_address,
'expected': {
'ping': override.get('expected_ping', expected_ping),
'tcp_ports': override.get('expected_tcp_ports', expected_tcp_ports),
'udp_ports': override.get('expected_udp_ports', expected_udp_ports)
}
}
else:
# Use CIDR-level defaults
ip_config = {
'address': ip_address,
'expected': {
'ping': expected_ping,
'tcp_ports': expected_tcp_ports,
'udp_ports': expected_udp_ports
}
}
expanded_ips.append(ip_config)
except ValueError as e:
print(f"WARNING: Invalid CIDR '{cidr}': {e}", file=sys.stderr)
continue
# Add expanded site
resolved_sites.append({
'name': site_name,
'ips': expanded_ips
})
continue
# Legacy format: already has 'ips' list
if 'ips' in site_def:
resolved_sites.append(site_def)
continue
print(f"WARNING: Site definition missing required fields: {site_def}", file=sys.stderr)
return resolved_sites
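For illustration, the three site shapes _resolve_sites accepts (values hypothetical), all normalized into the legacy 'ips' form:
sites = [
    # 1. Reference to a database-stored site, resolved via _load_site_from_database
    {'site_ref': 'office-lan'},
    # 2. Inline CIDR definition, expanded host-by-host with optional ip_overrides
    {'name': 'DMZ', 'cidrs': [{'cidr': '192.0.2.0/30',
                               'expected_ping': True,
                               'expected_tcp_ports': [443],
                               'expected_udp_ports': []}]},
    # 3. Legacy format with explicit IPs, passed through unchanged
    {'name': 'Legacy', 'ips': [{'address': '198.51.100.7',
                                'expected': {'ping': False,
                                             'tcp_ports': [22],
                                             'udp_ports': []}}]},
]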
def _load_site_from_database(self, site_name: str) -> Dict[str, Any]:
"""
Load a site definition from the database.
IPs are pre-expanded in the database, so we just load them directly.
Args:
site_name: Name of the site to load
Returns:
Site definition dict with IPs, or None if not found
"""
try:
# Import database modules
import os
import sys
# Add parent directory to path if needed
parent_dir = str(Path(__file__).parent.parent)
if parent_dir not in sys.path:
sys.path.insert(0, parent_dir)
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, joinedload
from web.models import Site
# Get database URL from environment
database_url = os.environ.get('DATABASE_URL', 'sqlite:///./sneakyscanner.db')
# Create engine and session
engine = create_engine(database_url)
Session = sessionmaker(bind=engine)
session = Session()
# Query site with all IPs (CIDRs are already expanded)
site = (
session.query(Site)
.options(joinedload(Site.ips))
.filter(Site.name == site_name)
.first()
)
if not site:
session.close()
return None
# Load all IPs directly from database (already expanded)
expanded_ips = []
for ip_obj in site.ips:
# Get settings from IP (no need to merge with CIDR defaults)
expected_ping = ip_obj.expected_ping if ip_obj.expected_ping is not None else False
expected_tcp_ports = json.loads(ip_obj.expected_tcp_ports) if ip_obj.expected_tcp_ports else []
expected_udp_ports = json.loads(ip_obj.expected_udp_ports) if ip_obj.expected_udp_ports else []
ip_config = {
'address': ip_obj.ip_address,
'expected': {
'ping': expected_ping,
'tcp_ports': expected_tcp_ports,
'udp_ports': expected_udp_ports
}
}
expanded_ips.append(ip_config)
session.close()
return {
'name': site.name,
'ips': expanded_ips
}
except Exception as e:
print(f"ERROR: Failed to load site '{site_name}' from database: {e}", file=sys.stderr)
import traceback
traceback.print_exc()
return None
def _run_masscan(self, targets: List[str], ports: str, protocol: str) -> List[Dict]:
"""
Run masscan and return parsed results
@@ -98,11 +418,31 @@ class SneakyScanner:
raise ValueError(f"Invalid protocol: {protocol}")
print(f"Running: {' '.join(cmd)}", flush=True)
result = subprocess.run(cmd, capture_output=True, text=True)
# Use Popen for cancellation support
with self._process_lock:
self._active_process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
start_new_session=True
)
stdout, stderr = self._active_process.communicate()
returncode = self._active_process.returncode
with self._process_lock:
self._active_process = None
# Check if cancelled
if self.is_cancelled():
return []
print(f"Masscan {protocol.upper()} scan completed", flush=True)
if result.returncode != 0:
print(f"Masscan stderr: {result.stderr}", file=sys.stderr)
if returncode != 0:
print(f"Masscan stderr: {stderr}", file=sys.stderr)
# Parse masscan JSON output
results = []
@@ -150,11 +490,31 @@ class SneakyScanner:
]
print(f"Running: {' '.join(cmd)}", flush=True)
-result = subprocess.run(cmd, capture_output=True, text=True)
+# Use Popen for cancellation support
+with self._process_lock:
+self._active_process = subprocess.Popen(
+cmd,
+stdout=subprocess.PIPE,
+stderr=subprocess.PIPE,
+text=True,
+start_new_session=True
+)
+stdout, stderr = self._active_process.communicate()
+returncode = self._active_process.returncode
+with self._process_lock:
+self._active_process = None
+# Check if cancelled
+if self.is_cancelled():
+return {}
print(f"Masscan PING scan completed", flush=True)
-if result.returncode != 0:
-print(f"Masscan stderr: {result.stderr}", file=sys.stderr, flush=True)
+if returncode != 0:
+print(f"Masscan stderr: {stderr}", file=sys.stderr, flush=True)
# Parse results
responding_ips = set()
@@ -192,6 +552,10 @@ class SneakyScanner:
all_services = {}
for ip, ports in ip_ports.items():
# Check if cancelled before each host
if self.is_cancelled():
break
if not ports:
all_services[ip] = []
continue
@@ -213,14 +577,33 @@ class SneakyScanner:
'--version-intensity', '5', # Balanced speed/accuracy
'-p', port_list,
'-oX', xml_output, # XML output
-'--host-timeout', '5m', # Timeout per host
+'--host-timeout', NMAP_HOST_TIMEOUT, # Timeout per host
ip
]
-result = subprocess.run(cmd, capture_output=True, text=True, timeout=600)
+# Use Popen for cancellation support
+with self._process_lock:
+self._active_process = subprocess.Popen(
+cmd,
+stdout=subprocess.PIPE,
+stderr=subprocess.PIPE,
+text=True,
+start_new_session=True
+)
-if result.returncode != 0:
-print(f" Nmap warning for {ip}: {result.stderr}", file=sys.stderr, flush=True)
+stdout, stderr = self._active_process.communicate(timeout=600)
+returncode = self._active_process.returncode
+with self._process_lock:
+self._active_process = None
+# Check if cancelled
+if self.is_cancelled():
+Path(xml_output).unlink(missing_ok=True)
+break
+if returncode != 0:
+print(f" Nmap warning for {ip}: {stderr}", file=sys.stderr, flush=True)
# Parse XML output
services = self._parse_nmap_xml(xml_output)
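One caveat with the nmap hunk above: communicate(timeout=600) raises subprocess.TimeoutExpired and leaves the child running if the deadline passes. A hedged sketch of a variant that reaps the process group on timeout (a standard subprocess idiom, not code from this diff):

# Sketch: reap the process group if communicate() times out; assumes the
# Popen was created with start_new_session=True, as in the hunk above.
import os
import signal
import subprocess

def communicate_with_deadline(proc: subprocess.Popen, timeout: int = 600):
    try:
        return proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
        return proc.communicate()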
@@ -293,29 +676,57 @@ class SneakyScanner:
return services
-def _is_likely_web_service(self, service: Dict) -> bool:
+def _is_likely_web_service(self, service: Dict, ip: str = None) -> bool:
"""
-Check if a service is likely HTTP/HTTPS based on nmap detection or common web ports
+Check if a service is a web server by actually making an HTTP request
Args:
service: Service dictionary from nmap results
+ip: IP address to test (required for HTTP probe)
Returns:
-True if service appears to be web-related
+True if service responds to HTTP/HTTPS requests
"""
-# Check service name
+import requests
+import urllib3
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+# Quick check for known web service names first
web_services = ['http', 'https', 'ssl', 'http-proxy', 'https-alt',
'http-alt', 'ssl/http', 'ssl/https']
service_name = service.get('service', '').lower()
+# If no IP provided, can't do HTTP probe
+port = service.get('port')
+if not ip or not port:
+# Check just the service name if no IP - shouldn't normally get here, but just in case
+if service_name in web_services:
+return True
+return False
-# Check common non-standard web ports
-web_ports = [80, 443, 8000, 8006, 8008, 8080, 8081, 8443, 8888, 9443]
-port = service.get('port')
+# Actually try to connect - this is the definitive test
+# Try HTTPS first, then HTTP
+for protocol in ['https', 'http']:
+url = f"{protocol}://{ip}:{port}/"
+try:
+requests.get(
+url,
+timeout=3,
+verify=False,
+allow_redirects=False
+)
+# Any response at all means it's a web server
+# (including 404, 500, etc. - still a web server)
+return True
+except requests.exceptions.SSLError:
+# SSL error on HTTPS, try HTTP next
+continue
+except (requests.exceptions.ConnectionError,
+requests.exceptions.Timeout,
+requests.exceptions.RequestException):
+continue
-return port in web_ports
+return False
def _detect_http_https(self, ip: str, port: int, timeout: int = 5) -> str:
"""
@@ -503,7 +914,7 @@ class SneakyScanner:
ip_results = {}
for service in services:
-if not self._is_likely_web_service(service):
+if not self._is_likely_web_service(service, ip):
continue
port = service['port']
@@ -549,14 +960,24 @@ class SneakyScanner:
return all_results
-def scan(self) -> Dict[str, Any]:
+def scan(self, progress_callback: Optional[Callable] = None) -> Dict[str, Any]:
"""
Perform complete scan based on configuration
Args:
progress_callback: Optional callback function for progress updates.
Called with (phase, ip, data) where:
- phase: 'init', 'ping', 'tcp_scan', 'udp_scan', 'service_detection', 'http_analysis'
- ip: IP address being processed (or None for phase start)
- data: Dict with progress data (results, counts, etc.)
Returns:
Dictionary containing scan results
"""
print(f"Starting scan: {self.config['title']}", flush=True)
if self.config_id:
print(f"Config ID: {self.config_id}", flush=True)
elif self.config_path:
print(f"Config: {self.config_path}", flush=True)
# Record start time
@@ -586,17 +1007,61 @@ class SneakyScanner:
all_ips = sorted(list(all_ips))
print(f"Total IPs to scan: {len(all_ips)}", flush=True)
# Report initialization with total IP count
if progress_callback:
progress_callback('init', None, {
'total_ips': len(all_ips),
'ip_to_site': ip_to_site
})
# Perform ping scan
print(f"\n[1/5] Performing ping scan on {len(all_ips)} IPs...", flush=True)
if progress_callback:
progress_callback('ping', None, {'status': 'starting'})
ping_results = self._run_ping_scan(all_ips)
# Check for cancellation
if self.is_cancelled():
print("\nScan cancelled by user", flush=True)
raise ScanCancelledError("Scan cancelled by user")
# Report ping results
if progress_callback:
progress_callback('ping', None, {
'status': 'completed',
'results': ping_results
})
# Perform TCP scan (all ports)
print(f"\n[2/5] Performing TCP scan on {len(all_ips)} IPs (ports 0-65535)...", flush=True)
if progress_callback:
progress_callback('tcp_scan', None, {'status': 'starting'})
tcp_results = self._run_masscan(all_ips, '0-65535', 'tcp')
-# Perform UDP scan (all ports)
-print(f"\n[3/5] Performing UDP scan on {len(all_ips)} IPs (ports 0-65535)...", flush=True)
-udp_results = self._run_masscan(all_ips, '0-65535', 'udp')
+# Check for cancellation
+if self.is_cancelled():
+print("\nScan cancelled by user", flush=True)
+raise ScanCancelledError("Scan cancelled by user")
+# Perform UDP scan (if enabled)
+udp_enabled = os.environ.get('UDP_SCAN_ENABLED', 'false').lower() == 'true'
+udp_ports = os.environ.get('UDP_PORTS', '53,67,68,69,123,161,500,514,1900')
+if udp_enabled:
+print(f"\n[3/5] Performing UDP scan on {len(all_ips)} IPs (ports {udp_ports})...", flush=True)
+if progress_callback:
+progress_callback('udp_scan', None, {'status': 'starting'})
+udp_results = self._run_masscan(all_ips, udp_ports, 'udp')
+# Check for cancellation
+if self.is_cancelled():
+print("\nScan cancelled by user", flush=True)
+raise ScanCancelledError("Scan cancelled by user")
+else:
+print(f"\n[3/5] Skipping UDP scan (disabled)...", flush=True)
+if progress_callback:
+progress_callback('udp_scan', None, {'status': 'skipped'})
+udp_results = []
# Organize results by IP
results_by_ip = {}
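As an aside on the new toggle above: UDP scanning is now opt-in via environment variables. An illustrative configuration (the variable names come from the diff; the values are examples only):

# Example environment for the optional UDP scan.
# UDP_SCAN_ENABLED defaults to 'false';
# UDP_PORTS defaults to '53,67,68,69,123,161,500,514,1900'.
import os
os.environ['UDP_SCAN_ENABLED'] = 'true'
os.environ['UDP_PORTS'] = '53,123,161'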
@@ -631,20 +1096,56 @@ class SneakyScanner:
results_by_ip[ip]['actual']['tcp_ports'].sort()
results_by_ip[ip]['actual']['udp_ports'].sort()
# Report TCP/UDP scan results with discovered ports per IP
if progress_callback:
tcp_udp_results = {}
for ip in all_ips:
tcp_udp_results[ip] = {
'tcp_ports': results_by_ip[ip]['actual']['tcp_ports'],
'udp_ports': results_by_ip[ip]['actual']['udp_ports']
}
progress_callback('tcp_scan', None, {
'status': 'completed',
'results': tcp_udp_results
})
# Perform service detection on TCP ports
print(f"\n[4/5] Performing service detection on discovered TCP ports...", flush=True)
if progress_callback:
progress_callback('service_detection', None, {'status': 'starting'})
ip_ports = {ip: results_by_ip[ip]['actual']['tcp_ports'] for ip in all_ips}
service_results = self._run_nmap_service_detection(ip_ports)
# Check for cancellation
if self.is_cancelled():
print("\nScan cancelled by user", flush=True)
raise ScanCancelledError("Scan cancelled by user")
# Add service information to results
for ip, services in service_results.items():
if ip in results_by_ip:
results_by_ip[ip]['actual']['services'] = services
# Report service detection results
if progress_callback:
progress_callback('service_detection', None, {
'status': 'completed',
'results': service_results
})
# Perform HTTP/HTTPS analysis on web services
print(f"\n[5/5] Analyzing HTTP/HTTPS services and SSL/TLS configuration...", flush=True)
if progress_callback:
progress_callback('http_analysis', None, {'status': 'starting'})
http_results = self._run_http_analysis(service_results)
# Report HTTP analysis completion
if progress_callback:
progress_callback('http_analysis', None, {
'status': 'completed',
'results': http_results
})
# Merge HTTP analysis into service results
for ip, port_results in http_results.items():
if ip in results_by_ip:
@@ -662,7 +1163,7 @@ class SneakyScanner:
'title': self.config['title'],
'scan_time': datetime.utcnow().isoformat() + 'Z',
'scan_duration': scan_duration,
-'config_file': str(self.config_path),
+'config_id': self.config_id,
'sites': []
}
@@ -768,6 +1269,8 @@ class SneakyScanner:
# Preserve directory structure in ZIP
arcname = f"{screenshot_dir.name}/{screenshot_file.name}"
zipf.write(screenshot_file, arcname)
# Track screenshot directory for database storage
output_paths['screenshots'] = screenshot_dir
output_paths['zip'] = zip_path
print(f"ZIP archive saved to: {zip_path}", flush=True)

View File

@@ -490,8 +490,8 @@
<div class="header-meta">
<span>📅 <strong>Scan Time:</strong> {{ scan_time | format_date }}</span>
<span>⏱️ <strong>Duration:</strong> {{ scan_duration | format_duration }}</span>
-{% if config_file %}
-<span>📄 <strong>Config:</strong> {{ config_file }}</span>
+{% if config_id %}
+<span>📄 <strong>Config ID:</strong> {{ config_id }}</span>
{% endif %}
</div>
</div>

View File

@@ -13,7 +13,7 @@ from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from web.app import create_app
-from web.models import Base, Scan
+from web.models import Base, Scan, ScanConfig
from web.utils.settings import PasswordManager, SettingsManager
@@ -53,7 +53,7 @@ def sample_scan_report():
'title': 'Test Scan',
'scan_time': '2025-11-14T10:30:00Z',
'scan_duration': 125.5,
-'config_file': '/app/configs/test.yaml',
+'config_id': 1,
'sites': [
{
'name': 'Test Site',
@@ -199,6 +199,53 @@ def sample_invalid_config_file(tmp_path):
return str(config_file)
@pytest.fixture
def sample_db_config(db):
"""
Create a sample database config for testing.
Args:
db: Database session fixture
Returns:
ScanConfig model instance with ID
"""
import json
config_data = {
'title': 'Test Scan',
'sites': [
{
'name': 'Test Site',
'ips': [
{
'address': '192.168.1.10',
'expected': {
'ping': True,
'tcp_ports': [22, 80, 443],
'udp_ports': [53],
'services': ['ssh', 'http', 'https']
}
}
]
}
]
}
scan_config = ScanConfig(
title='Test Scan',
config_data=json.dumps(config_data),
created_at=datetime.utcnow(),
updated_at=datetime.utcnow()
)
db.add(scan_config)
db.commit()
db.refresh(scan_config)
return scan_config
@pytest.fixture(scope='function')
def app():
"""
@@ -269,7 +316,7 @@ def sample_scan(db):
scan = Scan(
timestamp=datetime.utcnow(),
status='completed',
-config_file='/app/configs/test.yaml',
+config_id=1,
title='Test Scan',
duration=125.5,
triggered_by='test',

View File

@@ -23,12 +23,12 @@ class TestBackgroundJobs:
assert app.scheduler.scheduler is not None
assert app.scheduler.scheduler.running
-def test_queue_scan_job(self, app, db, sample_config_file):
+def test_queue_scan_job(self, app, db, sample_db_config):
"""Test queuing a scan for background execution."""
# Create a scan via service
scan_service = ScanService(db)
scan_id = scan_service.trigger_scan(
-config_file=sample_config_file,
+config_id=sample_db_config.id,
triggered_by='test',
scheduler=app.scheduler
)
@@ -43,12 +43,12 @@ class TestBackgroundJobs:
assert job is not None
assert job.id == f'scan_{scan_id}'
-def test_trigger_scan_without_scheduler(self, db, sample_config_file):
+def test_trigger_scan_without_scheduler(self, db, sample_db_config):
"""Test triggering scan without scheduler logs warning."""
# Create scan without scheduler
scan_service = ScanService(db)
scan_id = scan_service.trigger_scan(
-config_file=sample_config_file,
+config_id=sample_db_config.id,
triggered_by='test',
scheduler=None # No scheduler
)
@@ -58,13 +58,13 @@ class TestBackgroundJobs:
assert scan is not None
assert scan.status == 'running'
-def test_scheduler_service_queue_scan(self, app, db, sample_config_file):
+def test_scheduler_service_queue_scan(self, app, db, sample_db_config):
"""Test SchedulerService.queue_scan directly."""
# Create scan record first
scan = Scan(
timestamp=datetime.utcnow(),
status='running',
-config_file=sample_config_file,
+config_id=sample_db_config.id,
title='Test Scan',
triggered_by='test'
)
@@ -72,27 +72,27 @@ class TestBackgroundJobs:
db.commit()
# Queue the scan
-job_id = app.scheduler.queue_scan(scan.id, sample_config_file)
+job_id = app.scheduler.queue_scan(scan.id, sample_db_config)
# Verify job was queued
assert job_id == f'scan_{scan.id}'
job = app.scheduler.scheduler.get_job(job_id)
assert job is not None
-def test_scheduler_list_jobs(self, app, db, sample_config_file):
+def test_scheduler_list_jobs(self, app, db, sample_db_config):
"""Test listing scheduled jobs."""
# Queue a few scans
for i in range(3):
scan = Scan(
timestamp=datetime.utcnow(),
status='running',
-config_file=sample_config_file,
+config_id=sample_db_config.id,
title=f'Test Scan {i}',
triggered_by='test'
)
db.add(scan)
db.commit()
-app.scheduler.queue_scan(scan.id, sample_config_file)
+app.scheduler.queue_scan(scan.id, sample_db_config)
# List jobs
jobs = app.scheduler.list_jobs()
@@ -106,20 +106,20 @@ class TestBackgroundJobs:
assert 'name' in job
assert 'trigger' in job
-def test_scheduler_get_job_status(self, app, db, sample_config_file):
+def test_scheduler_get_job_status(self, app, db, sample_db_config):
"""Test getting status of a specific job."""
# Create and queue a scan
scan = Scan(
timestamp=datetime.utcnow(),
status='running',
-config_file=sample_config_file,
+config_id=sample_db_config.id,
title='Test Scan',
triggered_by='test'
)
db.add(scan)
db.commit()
-job_id = app.scheduler.queue_scan(scan.id, sample_config_file)
+job_id = app.scheduler.queue_scan(scan.id, sample_db_config)
# Get job status
status = app.scheduler.get_job_status(job_id)
@@ -133,13 +133,13 @@ class TestBackgroundJobs:
status = app.scheduler.get_job_status('nonexistent_job_id')
assert status is None
-def test_scan_timing_fields(self, db, sample_config_file):
+def test_scan_timing_fields(self, db, sample_db_config):
"""Test that scan timing fields are properly set."""
# Create scan with started_at
scan = Scan(
timestamp=datetime.utcnow(),
status='running',
-config_file=sample_config_file,
+config_id=sample_db_config.id,
title='Test Scan',
triggered_by='test',
started_at=datetime.utcnow()
@@ -161,13 +161,13 @@ class TestBackgroundJobs:
assert scan.completed_at is not None
assert (scan.completed_at - scan.started_at).total_seconds() >= 0
-def test_scan_error_handling(self, db, sample_config_file):
+def test_scan_error_handling(self, db, sample_db_config):
"""Test that error messages are stored correctly."""
# Create failed scan
scan = Scan(
timestamp=datetime.utcnow(),
status='failed',
-config_file=sample_config_file,
+config_id=sample_db_config.id,
title='Failed Scan',
triggered_by='test',
started_at=datetime.utcnow(),
@@ -188,7 +188,7 @@ class TestBackgroundJobs:
assert status['error_message'] == 'Test error message'
@pytest.mark.skip(reason="Requires actual scanner execution - slow test")
-def test_background_scan_execution(self, app, db, sample_config_file):
+def test_background_scan_execution(self, app, db, sample_db_config):
"""
Integration test for actual background scan execution.
@@ -200,7 +200,7 @@ class TestBackgroundJobs:
# Trigger scan
scan_service = ScanService(db)
scan_id = scan_service.trigger_scan(
-config_file=sample_config_file,
+config_id=sample_db_config.id,
triggered_by='test',
scheduler=app.scheduler
)

View File

@@ -0,0 +1,483 @@
"""
Integration tests for Config API endpoints.
Tests all config API endpoints including CSV/YAML upload, listing, downloading,
and deletion with schedule protection.
"""
import pytest
import os
import tempfile
import shutil
from web.app import create_app
from web.models import Base
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
@pytest.fixture
def app():
"""Create test application"""
# Create temporary database (mkstemp avoids the race in the deprecated mktemp)
db_fd, test_db = tempfile.mkstemp(suffix='.db')
os.close(db_fd)
# Create temporary configs directory
temp_configs_dir = tempfile.mkdtemp()
app = create_app({
'TESTING': True,
'SQLALCHEMY_DATABASE_URI': f'sqlite:///{test_db}',
'SECRET_KEY': 'test-secret-key',
'WTF_CSRF_ENABLED': False,
})
# Override configs directory in ConfigService
os.environ['CONFIGS_DIR'] = temp_configs_dir
# Create tables
with app.app_context():
Base.metadata.create_all(bind=app.db_session.get_bind())
yield app
# Cleanup
os.unlink(test_db)
shutil.rmtree(temp_configs_dir)
@pytest.fixture
def client(app):
"""Create test client"""
return app.test_client()
@pytest.fixture
def auth_headers(client):
"""Get authentication headers"""
# First register and login a user
from web.auth.models import User
with client.application.app_context():
# Create test user
user = User(username='testuser')
user.set_password('testpass')
client.application.db_session.add(user)
client.application.db_session.commit()
# Login
response = client.post('/auth/login', data={
'username': 'testuser',
'password': 'testpass'
}, follow_redirects=True)
assert response.status_code == 200
# Return empty headers (session-based auth)
return {}
@pytest.fixture
def sample_csv():
"""Sample CSV content"""
return """scan_title,site_name,ip_address,ping_expected,tcp_ports,udp_ports,services
Test Scan,Web Servers,10.10.20.4,true,"22,80,443",53,"ssh,http,https"
Test Scan,Web Servers,10.10.20.5,true,22,,"ssh"
"""
@pytest.fixture
def sample_yaml():
"""Sample YAML content"""
return """title: Test Scan
sites:
- name: Web Servers
ips:
- address: 10.10.20.4
expected:
ping: true
tcp_ports: [22, 80, 443]
udp_ports: [53]
services: [ssh, http, https]
"""
class TestListConfigs:
"""Tests for GET /api/configs"""
def test_list_configs_empty(self, client, auth_headers):
"""Test listing configs when none exist"""
response = client.get('/api/configs', headers=auth_headers)
assert response.status_code == 200
data = response.get_json()
assert 'configs' in data
assert data['configs'] == []
def test_list_configs_with_files(self, client, auth_headers, app, sample_yaml):
"""Test listing configs with existing files"""
# Create a config file
temp_configs_dir = os.environ.get('CONFIGS_DIR', '/app/configs')
config_path = os.path.join(temp_configs_dir, 'test-scan.yaml')
with open(config_path, 'w') as f:
f.write(sample_yaml)
response = client.get('/api/configs', headers=auth_headers)
assert response.status_code == 200
data = response.get_json()
assert len(data['configs']) == 1
assert data['configs'][0]['filename'] == 'test-scan.yaml'
assert data['configs'][0]['title'] == 'Test Scan'
assert 'created_at' in data['configs'][0]
assert 'size_bytes' in data['configs'][0]
assert 'used_by_schedules' in data['configs'][0]
def test_list_configs_requires_auth(self, client):
"""Test that listing configs requires authentication"""
response = client.get('/api/configs')
assert response.status_code in [401, 302] # Unauthorized or redirect
class TestGetConfig:
"""Tests for GET /api/configs/<filename>"""
def test_get_config_valid(self, client, auth_headers, app, sample_yaml):
"""Test getting a valid config file"""
# Create a config file
temp_configs_dir = os.environ.get('CONFIGS_DIR', '/app/configs')
config_path = os.path.join(temp_configs_dir, 'test-scan.yaml')
with open(config_path, 'w') as f:
f.write(sample_yaml)
response = client.get('/api/configs/test-scan.yaml', headers=auth_headers)
assert response.status_code == 200
data = response.get_json()
assert data['filename'] == 'test-scan.yaml'
assert 'content' in data
assert 'parsed' in data
assert data['parsed']['title'] == 'Test Scan'
def test_get_config_not_found(self, client, auth_headers):
"""Test getting non-existent config"""
response = client.get('/api/configs/nonexistent.yaml', headers=auth_headers)
assert response.status_code == 404
data = response.get_json()
assert 'error' in data
def test_get_config_requires_auth(self, client):
"""Test that getting config requires authentication"""
response = client.get('/api/configs/test.yaml')
assert response.status_code in [401, 302]
class TestUploadCSV:
"""Tests for POST /api/configs/upload-csv"""
def test_upload_csv_valid(self, client, auth_headers, sample_csv):
"""Test uploading valid CSV"""
from io import BytesIO
data = {
'file': (BytesIO(sample_csv.encode('utf-8')), 'test.csv')
}
response = client.post('/api/configs/upload-csv', data=data,
headers=auth_headers, content_type='multipart/form-data')
assert response.status_code == 200
result = response.get_json()
assert result['success'] is True
assert 'filename' in result
assert result['filename'].endswith('.yaml')
assert 'preview' in result
def test_upload_csv_no_file(self, client, auth_headers):
"""Test uploading without file"""
response = client.post('/api/configs/upload-csv', data={},
headers=auth_headers, content_type='multipart/form-data')
assert response.status_code == 400
data = response.get_json()
assert 'error' in data
def test_upload_csv_invalid_format(self, client, auth_headers):
"""Test uploading invalid CSV"""
from io import BytesIO
invalid_csv = "not,a,valid,csv\nmissing,columns"
data = {
'file': (BytesIO(invalid_csv.encode('utf-8')), 'test.csv')
}
response = client.post('/api/configs/upload-csv', data=data,
headers=auth_headers, content_type='multipart/form-data')
assert response.status_code == 400
result = response.get_json()
assert 'error' in result
def test_upload_csv_wrong_extension(self, client, auth_headers):
"""Test uploading file with wrong extension"""
from io import BytesIO
data = {
'file': (BytesIO(b'test'), 'test.txt')
}
response = client.post('/api/configs/upload-csv', data=data,
headers=auth_headers, content_type='multipart/form-data')
assert response.status_code == 400
def test_upload_csv_duplicate_filename(self, client, auth_headers, sample_csv):
"""Test uploading CSV that generates duplicate filename"""
from io import BytesIO
data = {
'file': (BytesIO(sample_csv.encode('utf-8')), 'test.csv')
}
# Upload first time
response1 = client.post('/api/configs/upload-csv', data=data,
headers=auth_headers, content_type='multipart/form-data')
assert response1.status_code == 200
# Upload second time (should fail)
response2 = client.post('/api/configs/upload-csv', data=data,
headers=auth_headers, content_type='multipart/form-data')
assert response2.status_code == 400
def test_upload_csv_requires_auth(self, client, sample_csv):
"""Test that uploading CSV requires authentication"""
from io import BytesIO
data = {
'file': (BytesIO(sample_csv.encode('utf-8')), 'test.csv')
}
response = client.post('/api/configs/upload-csv', data=data,
content_type='multipart/form-data')
assert response.status_code in [401, 302]
class TestUploadYAML:
"""Tests for POST /api/configs/upload-yaml"""
def test_upload_yaml_valid(self, client, auth_headers, sample_yaml):
"""Test uploading valid YAML"""
from io import BytesIO
data = {
'file': (BytesIO(sample_yaml.encode('utf-8')), 'test.yaml')
}
response = client.post('/api/configs/upload-yaml', data=data,
headers=auth_headers, content_type='multipart/form-data')
assert response.status_code == 200
result = response.get_json()
assert result['success'] is True
assert 'filename' in result
def test_upload_yaml_no_file(self, client, auth_headers):
"""Test uploading without file"""
response = client.post('/api/configs/upload-yaml', data={},
headers=auth_headers, content_type='multipart/form-data')
assert response.status_code == 400
def test_upload_yaml_invalid_syntax(self, client, auth_headers):
"""Test uploading YAML with invalid syntax"""
from io import BytesIO
invalid_yaml = "invalid: yaml: syntax: ["
data = {
'file': (BytesIO(invalid_yaml.encode('utf-8')), 'test.yaml')
}
response = client.post('/api/configs/upload-yaml', data=data,
headers=auth_headers, content_type='multipart/form-data')
assert response.status_code == 400
def test_upload_yaml_missing_required_fields(self, client, auth_headers):
"""Test uploading YAML missing required fields"""
from io import BytesIO
invalid_yaml = """sites:
- name: Test
ips:
- address: 10.0.0.1
"""
data = {
'file': (BytesIO(invalid_yaml.encode('utf-8')), 'test.yaml')
}
response = client.post('/api/configs/upload-yaml', data=data,
headers=auth_headers, content_type='multipart/form-data')
assert response.status_code == 400
def test_upload_yaml_wrong_extension(self, client, auth_headers):
"""Test uploading file with wrong extension"""
from io import BytesIO
data = {
'file': (BytesIO(b'test'), 'test.txt')
}
response = client.post('/api/configs/upload-yaml', data=data,
headers=auth_headers, content_type='multipart/form-data')
assert response.status_code == 400
def test_upload_yaml_requires_auth(self, client, sample_yaml):
"""Test that uploading YAML requires authentication"""
from io import BytesIO
data = {
'file': (BytesIO(sample_yaml.encode('utf-8')), 'test.yaml')
}
response = client.post('/api/configs/upload-yaml', data=data,
content_type='multipart/form-data')
assert response.status_code in [401, 302]
class TestDownloadTemplate:
"""Tests for GET /api/configs/template"""
def test_download_template(self, client, auth_headers):
"""Test downloading CSV template"""
response = client.get('/api/configs/template', headers=auth_headers)
assert response.status_code == 200
assert response.content_type == 'text/csv; charset=utf-8'
assert b'scan_title,site_name,ip_address' in response.data
def test_download_template_requires_auth(self, client):
"""Test that downloading template requires authentication"""
response = client.get('/api/configs/template')
assert response.status_code in [401, 302]
class TestDownloadConfig:
"""Tests for GET /api/configs/<filename>/download"""
def test_download_config_valid(self, client, auth_headers, app, sample_yaml):
"""Test downloading existing config"""
# Create a config file
temp_configs_dir = os.environ.get('CONFIGS_DIR', '/app/configs')
config_path = os.path.join(temp_configs_dir, 'test-scan.yaml')
with open(config_path, 'w') as f:
f.write(sample_yaml)
response = client.get('/api/configs/test-scan.yaml/download', headers=auth_headers)
assert response.status_code == 200
assert response.content_type == 'application/x-yaml; charset=utf-8'
assert b'title: Test Scan' in response.data
def test_download_config_not_found(self, client, auth_headers):
"""Test downloading non-existent config"""
response = client.get('/api/configs/nonexistent.yaml/download', headers=auth_headers)
assert response.status_code == 404
def test_download_config_requires_auth(self, client):
"""Test that downloading config requires authentication"""
response = client.get('/api/configs/test.yaml/download')
assert response.status_code in [401, 302]
class TestDeleteConfig:
"""Tests for DELETE /api/configs/<filename>"""
def test_delete_config_valid(self, client, auth_headers, app, sample_yaml):
"""Test deleting a config file"""
# Create a config file
temp_configs_dir = os.environ.get('CONFIGS_DIR', '/app/configs')
config_path = os.path.join(temp_configs_dir, 'test-scan.yaml')
with open(config_path, 'w') as f:
f.write(sample_yaml)
response = client.delete('/api/configs/test-scan.yaml', headers=auth_headers)
assert response.status_code == 200
data = response.get_json()
assert data['success'] is True
# Verify file is deleted
assert not os.path.exists(config_path)
def test_delete_config_not_found(self, client, auth_headers):
"""Test deleting non-existent config"""
response = client.delete('/api/configs/nonexistent.yaml', headers=auth_headers)
assert response.status_code == 404
def test_delete_config_requires_auth(self, client):
"""Test that deleting config requires authentication"""
response = client.delete('/api/configs/test.yaml')
assert response.status_code in [401, 302]
class TestEndToEndWorkflow:
"""End-to-end workflow tests"""
def test_complete_csv_workflow(self, client, auth_headers, sample_csv):
"""Test complete CSV upload workflow"""
from io import BytesIO
# 1. Download template
response = client.get('/api/configs/template', headers=auth_headers)
assert response.status_code == 200
# 2. Upload CSV
data = {
'file': (BytesIO(sample_csv.encode('utf-8')), 'workflow-test.csv')
}
response = client.post('/api/configs/upload-csv', data=data,
headers=auth_headers, content_type='multipart/form-data')
assert response.status_code == 200
result = response.get_json()
filename = result['filename']
# 3. List configs (should include new one)
response = client.get('/api/configs', headers=auth_headers)
assert response.status_code == 200
configs = response.get_json()['configs']
assert any(c['filename'] == filename for c in configs)
# 4. Get config details
response = client.get(f'/api/configs/{filename}', headers=auth_headers)
assert response.status_code == 200
# 5. Download config
response = client.get(f'/api/configs/{filename}/download', headers=auth_headers)
assert response.status_code == 200
# 6. Delete config
response = client.delete(f'/api/configs/{filename}', headers=auth_headers)
assert response.status_code == 200
# 7. Verify deletion
response = client.get(f'/api/configs/{filename}', headers=auth_headers)
assert response.status_code == 404
def test_yaml_upload_workflow(self, client, auth_headers, sample_yaml):
"""Test YAML upload workflow"""
from io import BytesIO
# Upload YAML
data = {
'file': (BytesIO(sample_yaml.encode('utf-8')), 'yaml-workflow.yaml')
}
response = client.post('/api/configs/upload-yaml', data=data,
headers=auth_headers, content_type='multipart/form-data')
assert response.status_code == 200
filename = response.get_json()['filename']
# Verify it exists
response = client.get(f'/api/configs/{filename}', headers=auth_headers)
assert response.status_code == 200
# Clean up
client.delete(f'/api/configs/{filename}', headers=auth_headers)

View File

@@ -0,0 +1,545 @@
"""
Unit tests for Config Service
Tests the ConfigService class which manages scan configuration files.
"""
import pytest
import os
import yaml
import tempfile
import shutil
from web.services.config_service import ConfigService
class TestConfigService:
"""Test suite for ConfigService"""
@pytest.fixture
def temp_configs_dir(self):
"""Create a temporary directory for config files"""
temp_dir = tempfile.mkdtemp()
yield temp_dir
shutil.rmtree(temp_dir)
@pytest.fixture
def service(self, temp_configs_dir):
"""Create a ConfigService instance with temp directory"""
return ConfigService(configs_dir=temp_configs_dir)
@pytest.fixture
def sample_yaml_config(self):
"""Sample YAML config content"""
return """title: Test Scan
sites:
- name: Web Servers
ips:
- address: 10.10.20.4
expected:
ping: true
tcp_ports: [22, 80, 443]
udp_ports: [53]
services: [ssh, http, https]
"""
@pytest.fixture
def sample_csv_content(self):
"""Sample CSV content"""
return """scan_title,site_name,ip_address,ping_expected,tcp_ports,udp_ports,services
Test Scan,Web Servers,10.10.20.4,true,"22,80,443",53,"ssh,http,https"
Test Scan,Web Servers,10.10.20.5,true,22,,"ssh"
"""
def test_list_configs_empty_directory(self, service):
"""Test listing configs when directory is empty"""
configs = service.list_configs()
assert configs == []
def test_list_configs_with_files(self, service, temp_configs_dir, sample_yaml_config):
"""Test listing configs with existing files"""
# Create a config file
config_path = os.path.join(temp_configs_dir, 'test-scan.yaml')
with open(config_path, 'w') as f:
f.write(sample_yaml_config)
configs = service.list_configs()
assert len(configs) == 1
assert configs[0]['filename'] == 'test-scan.yaml'
assert configs[0]['title'] == 'Test Scan'
assert 'created_at' in configs[0]
assert 'size_bytes' in configs[0]
assert 'used_by_schedules' in configs[0]
def test_list_configs_ignores_non_yaml_files(self, service, temp_configs_dir):
"""Test that non-YAML files are ignored"""
# Create non-YAML files
with open(os.path.join(temp_configs_dir, 'test.txt'), 'w') as f:
f.write('not a yaml file')
with open(os.path.join(temp_configs_dir, 'readme.md'), 'w') as f:
f.write('# README')
configs = service.list_configs()
assert len(configs) == 0
def test_get_config_valid(self, service, temp_configs_dir, sample_yaml_config):
"""Test getting a valid config file"""
# Create a config file
config_path = os.path.join(temp_configs_dir, 'test-scan.yaml')
with open(config_path, 'w') as f:
f.write(sample_yaml_config)
result = service.get_config('test-scan.yaml')
assert result['filename'] == 'test-scan.yaml'
assert 'content' in result
assert 'parsed' in result
assert result['parsed']['title'] == 'Test Scan'
assert len(result['parsed']['sites']) == 1
def test_get_config_not_found(self, service):
"""Test getting a non-existent config"""
with pytest.raises(FileNotFoundError, match="not found"):
service.get_config('nonexistent.yaml')
def test_get_config_invalid_yaml(self, service, temp_configs_dir):
"""Test getting a config with invalid YAML syntax"""
# Create invalid YAML file
config_path = os.path.join(temp_configs_dir, 'invalid.yaml')
with open(config_path, 'w') as f:
f.write("invalid: yaml: syntax: [")
with pytest.raises(ValueError, match="Invalid YAML syntax"):
service.get_config('invalid.yaml')
def test_create_from_yaml_valid(self, service, sample_yaml_config):
"""Test creating config from valid YAML"""
filename = service.create_from_yaml('test-scan.yaml', sample_yaml_config)
assert filename == 'test-scan.yaml'
assert service.config_exists('test-scan.yaml')
# Verify content
result = service.get_config('test-scan.yaml')
assert result['parsed']['title'] == 'Test Scan'
def test_create_from_yaml_adds_extension(self, service, sample_yaml_config):
"""Test that .yaml extension is added if missing"""
filename = service.create_from_yaml('test-scan', sample_yaml_config)
assert filename == 'test-scan.yaml'
assert service.config_exists('test-scan.yaml')
def test_create_from_yaml_sanitizes_filename(self, service, sample_yaml_config):
"""Test that filename is sanitized"""
filename = service.create_from_yaml('../../../etc/test.yaml', sample_yaml_config)
# secure_filename should remove path traversal
assert '..' not in filename
assert '/' not in filename
def test_create_from_yaml_duplicate_filename(self, service, temp_configs_dir, sample_yaml_config):
"""Test creating config with duplicate filename"""
# Create first config
service.create_from_yaml('test-scan.yaml', sample_yaml_config)
# Try to create duplicate
with pytest.raises(ValueError, match="already exists"):
service.create_from_yaml('test-scan.yaml', sample_yaml_config)
def test_create_from_yaml_invalid_syntax(self, service):
"""Test creating config with invalid YAML syntax"""
invalid_yaml = "invalid: yaml: syntax: ["
with pytest.raises(ValueError, match="Invalid YAML syntax"):
service.create_from_yaml('test.yaml', invalid_yaml)
def test_create_from_yaml_invalid_structure(self, service):
"""Test creating config with invalid structure (missing title)"""
invalid_config = """sites:
- name: Test
ips:
- address: 10.0.0.1
expected:
ping: true
"""
with pytest.raises(ValueError, match="Missing required field: 'title'"):
service.create_from_yaml('test.yaml', invalid_config)
def test_create_from_csv_valid(self, service, sample_csv_content):
"""Test creating config from valid CSV"""
filename, yaml_content = service.create_from_csv(sample_csv_content)
assert filename == 'test-scan.yaml'
assert service.config_exists(filename)
# Verify YAML was created correctly
result = service.get_config(filename)
assert result['parsed']['title'] == 'Test Scan'
assert len(result['parsed']['sites']) == 1
assert len(result['parsed']['sites'][0]['ips']) == 2
def test_create_from_csv_with_suggested_filename(self, service, sample_csv_content):
"""Test creating config with suggested filename"""
filename, yaml_content = service.create_from_csv(sample_csv_content, 'custom-name.yaml')
assert filename == 'custom-name.yaml'
assert service.config_exists(filename)
def test_create_from_csv_invalid(self, service):
"""Test creating config from invalid CSV"""
invalid_csv = """scan_title,site_name,ip_address
Missing,Columns,Here
"""
with pytest.raises(ValueError, match="CSV parsing failed"):
service.create_from_csv(invalid_csv)
def test_create_from_csv_duplicate_filename(self, service, sample_csv_content):
"""Test creating CSV config with duplicate filename"""
# Create first config
service.create_from_csv(sample_csv_content)
# Try to create duplicate (same title generates same filename)
with pytest.raises(ValueError, match="already exists"):
service.create_from_csv(sample_csv_content)
def test_delete_config_valid(self, service, temp_configs_dir, sample_yaml_config):
"""Test deleting a config file"""
# Create a config file
config_path = os.path.join(temp_configs_dir, 'test-scan.yaml')
with open(config_path, 'w') as f:
f.write(sample_yaml_config)
assert service.config_exists('test-scan.yaml')
service.delete_config('test-scan.yaml')
assert not service.config_exists('test-scan.yaml')
def test_delete_config_not_found(self, service):
"""Test deleting non-existent config"""
with pytest.raises(FileNotFoundError, match="not found"):
service.delete_config('nonexistent.yaml')
def test_delete_config_used_by_schedule(self, service, temp_configs_dir, sample_yaml_config, monkeypatch):
"""Test deleting config that is used by schedules - should cascade delete schedules"""
# Create a config file
config_path = os.path.join(temp_configs_dir, 'test-scan.yaml')
with open(config_path, 'w') as f:
f.write(sample_yaml_config)
# Mock schedule service interactions
deleted_schedule_ids = []
class MockScheduleService:
def __init__(self, db):
self.db = db
def list_schedules(self, page=1, per_page=10000):
return {
'schedules': [
{
'id': 1,
'name': 'Daily Scan',
'config_file': 'test-scan.yaml',
'enabled': True
},
{
'id': 2,
'name': 'Weekly Audit',
'config_file': 'test-scan.yaml',
'enabled': False # Disabled schedule should also be deleted
}
]
}
def delete_schedule(self, schedule_id):
deleted_schedule_ids.append(schedule_id)
return True
# Mock the ScheduleService import
import sys
from unittest.mock import MagicMock
mock_module = MagicMock()
mock_module.ScheduleService = MockScheduleService
monkeypatch.setitem(sys.modules, 'web.services.schedule_service', mock_module)
# Mock current_app
mock_app = MagicMock()
mock_app.db_session = MagicMock()
import flask
monkeypatch.setattr(flask, 'current_app', mock_app)
# Delete the config - should cascade delete associated schedules
service.delete_config('test-scan.yaml')
# Config should be deleted
assert not service.config_exists('test-scan.yaml')
# Both schedules (enabled and disabled) should be deleted
assert deleted_schedule_ids == [1, 2]
def test_validate_config_content_valid(self, service):
"""Test validating valid config content"""
valid_config = {
'title': 'Test Scan',
'sites': [
{
'name': 'Web Servers',
'ips': [
{
'address': '10.10.20.4',
'expected': {
'ping': True,
'tcp_ports': [22, 80, 443],
'udp_ports': [53]
}
}
]
}
]
}
is_valid, error = service.validate_config_content(valid_config)
assert is_valid is True
assert error == ""
def test_validate_config_content_not_dict(self, service):
"""Test validating non-dict content"""
is_valid, error = service.validate_config_content(['not', 'a', 'dict'])
assert is_valid is False
assert 'must be a dictionary' in error
def test_validate_config_content_missing_title(self, service):
"""Test validating config without title"""
config = {
'sites': []
}
is_valid, error = service.validate_config_content(config)
assert is_valid is False
assert "Missing required field: 'title'" in error
def test_validate_config_content_missing_sites(self, service):
"""Test validating config without sites"""
config = {
'title': 'Test'
}
is_valid, error = service.validate_config_content(config)
assert is_valid is False
assert "Missing required field: 'sites'" in error
def test_validate_config_content_empty_title(self, service):
"""Test validating config with empty title"""
config = {
'title': '',
'sites': []
}
is_valid, error = service.validate_config_content(config)
assert is_valid is False
assert "non-empty string" in error
def test_validate_config_content_sites_not_list(self, service):
"""Test validating config with sites as non-list"""
config = {
'title': 'Test',
'sites': 'not a list'
}
is_valid, error = service.validate_config_content(config)
assert is_valid is False
assert "must be a list" in error
def test_validate_config_content_no_sites(self, service):
"""Test validating config with empty sites list"""
config = {
'title': 'Test',
'sites': []
}
is_valid, error = service.validate_config_content(config)
assert is_valid is False
assert "at least one site" in error
def test_validate_config_content_site_missing_name(self, service):
"""Test validating site without name"""
config = {
'title': 'Test',
'sites': [
{
'ips': []
}
]
}
is_valid, error = service.validate_config_content(config)
assert is_valid is False
assert "missing required field: 'name'" in error
def test_validate_config_content_site_missing_ips(self, service):
"""Test validating site without ips"""
config = {
'title': 'Test',
'sites': [
{
'name': 'Test Site'
}
]
}
is_valid, error = service.validate_config_content(config)
assert is_valid is False
assert "missing required field: 'ips'" in error
def test_validate_config_content_site_no_ips(self, service):
"""Test validating site with empty ips list"""
config = {
'title': 'Test',
'sites': [
{
'name': 'Test Site',
'ips': []
}
]
}
is_valid, error = service.validate_config_content(config)
assert is_valid is False
assert "at least one IP" in error
def test_validate_config_content_ip_missing_address(self, service):
"""Test validating IP without address"""
config = {
'title': 'Test',
'sites': [
{
'name': 'Test Site',
'ips': [
{
'expected': {}
}
]
}
]
}
is_valid, error = service.validate_config_content(config)
assert is_valid is False
assert "missing required field: 'address'" in error
def test_validate_config_content_ip_missing_expected(self, service):
"""Test validating IP without expected"""
config = {
'title': 'Test',
'sites': [
{
'name': 'Test Site',
'ips': [
{
'address': '10.0.0.1'
}
]
}
]
}
is_valid, error = service.validate_config_content(config)
assert is_valid is False
assert "missing required field: 'expected'" in error
def test_generate_filename_from_title_simple(self, service):
"""Test generating filename from simple title"""
filename = service.generate_filename_from_title('Production Scan')
assert filename == 'production-scan.yaml'
def test_generate_filename_from_title_special_chars(self, service):
"""Test generating filename with special characters"""
filename = service.generate_filename_from_title('Prod Scan (2025)!')
assert filename == 'prod-scan-2025.yaml'
assert '(' not in filename
assert ')' not in filename
assert '!' not in filename
def test_generate_filename_from_title_multiple_spaces(self, service):
"""Test generating filename with multiple spaces"""
filename = service.generate_filename_from_title('Test Multiple Spaces')
assert filename == 'test-multiple-spaces.yaml'
# Should not have consecutive hyphens
assert '--' not in filename
def test_generate_filename_from_title_leading_trailing_spaces(self, service):
"""Test generating filename with leading/trailing spaces"""
filename = service.generate_filename_from_title(' Test Scan ')
assert filename == 'test-scan.yaml'
assert not filename.startswith('-')
assert not filename.endswith('-.yaml')
def test_generate_filename_from_title_long(self, service):
"""Test generating filename from long title"""
long_title = 'A' * 300
filename = service.generate_filename_from_title(long_title)
# Should be limited to 200 chars (195 + .yaml)
assert len(filename) <= 200
def test_generate_filename_from_title_empty(self, service):
"""Test generating filename from empty title"""
filename = service.generate_filename_from_title('')
assert filename == 'config.yaml'
def test_generate_filename_from_title_only_special_chars(self, service):
"""Test generating filename from title with only special characters"""
filename = service.generate_filename_from_title('!@#$%^&*()')
assert filename == 'config.yaml'
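The filename tests above likewise pin down the slug rules. One implementation consistent with all of them (assumed, not the actual service code):

# Sketch only: lowercase, collapse non-alphanumerics to single hyphens,
# trim hyphens, cap at 195 chars before the .yaml suffix, fall back to 'config'.
import re

def generate_filename_from_title(title: str) -> str:
    slug = re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')
    slug = slug[:195] if slug else 'config'
    return f"{slug}.yaml"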
def test_get_config_path(self, service, temp_configs_dir):
"""Test getting config path"""
path = service.get_config_path('test.yaml')
assert path == os.path.join(temp_configs_dir, 'test.yaml')
def test_config_exists_true(self, service, temp_configs_dir, sample_yaml_config):
"""Test config_exists returns True for existing file"""
config_path = os.path.join(temp_configs_dir, 'test-scan.yaml')
with open(config_path, 'w') as f:
f.write(sample_yaml_config)
assert service.config_exists('test-scan.yaml') is True
def test_config_exists_false(self, service):
"""Test config_exists returns False for non-existent file"""
assert service.config_exists('nonexistent.yaml') is False
def test_get_schedules_using_config_none(self, service):
"""Test getting schedules when none use the config"""
schedules = service.get_schedules_using_config('test.yaml')
# Should return empty list (ScheduleService might not exist in test env)
assert isinstance(schedules, list)
def test_list_configs_sorted_by_date(self, service, temp_configs_dir, sample_yaml_config):
"""Test that configs are sorted by creation date (most recent first)"""
import time
# Create first config
config1_path = os.path.join(temp_configs_dir, 'config1.yaml')
with open(config1_path, 'w') as f:
f.write(sample_yaml_config)
time.sleep(0.1) # Ensure different timestamps
# Create second config
config2_path = os.path.join(temp_configs_dir, 'config2.yaml')
with open(config2_path, 'w') as f:
f.write(sample_yaml_config)
configs = service.list_configs()
assert len(configs) == 2
# Most recent should be first
assert configs[0]['filename'] == 'config2.yaml'
assert configs[1]['filename'] == 'config1.yaml'
def test_list_configs_handles_parse_errors(self, service, temp_configs_dir):
"""Test that list_configs handles files that can't be parsed"""
# Create invalid YAML file
config_path = os.path.join(temp_configs_dir, 'invalid.yaml')
with open(config_path, 'w') as f:
f.write("invalid: yaml: [")
# Should not raise error, just use filename as title
configs = service.list_configs()
assert len(configs) == 1
assert configs[0]['filename'] == 'invalid.yaml'

View File

@@ -37,14 +37,14 @@ class TestScanAPIEndpoints:
assert len(data['scans']) == 1
assert data['scans'][0]['id'] == sample_scan.id
-def test_list_scans_pagination(self, client, db):
+def test_list_scans_pagination(self, client, db, sample_db_config):
"""Test scan list pagination."""
# Create 25 scans
for i in range(25):
scan = Scan(
timestamp=datetime.utcnow(),
status='completed',
-config_file=f'/app/configs/test{i}.yaml',
+config_id=sample_db_config.id,
title=f'Test Scan {i}',
triggered_by='test'
)
@@ -81,7 +81,7 @@ class TestScanAPIEndpoints:
scan = Scan(
timestamp=datetime.utcnow(),
status=status,
-config_file='/app/configs/test.yaml',
+config_id=1,
title=f'{status.capitalize()} Scan',
triggered_by='test'
)
@@ -123,10 +123,10 @@ class TestScanAPIEndpoints:
assert 'error' in data
assert data['error'] == 'Not found'
-def test_trigger_scan_success(self, client, db, sample_config_file):
+def test_trigger_scan_success(self, client, db, sample_db_config):
"""Test triggering a new scan."""
response = client.post('/api/scans',
-json={'config_file': str(sample_config_file)},
+json={'config_id': sample_db_config.id},
content_type='application/json'
)
assert response.status_code == 201
@@ -142,8 +142,8 @@ class TestScanAPIEndpoints:
assert scan.status == 'running'
assert scan.triggered_by == 'api'
-def test_trigger_scan_missing_config_file(self, client, db):
-"""Test triggering scan without config_file."""
+def test_trigger_scan_missing_config_id(self, client, db):
+"""Test triggering scan without config_id."""
response = client.post('/api/scans',
json={},
content_type='application/json'
@@ -152,12 +152,12 @@ class TestScanAPIEndpoints:
data = json.loads(response.data)
assert 'error' in data
-assert 'config_file is required' in data['message']
+assert 'config_id is required' in data['message']
-def test_trigger_scan_invalid_config_file(self, client, db):
-"""Test triggering scan with non-existent config file."""
+def test_trigger_scan_invalid_config_id(self, client, db):
+"""Test triggering scan with non-existent config."""
response = client.post('/api/scans',
-json={'config_file': '/nonexistent/config.yaml'},
+json={'config_id': 99999},
content_type='application/json'
)
assert response.status_code == 400
@@ -222,7 +222,7 @@ class TestScanAPIEndpoints:
assert 'error' in data
assert 'message' in data
-def test_scan_workflow_integration(self, client, db, sample_config_file):
+def test_scan_workflow_integration(self, client, db, sample_db_config):
"""
Test complete scan workflow: trigger -> status -> retrieve -> delete.
@@ -231,7 +231,7 @@ class TestScanAPIEndpoints:
"""
# Step 1: Trigger scan
response = client.post('/api/scans',
-json={'config_file': str(sample_config_file)},
+json={'config_id': sample_db_config.id},
content_type='application/json'
)
assert response.status_code == 201

View File

@@ -17,10 +17,10 @@ class TestScanComparison:
"""Tests for scan comparison methods."""
@pytest.fixture
-def scan1_data(self, test_db, sample_config_file):
+def scan1_data(self, test_db, sample_db_config):
"""Create first scan with test data."""
service = ScanService(test_db)
-scan_id = service.trigger_scan(sample_config_file, triggered_by='manual')
+scan_id = service.trigger_scan(sample_db_config, triggered_by='manual')
# Get scan and add some test data
scan = test_db.query(Scan).filter(Scan.id == scan_id).first()
@@ -77,10 +77,10 @@ class TestScanComparison:
return scan_id
@pytest.fixture
-def scan2_data(self, test_db, sample_config_file):
+def scan2_data(self, test_db, sample_db_config):
"""Create second scan with modified test data."""
service = ScanService(test_db)
-scan_id = service.trigger_scan(sample_config_file, triggered_by='manual')
+scan_id = service.trigger_scan(sample_db_config, triggered_by='manual')
# Get scan and add some test data
scan = test_db.query(Scan).filter(Scan.id == scan_id).first()

View File

@@ -13,49 +13,42 @@ from web.services.scan_service import ScanService
class TestScanServiceTrigger:
"""Tests for triggering scans."""
-def test_trigger_scan_valid_config(self, test_db, sample_config_file):
-"""Test triggering a scan with valid config file."""
-service = ScanService(test_db)
+def test_trigger_scan_valid_config(self, db, sample_db_config):
+"""Test triggering a scan with valid config."""
+service = ScanService(db)
-scan_id = service.trigger_scan(sample_config_file, triggered_by='manual')
+scan_id = service.trigger_scan(config_id=sample_db_config.id, triggered_by='manual')
# Verify scan created
assert scan_id is not None
assert isinstance(scan_id, int)
# Verify scan in database
-scan = test_db.query(Scan).filter(Scan.id == scan_id).first()
+scan = db.query(Scan).filter(Scan.id == scan_id).first()
assert scan is not None
assert scan.status == 'running'
assert scan.title == 'Test Scan'
assert scan.triggered_by == 'manual'
-assert scan.config_file == sample_config_file
+assert scan.config_id == sample_db_config.id
-def test_trigger_scan_invalid_config(self, test_db, sample_invalid_config_file):
-"""Test triggering a scan with invalid config file."""
-service = ScanService(test_db)
+def test_trigger_scan_invalid_config(self, db):
+"""Test triggering a scan with invalid config ID."""
+service = ScanService(db)
-with pytest.raises(ValueError, match="Invalid config file"):
-service.trigger_scan(sample_invalid_config_file)
+with pytest.raises(ValueError, match="not found"):
+service.trigger_scan(config_id=99999)
-def test_trigger_scan_nonexistent_file(self, test_db):
-"""Test triggering a scan with nonexistent config file."""
-service = ScanService(test_db)
-with pytest.raises(ValueError, match="does not exist"):
-service.trigger_scan('/nonexistent/config.yaml')
-def test_trigger_scan_with_schedule(self, test_db, sample_config_file):
+def test_trigger_scan_with_schedule(self, db, sample_db_config):
"""Test triggering a scan via schedule."""
-service = ScanService(test_db)
+service = ScanService(db)
scan_id = service.trigger_scan(
-sample_config_file,
+config_id=sample_db_config.id,
triggered_by='scheduled',
schedule_id=42
)
-scan = test_db.query(Scan).filter(Scan.id == scan_id).first()
+scan = db.query(Scan).filter(Scan.id == scan_id).first()
assert scan.triggered_by == 'scheduled'
assert scan.schedule_id == 42
@@ -63,19 +56,19 @@ class TestScanServiceTrigger:
class TestScanServiceGet:
"""Tests for retrieving scans."""
-def test_get_scan_not_found(self, test_db):
+def test_get_scan_not_found(self, db):
"""Test getting a nonexistent scan."""
-service = ScanService(test_db)
+service = ScanService(db)
result = service.get_scan(999)
assert result is None
-def test_get_scan_found(self, test_db, sample_config_file):
+def test_get_scan_found(self, db, sample_db_config):
"""Test getting an existing scan."""
-service = ScanService(test_db)
+service = ScanService(db)
# Create a scan
-scan_id = service.trigger_scan(sample_config_file)
+scan_id = service.trigger_scan(config_id=sample_db_config.id)
# Retrieve it
result = service.get_scan(scan_id)
@@ -90,9 +83,9 @@ class TestScanServiceGet:
class TestScanServiceList:
"""Tests for listing scans."""
-def test_list_scans_empty(self, test_db):
+def test_list_scans_empty(self, db):
"""Test listing scans when database is empty."""
-service = ScanService(test_db)
+service = ScanService(db)
result = service.list_scans(page=1, per_page=20)
@@ -100,13 +93,13 @@ class TestScanServiceList:
assert len(result.items) == 0
assert result.pages == 0
-def test_list_scans_with_data(self, test_db, sample_config_file):
+def test_list_scans_with_data(self, db, sample_db_config):
"""Test listing scans with multiple scans."""
-service = ScanService(test_db)
+service = ScanService(db)
# Create 3 scans
for i in range(3):
-service.trigger_scan(sample_config_file, triggered_by='api')
+service.trigger_scan(config_id=sample_db_config.id, triggered_by='api')
# List all scans
result = service.list_scans(page=1, per_page=20)
@@ -115,13 +108,13 @@ class TestScanServiceList:
assert len(result.items) == 3
assert result.pages == 1
-def test_list_scans_pagination(self, test_db, sample_config_file):
+def test_list_scans_pagination(self, db, sample_db_config):
"""Test pagination."""
-service = ScanService(test_db)
+service = ScanService(db)
# Create 5 scans
for i in range(5):
-service.trigger_scan(sample_config_file)
+service.trigger_scan(config_id=sample_db_config.id)
# Get page 1 (2 items per page)
result = service.list_scans(page=1, per_page=2)
@@ -141,18 +134,18 @@ class TestScanServiceList:
assert len(result.items) == 1
assert result.has_next is False
-def test_list_scans_filter_by_status(self, test_db, sample_config_file):
+def test_list_scans_filter_by_status(self, db, sample_db_config):
"""Test filtering scans by status."""
-service = ScanService(test_db)
+service = ScanService(db)
# Create scans with different statuses
-scan_id_1 = service.trigger_scan(sample_config_file)
-scan_id_2 = service.trigger_scan(sample_config_file)
+scan_id_1 = service.trigger_scan(config_id=sample_db_config.id)
+scan_id_2 = service.trigger_scan(config_id=sample_db_config.id)
# Mark one as completed
-scan = test_db.query(Scan).filter(Scan.id == scan_id_1).first()
+scan = db.query(Scan).filter(Scan.id == scan_id_1).first()
scan.status = 'completed'
-test_db.commit()
+db.commit()
# Filter by running
result = service.list_scans(status_filter='running')
@@ -162,9 +155,9 @@ class TestScanServiceList:
result = service.list_scans(status_filter='completed')
assert result.total == 1
-def test_list_scans_invalid_status_filter(self, test_db):
+def test_list_scans_invalid_status_filter(self, db):
"""Test filtering with invalid status."""
-service = ScanService(test_db)
+service = ScanService(db)
with pytest.raises(ValueError, match="Invalid status"):
service.list_scans(status_filter='invalid_status')
@@ -173,46 +166,46 @@ class TestScanServiceList:
class TestScanServiceDelete:
"""Tests for deleting scans."""
-def test_delete_scan_not_found(self, test_db):
+def test_delete_scan_not_found(self, db):
"""Test deleting a nonexistent scan."""
-service = ScanService(test_db)
+service = ScanService(db)
with pytest.raises(ValueError, match="not found"):
service.delete_scan(999)
-def test_delete_scan_success(self, test_db, sample_config_file):
+def test_delete_scan_success(self, db, sample_db_config):
"""Test successful scan deletion."""
-service = ScanService(test_db)
+service = ScanService(db)
# Create a scan
-scan_id = service.trigger_scan(sample_config_file)
+scan_id = service.trigger_scan(config_id=sample_db_config.id)
# Verify it exists
-assert test_db.query(Scan).filter(Scan.id == scan_id).first() is not None
+assert db.query(Scan).filter(Scan.id == scan_id).first() is not None
# Delete it
result = service.delete_scan(scan_id)
assert result is True
# Verify it's gone
-assert test_db.query(Scan).filter(Scan.id == scan_id).first() is None
+assert db.query(Scan).filter(Scan.id == scan_id).first() is None
class TestScanServiceStatus:
"""Tests for scan status retrieval."""
-def test_get_scan_status_not_found(self, test_db):
+def test_get_scan_status_not_found(self, db):
"""Test getting status of nonexistent scan."""
-service = ScanService(test_db)
+service = ScanService(db)
result = service.get_scan_status(999)
assert result is None
-def test_get_scan_status_running(self, test_db, sample_config_file):
+def test_get_scan_status_running(self, db, sample_db_config):
"""Test getting status of running scan."""
-service = ScanService(test_db)
+service = ScanService(db)
-scan_id = service.trigger_scan(sample_config_file)
+scan_id = service.trigger_scan(config_id=sample_db_config.id)
status = service.get_scan_status(scan_id)
assert status is not None
@@ -221,16 +214,16 @@ class TestScanServiceStatus:
assert status['progress'] == 'In progress'
assert status['title'] == 'Test Scan'
def test_get_scan_status_completed(self, test_db, sample_config_file):
def test_get_scan_status_completed(self, db, sample_db_config):
"""Test getting status of completed scan."""
service = ScanService(test_db)
service = ScanService(db)
# Create and mark as completed
scan_id = service.trigger_scan(sample_config_file)
scan = test_db.query(Scan).filter(Scan.id == scan_id).first()
scan_id = service.trigger_scan(config_id=sample_db_config.id)
scan = db.query(Scan).filter(Scan.id == scan_id).first()
scan.status = 'completed'
scan.duration = 125.5
test_db.commit()
db.commit()
status = service.get_scan_status(scan_id)
@@ -242,35 +235,35 @@ class TestScanServiceStatus:
class TestScanServiceDatabaseMapping:
"""Tests for mapping scan reports to database models."""
def test_save_scan_to_db(self, test_db, sample_config_file, sample_scan_report):
def test_save_scan_to_db(self, db, sample_db_config, sample_scan_report):
"""Test saving a complete scan report to database."""
service = ScanService(test_db)
service = ScanService(db)
# Create a scan
scan_id = service.trigger_scan(sample_config_file)
scan_id = service.trigger_scan(config_id=sample_db_config.id)
# Save report to database
service._save_scan_to_db(sample_scan_report, scan_id, status='completed')
# Verify scan updated
scan = test_db.query(Scan).filter(Scan.id == scan_id).first()
scan = db.query(Scan).filter(Scan.id == scan_id).first()
assert scan.status == 'completed'
assert scan.duration == 125.5
# Verify sites created
sites = test_db.query(ScanSite).filter(ScanSite.scan_id == scan_id).all()
sites = db.query(ScanSite).filter(ScanSite.scan_id == scan_id).all()
assert len(sites) == 1
assert sites[0].site_name == 'Test Site'
# Verify IPs created
ips = test_db.query(ScanIP).filter(ScanIP.scan_id == scan_id).all()
ips = db.query(ScanIP).filter(ScanIP.scan_id == scan_id).all()
assert len(ips) == 1
assert ips[0].ip_address == '192.168.1.10'
assert ips[0].ping_expected is True
assert ips[0].ping_actual is True
# Verify ports created (TCP: 22, 80, 443, 8080 | UDP: 53)
ports = test_db.query(ScanPort).filter(ScanPort.scan_id == scan_id).all()
ports = db.query(ScanPort).filter(ScanPort.scan_id == scan_id).all()
assert len(ports) == 5 # 4 TCP + 1 UDP
# Verify TCP ports
@@ -285,7 +278,7 @@ class TestScanServiceDatabaseMapping:
assert udp_ports[0].port == 53
# Verify services created
services = test_db.query(ScanServiceModel).filter(
services = db.query(ScanServiceModel).filter(
ScanServiceModel.scan_id == scan_id
).all()
assert len(services) == 4 # SSH, HTTP (80), HTTPS, HTTP (8080)
@@ -300,15 +293,15 @@ class TestScanServiceDatabaseMapping:
assert https_service.http_protocol == 'https'
assert https_service.screenshot_path == 'screenshots/192_168_1_10_443.png'
def test_map_port_expected_vs_actual(self, test_db, sample_config_file, sample_scan_report):
def test_map_port_expected_vs_actual(self, db, sample_db_config, sample_scan_report):
"""Test that expected vs actual ports are correctly flagged."""
service = ScanService(test_db)
service = ScanService(db)
scan_id = service.trigger_scan(sample_config_file)
scan_id = service.trigger_scan(config_id=sample_db_config.id)
service._save_scan_to_db(sample_scan_report, scan_id)
# Check TCP ports
tcp_ports = test_db.query(ScanPort).filter(
tcp_ports = db.query(ScanPort).filter(
ScanPort.scan_id == scan_id,
ScanPort.protocol == 'tcp'
).all()
@@ -322,15 +315,15 @@ class TestScanServiceDatabaseMapping:
# Port 8080 was not expected
assert port.expected is False, f"Port {port.port} should not be expected"
def test_map_certificate_and_tls(self, test_db, sample_config_file, sample_scan_report):
def test_map_certificate_and_tls(self, db, sample_db_config, sample_scan_report):
"""Test that certificate and TLS data are correctly mapped."""
service = ScanService(test_db)
service = ScanService(db)
scan_id = service.trigger_scan(sample_config_file)
scan_id = service.trigger_scan(config_id=sample_db_config.id)
service._save_scan_to_db(sample_scan_report, scan_id)
# Find HTTPS service
https_service = test_db.query(ScanServiceModel).filter(
https_service = db.query(ScanServiceModel).filter(
ScanServiceModel.scan_id == scan_id,
ScanServiceModel.service_name == 'https'
).first()
@@ -363,11 +356,11 @@ class TestScanServiceDatabaseMapping:
assert tls_13 is not None
assert tls_13.supported is True
def test_get_scan_with_full_details(self, test_db, sample_config_file, sample_scan_report):
def test_get_scan_with_full_details(self, db, sample_db_config, sample_scan_report):
"""Test retrieving scan with all nested relationships."""
service = ScanService(test_db)
service = ScanService(db)
scan_id = service.trigger_scan(sample_config_file)
scan_id = service.trigger_scan(config_id=sample_db_config.id)
service._save_scan_to_db(sample_scan_report, scan_id)
# Get full scan details
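The migrated tests above assume two shared fixtures: a `db` SQLAlchemy session and `sample_db_config`, a persisted ScanConfig row that replaces the old `sample_config_file` YAML path. A minimal conftest.py sketch of that fixture, assuming ScanConfig accepts `title` and `description` columns (column names beyond what the diffs show are assumptions):

import pytest
from web.models import ScanConfig

@pytest.fixture
def sample_db_config(db):
    """Persist a minimal database-backed scan config for the tests."""
    config = ScanConfig(title='Test Scan Config', description='Fixture config')
    db.add(config)
    db.commit()
    db.refresh(config)  # make sure .id is populated before tests use it
    return config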


@@ -13,20 +13,20 @@ from web.models import Schedule, Scan
@pytest.fixture
def sample_schedule(db, sample_config_file):
def sample_schedule(db, sample_db_config):
"""
Create a sample schedule in the database for testing.
Args:
db: Database session fixture
sample_config_file: Path to test config file
sample_db_config: Database-backed scan config fixture (ScanConfig instance)
Returns:
Schedule model instance
"""
schedule = Schedule(
name='Daily Test Scan',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True,
last_run=None,
@@ -68,13 +68,13 @@ class TestScheduleAPIEndpoints:
assert data['schedules'][0]['name'] == sample_schedule.name
assert data['schedules'][0]['cron_expression'] == sample_schedule.cron_expression
def test_list_schedules_pagination(self, client, db, sample_config_file):
def test_list_schedules_pagination(self, client, db, sample_db_config):
"""Test schedule list pagination."""
# Create 25 schedules
for i in range(25):
schedule = Schedule(
name=f'Schedule {i}',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True,
created_at=datetime.utcnow()
@@ -101,13 +101,13 @@ class TestScheduleAPIEndpoints:
assert len(data['schedules']) == 10
assert data['page'] == 2
def test_list_schedules_filter_enabled(self, client, db, sample_config_file):
def test_list_schedules_filter_enabled(self, client, db, sample_db_config):
"""Test filtering schedules by enabled status."""
# Create enabled and disabled schedules
for i in range(3):
schedule = Schedule(
name=f'Enabled Schedule {i}',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True,
created_at=datetime.utcnow()
@@ -117,7 +117,7 @@ class TestScheduleAPIEndpoints:
for i in range(2):
schedule = Schedule(
name=f'Disabled Schedule {i}',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 3 * * *',
enabled=False,
created_at=datetime.utcnow()
@@ -151,7 +151,7 @@ class TestScheduleAPIEndpoints:
data = json.loads(response.data)
assert data['id'] == sample_schedule.id
assert data['name'] == sample_schedule.name
assert data['config_file'] == sample_schedule.config_file
assert data['config_id'] == sample_schedule.config_id
assert data['cron_expression'] == sample_schedule.cron_expression
assert data['enabled'] == sample_schedule.enabled
assert 'history' in data
@@ -165,11 +165,11 @@ class TestScheduleAPIEndpoints:
assert 'error' in data
assert 'not found' in data['error'].lower()
def test_create_schedule(self, client, db, sample_config_file):
def test_create_schedule(self, client, db, sample_db_config):
"""Test creating a new schedule."""
schedule_data = {
'name': 'New Test Schedule',
'config_file': sample_config_file,
'config_id': sample_db_config.id,
'cron_expression': '0 3 * * *',
'enabled': True
}
@@ -197,7 +197,7 @@ class TestScheduleAPIEndpoints:
# Missing cron_expression
schedule_data = {
'name': 'Incomplete Schedule',
'config_file': '/app/configs/test.yaml'
'config_id': 1
}
response = client.post(
@@ -211,11 +211,11 @@ class TestScheduleAPIEndpoints:
assert 'error' in data
assert 'missing' in data['error'].lower()
def test_create_schedule_invalid_cron(self, client, db, sample_config_file):
def test_create_schedule_invalid_cron(self, client, db, sample_db_config):
"""Test creating schedule with invalid cron expression."""
schedule_data = {
'name': 'Invalid Cron Schedule',
'config_file': sample_config_file,
'config_id': sample_db_config.id,
'cron_expression': 'invalid cron'
}
@@ -231,10 +231,10 @@ class TestScheduleAPIEndpoints:
assert 'invalid' in data['error'].lower() or 'cron' in data['error'].lower()
def test_create_schedule_invalid_config(self, client, db):
"""Test creating schedule with non-existent config file."""
"""Test creating schedule with non-existent config."""
schedule_data = {
'name': 'Invalid Config Schedule',
'config_file': '/nonexistent/config.yaml',
'config_id': 99999,
'cron_expression': '0 2 * * *'
}
@@ -360,13 +360,13 @@ class TestScheduleAPIEndpoints:
data = json.loads(response.data)
assert 'error' in data
def test_delete_schedule_preserves_scans(self, client, db, sample_schedule, sample_config_file):
def test_delete_schedule_preserves_scans(self, client, db, sample_schedule, sample_db_config):
"""Test that deleting schedule preserves associated scans."""
# Create a scan associated with the schedule
scan = Scan(
timestamp=datetime.utcnow(),
status='completed',
config_file=sample_config_file,
config_id=sample_db_config.id,
title='Test Scan',
triggered_by='scheduled',
schedule_id=sample_schedule.id
@@ -399,7 +399,7 @@ class TestScheduleAPIEndpoints:
assert scan is not None
assert scan.triggered_by == 'manual'
assert scan.schedule_id == sample_schedule.id
assert scan.config_file == sample_schedule.config_file
assert scan.config_id == sample_schedule.config_id
def test_trigger_schedule_not_found(self, client, db):
"""Test triggering non-existent schedule."""
@@ -409,14 +409,14 @@ class TestScheduleAPIEndpoints:
data = json.loads(response.data)
assert 'error' in data
def test_get_schedule_with_history(self, client, db, sample_schedule, sample_config_file):
def test_get_schedule_with_history(self, client, db, sample_schedule, sample_db_config):
"""Test getting schedule includes execution history."""
# Create some scans for this schedule
for i in range(5):
scan = Scan(
timestamp=datetime.utcnow(),
status='completed',
config_file=sample_config_file,
config_id=sample_db_config.id,
title=f'Scheduled Scan {i}',
triggered_by='scheduled',
schedule_id=sample_schedule.id
@@ -431,12 +431,12 @@ class TestScheduleAPIEndpoints:
assert 'history' in data
assert len(data['history']) == 5
def test_schedule_workflow_integration(self, client, db, sample_config_file):
def test_schedule_workflow_integration(self, client, db, sample_db_config):
"""Test complete schedule workflow: create → update → trigger → delete."""
# 1. Create schedule
schedule_data = {
'name': 'Integration Test Schedule',
'config_file': sample_config_file,
'config_id': sample_db_config.id,
'cron_expression': '0 2 * * *',
'enabled': True
}
@@ -482,14 +482,14 @@ class TestScheduleAPIEndpoints:
scan = db.query(Scan).filter(Scan.id == scan_id).first()
assert scan is not None
def test_list_schedules_ordering(self, client, db, sample_config_file):
def test_list_schedules_ordering(self, client, db, sample_db_config):
"""Test that schedules are ordered by next_run time."""
# Create schedules with different next_run times
schedules = []
for i in range(3):
schedule = Schedule(
name=f'Schedule {i}',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True,
next_run=datetime(2025, 11, 15 + i, 2, 0, 0),
@@ -501,7 +501,7 @@ class TestScheduleAPIEndpoints:
# Create a disabled schedule (next_run is None)
disabled_schedule = Schedule(
name='Disabled Schedule',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 3 * * *',
enabled=False,
next_run=None,
@@ -523,11 +523,11 @@ class TestScheduleAPIEndpoints:
assert returned_schedules[2]['id'] == schedules[2].id
assert returned_schedules[3]['id'] == disabled_schedule.id
def test_create_schedule_with_disabled(self, client, db, sample_config_file):
def test_create_schedule_with_disabled(self, client, db, sample_db_config):
"""Test creating a disabled schedule."""
schedule_data = {
'name': 'Disabled Schedule',
'config_file': sample_config_file,
'config_id': sample_db_config.id,
'cron_expression': '0 2 * * *',
'enabled': False
}
@@ -587,7 +587,7 @@ class TestScheduleAPIAuthentication:
class TestScheduleAPICronValidation:
"""Test suite for cron expression validation."""
def test_valid_cron_expressions(self, client, db, sample_config_file):
def test_valid_cron_expressions(self, client, db, sample_db_config):
"""Test various valid cron expressions."""
valid_expressions = [
'0 2 * * *', # Daily at 2am
@@ -600,7 +600,7 @@ class TestScheduleAPICronValidation:
for cron_expr in valid_expressions:
schedule_data = {
'name': f'Schedule for {cron_expr}',
'config_file': sample_config_file,
'config_id': sample_db_config.id,
'cron_expression': cron_expr
}
@@ -612,7 +612,7 @@ class TestScheduleAPICronValidation:
assert response.status_code == 201, \
f"Valid cron expression '{cron_expr}' should be accepted"
def test_invalid_cron_expressions(self, client, db, sample_config_file):
def test_invalid_cron_expressions(self, client, db, sample_db_config):
"""Test various invalid cron expressions."""
invalid_expressions = [
'invalid',
@@ -626,7 +626,7 @@ class TestScheduleAPICronValidation:
for cron_expr in invalid_expressions:
schedule_data = {
'name': f'Schedule for {cron_expr}',
'config_file': sample_config_file,
'config_id': sample_db_config.id,
'cron_expression': cron_expr
}
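The API-level tests above drive schedule creation through the Flask test client. A hedged sketch of that call shape, assuming the blueprint is mounted at '/api/schedules' (the diff elides the actual URL passed to client.post):

import json

def create_schedule_via_api(client, sample_db_config):
    payload = {
        'name': 'Nightly Scan',
        'config_id': sample_db_config.id,  # DB config ID replaces the YAML path
        'cron_expression': '0 2 * * *',
        'enabled': True,
    }
    response = client.post(
        '/api/schedules',  # assumed mount point
        data=json.dumps(payload),
        content_type='application/json',
    )
    assert response.status_code == 201
    return json.loads(response.data)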


@@ -15,13 +15,13 @@ from web.services.schedule_service import ScheduleService
class TestScheduleServiceCreate:
"""Tests for creating schedules."""
def test_create_schedule_valid(self, test_db, sample_config_file):
def test_create_schedule_valid(self, db, sample_db_config):
"""Test creating a schedule with valid parameters."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Daily Scan',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -31,57 +31,57 @@ class TestScheduleServiceCreate:
assert isinstance(schedule_id, int)
# Verify schedule in database
schedule = test_db.query(Schedule).filter(Schedule.id == schedule_id).first()
schedule = db.query(Schedule).filter(Schedule.id == schedule_id).first()
assert schedule is not None
assert schedule.name == 'Daily Scan'
assert schedule.config_file == sample_config_file
assert schedule.config_id == sample_db_config.id
assert schedule.cron_expression == '0 2 * * *'
assert schedule.enabled is True
assert schedule.next_run is not None
assert schedule.last_run is None
def test_create_schedule_disabled(self, test_db, sample_config_file):
def test_create_schedule_disabled(self, db, sample_db_config):
"""Test creating a disabled schedule."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Disabled Scan',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 3 * * *',
enabled=False
)
schedule = test_db.query(Schedule).filter(Schedule.id == schedule_id).first()
schedule = db.query(Schedule).filter(Schedule.id == schedule_id).first()
assert schedule.enabled is False
assert schedule.next_run is None
def test_create_schedule_invalid_cron(self, test_db, sample_config_file):
def test_create_schedule_invalid_cron(self, db, sample_db_config):
"""Test creating a schedule with invalid cron expression."""
service = ScheduleService(test_db)
service = ScheduleService(db)
with pytest.raises(ValueError, match="Invalid cron expression"):
service.create_schedule(
name='Invalid Schedule',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='invalid cron',
enabled=True
)
def test_create_schedule_nonexistent_config(self, test_db):
"""Test creating a schedule with nonexistent config file."""
service = ScheduleService(test_db)
def test_create_schedule_nonexistent_config(self, db):
"""Test creating a schedule with nonexistent config."""
service = ScheduleService(db)
with pytest.raises(ValueError, match="Config file not found"):
with pytest.raises(ValueError, match="not found"):
service.create_schedule(
name='Bad Config',
config_file='/nonexistent/config.yaml',
config_id=99999,
cron_expression='0 2 * * *',
enabled=True
)
def test_create_schedule_various_cron_expressions(self, test_db, sample_config_file):
def test_create_schedule_various_cron_expressions(self, db, sample_db_config):
"""Test creating schedules with various valid cron expressions."""
service = ScheduleService(test_db)
service = ScheduleService(db)
cron_expressions = [
'0 0 * * *', # Daily at midnight
@@ -94,7 +94,7 @@ class TestScheduleServiceCreate:
for i, cron in enumerate(cron_expressions):
schedule_id = service.create_schedule(
name=f'Schedule {i}',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression=cron,
enabled=True
)
@@ -104,21 +104,21 @@ class TestScheduleServiceCreate:
class TestScheduleServiceGet:
"""Tests for retrieving schedules."""
def test_get_schedule_not_found(self, test_db):
def test_get_schedule_not_found(self, db):
"""Test getting a nonexistent schedule."""
service = ScheduleService(test_db)
service = ScheduleService(db)
with pytest.raises(ValueError, match="Schedule .* not found"):
service.get_schedule(999)
def test_get_schedule_found(self, test_db, sample_config_file):
def test_get_schedule_found(self, db, sample_db_config):
"""Test getting an existing schedule."""
service = ScheduleService(test_db)
service = ScheduleService(db)
# Create a schedule
schedule_id = service.create_schedule(
name='Test Schedule',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -134,14 +134,14 @@ class TestScheduleServiceGet:
assert 'history' in result
assert isinstance(result['history'], list)
def test_get_schedule_with_history(self, test_db, sample_config_file):
def test_get_schedule_with_history(self, db, sample_db_config):
"""Test getting schedule includes execution history."""
service = ScheduleService(test_db)
service = ScheduleService(db)
# Create schedule
schedule_id = service.create_schedule(
name='Test Schedule',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -151,13 +151,13 @@ class TestScheduleServiceGet:
scan = Scan(
timestamp=datetime.utcnow() - timedelta(days=i),
status='completed',
config_file=sample_config_file,
config_id=sample_db_config.id,
title=f'Scan {i}',
triggered_by='scheduled',
schedule_id=schedule_id
)
test_db.add(scan)
test_db.commit()
db.add(scan)
db.commit()
# Get schedule
result = service.get_schedule(schedule_id)
@@ -169,9 +169,9 @@ class TestScheduleServiceGet:
class TestScheduleServiceList:
"""Tests for listing schedules."""
def test_list_schedules_empty(self, test_db):
def test_list_schedules_empty(self, db):
"""Test listing schedules when database is empty."""
service = ScheduleService(test_db)
service = ScheduleService(db)
result = service.list_schedules(page=1, per_page=20)
@@ -180,15 +180,15 @@ class TestScheduleServiceList:
assert result['page'] == 1
assert result['per_page'] == 20
def test_list_schedules_populated(self, test_db, sample_config_file):
def test_list_schedules_populated(self, db, sample_db_config):
"""Test listing schedules with data."""
service = ScheduleService(test_db)
service = ScheduleService(db)
# Create multiple schedules
for i in range(5):
service.create_schedule(
name=f'Schedule {i}',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -199,15 +199,15 @@ class TestScheduleServiceList:
assert len(result['schedules']) == 5
assert all('name' in s for s in result['schedules'])
def test_list_schedules_pagination(self, test_db, sample_config_file):
def test_list_schedules_pagination(self, db, sample_db_config):
"""Test schedule pagination."""
service = ScheduleService(test_db)
service = ScheduleService(db)
# Create 25 schedules
for i in range(25):
service.create_schedule(
name=f'Schedule {i:02d}',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -226,22 +226,22 @@ class TestScheduleServiceList:
result_page3 = service.list_schedules(page=3, per_page=10)
assert len(result_page3['schedules']) == 5
def test_list_schedules_filter_enabled(self, test_db, sample_config_file):
def test_list_schedules_filter_enabled(self, db, sample_db_config):
"""Test filtering schedules by enabled status."""
service = ScheduleService(test_db)
service = ScheduleService(db)
# Create enabled and disabled schedules
for i in range(3):
service.create_schedule(
name=f'Enabled {i}',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
for i in range(2):
service.create_schedule(
name=f'Disabled {i}',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=False
)
@@ -262,13 +262,13 @@ class TestScheduleServiceList:
class TestScheduleServiceUpdate:
"""Tests for updating schedules."""
def test_update_schedule_name(self, test_db, sample_config_file):
def test_update_schedule_name(self, db, sample_db_config):
"""Test updating schedule name."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Old Name',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -278,13 +278,13 @@ class TestScheduleServiceUpdate:
assert result['name'] == 'New Name'
assert result['cron_expression'] == '0 2 * * *'
def test_update_schedule_cron(self, test_db, sample_config_file):
def test_update_schedule_cron(self, db, sample_db_config):
"""Test updating cron expression recalculates next_run."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Test',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -302,13 +302,13 @@ class TestScheduleServiceUpdate:
assert result['cron_expression'] == '0 3 * * *'
assert result['next_run'] != original_next_run
def test_update_schedule_invalid_cron(self, test_db, sample_config_file):
def test_update_schedule_invalid_cron(self, db, sample_db_config):
"""Test updating with invalid cron expression fails."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Test',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -316,67 +316,67 @@ class TestScheduleServiceUpdate:
with pytest.raises(ValueError, match="Invalid cron expression"):
service.update_schedule(schedule_id, cron_expression='invalid')
def test_update_schedule_not_found(self, test_db):
def test_update_schedule_not_found(self, db):
"""Test updating nonexistent schedule fails."""
service = ScheduleService(test_db)
service = ScheduleService(db)
with pytest.raises(ValueError, match="Schedule .* not found"):
service.update_schedule(999, name='New Name')
def test_update_schedule_invalid_config_file(self, test_db, sample_config_file):
"""Test updating with nonexistent config file fails."""
service = ScheduleService(test_db)
def test_update_schedule_invalid_config_id(self, db, sample_db_config):
"""Test updating with nonexistent config ID fails."""
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Test',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
with pytest.raises(ValueError, match="Config file not found"):
service.update_schedule(schedule_id, config_file='/nonexistent.yaml')
with pytest.raises(ValueError, match="not found"):
service.update_schedule(schedule_id, config_id=99999)
class TestScheduleServiceDelete:
"""Tests for deleting schedules."""
def test_delete_schedule(self, test_db, sample_config_file):
def test_delete_schedule(self, db, sample_db_config):
"""Test deleting a schedule."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='To Delete',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
# Verify exists
assert test_db.query(Schedule).filter(Schedule.id == schedule_id).first() is not None
assert db.query(Schedule).filter(Schedule.id == schedule_id).first() is not None
# Delete
result = service.delete_schedule(schedule_id)
assert result is True
# Verify deleted
assert test_db.query(Schedule).filter(Schedule.id == schedule_id).first() is None
assert db.query(Schedule).filter(Schedule.id == schedule_id).first() is None
def test_delete_schedule_not_found(self, test_db):
def test_delete_schedule_not_found(self, db):
"""Test deleting nonexistent schedule fails."""
service = ScheduleService(test_db)
service = ScheduleService(db)
with pytest.raises(ValueError, match="Schedule .* not found"):
service.delete_schedule(999)
def test_delete_schedule_preserves_scans(self, test_db, sample_config_file):
def test_delete_schedule_preserves_scans(self, db, sample_db_config):
"""Test that deleting schedule preserves associated scans."""
service = ScheduleService(test_db)
service = ScheduleService(db)
# Create schedule
schedule_id = service.create_schedule(
name='Test',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -385,20 +385,20 @@ class TestScheduleServiceDelete:
scan = Scan(
timestamp=datetime.utcnow(),
status='completed',
config_file=sample_config_file,
config_id=sample_db_config.id,
title='Test Scan',
triggered_by='scheduled',
schedule_id=schedule_id
)
test_db.add(scan)
test_db.commit()
db.add(scan)
db.commit()
scan_id = scan.id
# Delete schedule
service.delete_schedule(schedule_id)
# Verify scan still exists (schedule_id becomes null)
remaining_scan = test_db.query(Scan).filter(Scan.id == scan_id).first()
remaining_scan = db.query(Scan).filter(Scan.id == scan_id).first()
assert remaining_scan is not None
assert remaining_scan.schedule_id is None
@@ -406,13 +406,13 @@ class TestScheduleServiceDelete:
class TestScheduleServiceToggle:
"""Tests for toggling schedule enabled status."""
def test_toggle_enabled_to_disabled(self, test_db, sample_config_file):
def test_toggle_enabled_to_disabled(self, db, sample_db_config):
"""Test disabling an enabled schedule."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Test',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -422,13 +422,13 @@ class TestScheduleServiceToggle:
assert result['enabled'] is False
assert result['next_run'] is None
def test_toggle_disabled_to_enabled(self, test_db, sample_config_file):
def test_toggle_disabled_to_enabled(self, db, sample_db_config):
"""Test enabling a disabled schedule."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Test',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=False
)
@@ -442,13 +442,13 @@ class TestScheduleServiceToggle:
class TestScheduleServiceRunTimes:
"""Tests for updating run times."""
def test_update_run_times(self, test_db, sample_config_file):
def test_update_run_times(self, db, sample_db_config):
"""Test updating last_run and next_run."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Test',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -463,9 +463,9 @@ class TestScheduleServiceRunTimes:
assert schedule['last_run'] is not None
assert schedule['next_run'] is not None
def test_update_run_times_not_found(self, test_db):
def test_update_run_times_not_found(self, db):
"""Test updating run times for nonexistent schedule."""
service = ScheduleService(test_db)
service = ScheduleService(db)
with pytest.raises(ValueError, match="Schedule .* not found"):
service.update_run_times(
@@ -478,9 +478,9 @@ class TestScheduleServiceRunTimes:
class TestCronValidation:
"""Tests for cron expression validation."""
def test_validate_cron_valid_expressions(self, test_db):
def test_validate_cron_valid_expressions(self, db):
"""Test validating various valid cron expressions."""
service = ScheduleService(test_db)
service = ScheduleService(db)
valid_expressions = [
'0 0 * * *', # Daily at midnight
@@ -496,9 +496,9 @@ class TestCronValidation:
assert is_valid is True, f"Expression '{expr}' should be valid"
assert error is None
def test_validate_cron_invalid_expressions(self, test_db):
def test_validate_cron_invalid_expressions(self, db):
"""Test validating invalid cron expressions."""
service = ScheduleService(test_db)
service = ScheduleService(db)
invalid_expressions = [
'invalid',
@@ -518,9 +518,9 @@ class TestCronValidation:
class TestNextRunCalculation:
"""Tests for next run time calculation."""
def test_calculate_next_run(self, test_db):
def test_calculate_next_run(self, db):
"""Test calculating next run time."""
service = ScheduleService(test_db)
service = ScheduleService(db)
# Daily at 2 AM
next_run = service.calculate_next_run('0 2 * * *')
@@ -529,9 +529,9 @@ class TestNextRunCalculation:
assert isinstance(next_run, datetime)
assert next_run > datetime.utcnow()
def test_calculate_next_run_from_time(self, test_db):
def test_calculate_next_run_from_time(self, db):
"""Test calculating next run from specific time."""
service = ScheduleService(test_db)
service = ScheduleService(db)
base_time = datetime(2025, 1, 1, 0, 0, 0)
next_run = service.calculate_next_run('0 2 * * *', from_time=base_time)
@@ -540,9 +540,9 @@ class TestNextRunCalculation:
assert next_run.hour == 2
assert next_run.minute == 0
def test_calculate_next_run_invalid_cron(self, test_db):
def test_calculate_next_run_invalid_cron(self, db):
"""Test calculating next run with invalid cron raises error."""
service = ScheduleService(test_db)
service = ScheduleService(db)
with pytest.raises(ValueError, match="Invalid cron expression"):
service.calculate_next_run('invalid cron')
@@ -551,13 +551,13 @@ class TestNextRunCalculation:
class TestScheduleHistory:
"""Tests for schedule execution history."""
def test_get_schedule_history_empty(self, test_db, sample_config_file):
def test_get_schedule_history_empty(self, db, sample_db_config):
"""Test getting history for schedule with no executions."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Test',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -565,13 +565,13 @@ class TestScheduleHistory:
history = service.get_schedule_history(schedule_id)
assert len(history) == 0
def test_get_schedule_history_with_scans(self, test_db, sample_config_file):
def test_get_schedule_history_with_scans(self, db, sample_db_config):
"""Test getting history with multiple scans."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Test',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -581,26 +581,26 @@ class TestScheduleHistory:
scan = Scan(
timestamp=datetime.utcnow() - timedelta(days=i),
status='completed',
config_file=sample_config_file,
config_id=sample_db_config.id,
title=f'Scan {i}',
triggered_by='scheduled',
schedule_id=schedule_id
)
test_db.add(scan)
test_db.commit()
db.add(scan)
db.commit()
# Get history (default limit 10)
history = service.get_schedule_history(schedule_id, limit=10)
assert len(history) == 10
assert history[0]['title'] == 'Scan 0' # Most recent first
def test_get_schedule_history_custom_limit(self, test_db, sample_config_file):
def test_get_schedule_history_custom_limit(self, db, sample_db_config):
"""Test getting history with custom limit."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Test',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -610,13 +610,13 @@ class TestScheduleHistory:
scan = Scan(
timestamp=datetime.utcnow() - timedelta(days=i),
status='completed',
config_file=sample_config_file,
config_id=sample_db_config.id,
title=f'Scan {i}',
triggered_by='scheduled',
schedule_id=schedule_id
)
test_db.add(scan)
test_db.commit()
db.add(scan)
db.commit()
# Get only 5
history = service.get_schedule_history(schedule_id, limit=5)
@@ -626,13 +626,13 @@ class TestScheduleHistory:
class TestScheduleSerialization:
"""Tests for schedule serialization."""
def test_schedule_to_dict(self, test_db, sample_config_file):
def test_schedule_to_dict(self, db, sample_db_config):
"""Test converting schedule to dictionary."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Test Schedule',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
@@ -642,7 +642,7 @@ class TestScheduleSerialization:
# Verify all required fields
assert 'id' in result
assert 'name' in result
assert 'config_file' in result
assert 'config_id' in result
assert 'cron_expression' in result
assert 'enabled' in result
assert 'last_run' in result
@@ -652,13 +652,13 @@ class TestScheduleSerialization:
assert 'updated_at' in result
assert 'history' in result
def test_schedule_relative_time_formatting(self, test_db, sample_config_file):
def test_schedule_relative_time_formatting(self, db, sample_db_config):
"""Test relative time formatting in schedule dict."""
service = ScheduleService(test_db)
service = ScheduleService(db)
schedule_id = service.create_schedule(
name='Test',
config_file=sample_config_file,
config_id=sample_db_config.id,
cron_expression='0 2 * * *',
enabled=True
)
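At the service layer the same migration means every helper takes config_id instead of config_file. A minimal sketch of the lifecycle the tests above assert, using only methods the diffs show (create_schedule, get_schedule, delete_schedule) and the same fixture assumptions:

from web.services.schedule_service import ScheduleService

def example_schedule_lifecycle(db, sample_db_config):
    service = ScheduleService(db)
    schedule_id = service.create_schedule(
        name='Daily Scan',
        config_id=sample_db_config.id,  # replaces config_file=<path>
        cron_expression='0 2 * * *',
        enabled=True,
    )
    details = service.get_schedule(schedule_id)  # dict including a 'history' list
    service.delete_schedule(schedule_id)  # associated scans are preserved
    return details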


@@ -20,7 +20,7 @@ class TestStatsAPI:
scan_date = today - timedelta(days=i)
for j in range(i + 1): # Create 1, 2, 3, 4, 5 scans per day
scan = Scan(
config_file='/app/configs/test.yaml',
config_id=1,
timestamp=scan_date,
status='completed',
duration=10.5
@@ -56,7 +56,7 @@ class TestStatsAPI:
today = datetime.utcnow()
for i in range(10):
scan = Scan(
config_file='/app/configs/test.yaml',
config_id=1,
timestamp=today - timedelta(days=i),
status='completed',
duration=10.5
@@ -105,7 +105,7 @@ class TestStatsAPI:
# Create scan 5 days ago
scan1 = Scan(
config_file='/app/configs/test.yaml',
config_id=1,
timestamp=today - timedelta(days=5),
status='completed',
duration=10.5
@@ -114,7 +114,7 @@ class TestStatsAPI:
# Create scan 10 days ago
scan2 = Scan(
config_file='/app/configs/test.yaml',
config_id=1,
timestamp=today - timedelta(days=10),
status='completed',
duration=10.5
@@ -148,7 +148,7 @@ class TestStatsAPI:
# 5 completed scans
for i in range(5):
scan = Scan(
config_file='/app/configs/test.yaml',
config_id=1,
timestamp=today - timedelta(days=i),
status='completed',
duration=10.5
@@ -158,7 +158,7 @@ class TestStatsAPI:
# 2 failed scans
for i in range(2):
scan = Scan(
config_file='/app/configs/test.yaml',
config_id=1,
timestamp=today - timedelta(days=i),
status='failed',
duration=5.0
@@ -167,7 +167,7 @@ class TestStatsAPI:
# 1 running scan
scan = Scan(
config_file='/app/configs/test.yaml',
config_id=1,
timestamp=today,
status='running',
duration=None
@@ -217,7 +217,7 @@ class TestStatsAPI:
# Create 3 scans today
for i in range(3):
scan = Scan(
config_file='/app/configs/test.yaml',
config_id=1,
timestamp=today,
status='completed',
duration=10.5
@@ -227,7 +227,7 @@ class TestStatsAPI:
# Create 2 scans yesterday
for i in range(2):
scan = Scan(
config_file='/app/configs/test.yaml',
config_id=1,
timestamp=yesterday,
status='completed',
duration=10.5
@@ -250,7 +250,7 @@ class TestStatsAPI:
# Create scans over the last 10 days
for i in range(10):
scan = Scan(
config_file='/app/configs/test.yaml',
config_id=1,
timestamp=today - timedelta(days=i),
status='completed',
duration=10.5
@@ -275,7 +275,7 @@ class TestStatsAPI:
"""Test scan trend returns dates in correct format."""
# Create a scan
scan = Scan(
config_file='/app/configs/test.yaml',
config_id=1,
timestamp=datetime.utcnow(),
status='completed',
duration=10.5

app/web/api/alerts.py (new file, 535 lines)

@@ -0,0 +1,535 @@
"""
Alerts API blueprint.
Handles endpoints for viewing alert history and managing alert rules.
"""
import json
from datetime import datetime, timedelta, timezone
from flask import Blueprint, jsonify, request, current_app
from web.auth.decorators import api_auth_required
from web.models import Alert, AlertRule, Scan
from web.services.alert_service import AlertService
bp = Blueprint('alerts', __name__)
@bp.route('', methods=['GET'])
@api_auth_required
def list_alerts():
"""
List recent alerts.
Query params:
page: Page number (default: 1)
per_page: Items per page (default: 20)
alert_type: Filter by alert type
severity: Filter by severity (info, warning, critical)
acknowledged: Filter by acknowledgment status (true/false)
scan_id: Filter by specific scan
start_date: Filter alerts after this date (ISO format)
end_date: Filter alerts before this date (ISO format)
Returns:
JSON response with alerts list
"""
# Get query parameters
page = request.args.get('page', 1, type=int)
per_page = min(request.args.get('per_page', 20, type=int), 100) # Max 100 items
alert_type = request.args.get('alert_type')
severity = request.args.get('severity')
acknowledged = request.args.get('acknowledged')
scan_id = request.args.get('scan_id', type=int)
start_date = request.args.get('start_date')
end_date = request.args.get('end_date')
# Build query
query = current_app.db_session.query(Alert)
# Apply filters
if alert_type:
query = query.filter(Alert.alert_type == alert_type)
if severity:
query = query.filter(Alert.severity == severity)
if acknowledged is not None:
ack_bool = acknowledged.lower() == 'true'
query = query.filter(Alert.acknowledged == ack_bool)
if scan_id:
query = query.filter(Alert.scan_id == scan_id)
if start_date:
try:
start_dt = datetime.fromisoformat(start_date.replace('Z', '+00:00'))
query = query.filter(Alert.created_at >= start_dt)
except ValueError:
pass # Ignore invalid date format
if end_date:
try:
end_dt = datetime.fromisoformat(end_date.replace('Z', '+00:00'))
query = query.filter(Alert.created_at <= end_dt)
except ValueError:
pass # Ignore invalid date format
# Order by severity and date
# Note: severity is a plain string column, so .desc() sorts lexicographically
# ('warning' > 'info' > 'critical'); a CASE mapping would be needed to put
# critical alerts first.
query = query.order_by(
Alert.severity.desc(),
Alert.created_at.desc()  # Most recent first
)
# Paginate
total = query.count()
alerts = query.offset((page - 1) * per_page).limit(per_page).all()
# Format response
alerts_data = []
for alert in alerts:
# Get scan info
scan = current_app.db_session.query(Scan).filter(Scan.id == alert.scan_id).first()
alerts_data.append({
'id': alert.id,
'scan_id': alert.scan_id,
'scan_title': scan.title if scan else None,
'rule_id': alert.rule_id,
'alert_type': alert.alert_type,
'severity': alert.severity,
'message': alert.message,
'ip_address': alert.ip_address,
'port': alert.port,
'acknowledged': alert.acknowledged,
'acknowledged_at': alert.acknowledged_at.isoformat() if alert.acknowledged_at else None,
'acknowledged_by': alert.acknowledged_by,
'email_sent': alert.email_sent,
'email_sent_at': alert.email_sent_at.isoformat() if alert.email_sent_at else None,
'webhook_sent': alert.webhook_sent,
'webhook_sent_at': alert.webhook_sent_at.isoformat() if alert.webhook_sent_at else None,
'created_at': alert.created_at.isoformat()
})
return jsonify({
'alerts': alerts_data,
'total': total,
'page': page,
'per_page': per_page,
'pages': (total + per_page - 1) // per_page # Ceiling division
})
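# --- Editorial usage sketch (not part of this module) ----------------------
# A hedged example of querying this endpoint with the `requests` library;
# the '/api/alerts' mount prefix and X-API-Key auth header are assumptions,
# since this file only defines the blueprint routes.
def _example_list_critical_alerts(base_url='http://localhost:8080', token='<token>'):
    import requests
    resp = requests.get(
        f'{base_url}/api/alerts',
        params={'severity': 'critical', 'acknowledged': 'false', 'per_page': 50},
        headers={'X-API-Key': token},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()['alerts']
# ----------------------------------------------------------------------------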
@bp.route('/<int:alert_id>/acknowledge', methods=['POST'])
@api_auth_required
def acknowledge_alert(alert_id):
"""
Acknowledge an alert.
Args:
alert_id: Alert ID to acknowledge
Returns:
JSON response with acknowledgment status
"""
# Get username from auth context or default to 'api'
# get_json(silent=True) returns None instead of raising when the body is not JSON
acknowledged_by = (request.get_json(silent=True) or {}).get('acknowledged_by', 'api')
alert_service = AlertService(current_app.db_session)
success = alert_service.acknowledge_alert(alert_id, acknowledged_by)
if success:
return jsonify({
'status': 'success',
'message': f'Alert {alert_id} acknowledged',
'acknowledged_by': acknowledged_by
})
else:
return jsonify({
'status': 'error',
'message': f'Failed to acknowledge alert {alert_id}'
}), 400
@bp.route('/acknowledge-all', methods=['POST'])
@api_auth_required
def acknowledge_all_alerts():
"""
Acknowledge all unacknowledged alerts.
Returns:
JSON response with count of acknowledged alerts
"""
acknowledged_by = (request.get_json(silent=True) or {}).get('acknowledged_by', 'api')
try:
# Get all unacknowledged alerts
unacked_alerts = current_app.db_session.query(Alert).filter(
Alert.acknowledged == False
).all()
count = 0
for alert in unacked_alerts:
alert.acknowledged = True
alert.acknowledged_at = datetime.now(timezone.utc)
alert.acknowledged_by = acknowledged_by
count += 1
current_app.db_session.commit()
return jsonify({
'status': 'success',
'message': f'Acknowledged {count} alerts',
'count': count,
'acknowledged_by': acknowledged_by
})
except Exception as e:
current_app.db_session.rollback()
return jsonify({
'status': 'error',
'message': f'Failed to acknowledge alerts: {str(e)}'
}), 500
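# --- Editorial usage sketch (not part of this module) ----------------------
# Acknowledging one alert, then everything at once; URL prefix and auth
# header are assumptions as above, and alert ID 42 is a sample value.
def _example_acknowledge(base_url='http://localhost:8080', token='<token>'):
    import requests
    headers = {'X-API-Key': token}
    requests.post(f'{base_url}/api/alerts/42/acknowledge',
                  json={'acknowledged_by': 'ops-oncall'},
                  headers=headers, timeout=10)
    resp = requests.post(f'{base_url}/api/alerts/acknowledge-all',
                         json={'acknowledged_by': 'ops-oncall'},
                         headers=headers, timeout=10)
    return resp.json()['count']
# ----------------------------------------------------------------------------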
@bp.route('/rules', methods=['GET'])
@api_auth_required
def list_alert_rules():
"""
List all alert rules.
Returns:
JSON response with alert rules
"""
rules = current_app.db_session.query(AlertRule).order_by(AlertRule.name, AlertRule.rule_type).all()
rules_data = []
for rule in rules:
rules_data.append({
'id': rule.id,
'name': rule.name,
'rule_type': rule.rule_type,
'enabled': rule.enabled,
'threshold': rule.threshold,
'email_enabled': rule.email_enabled,
'webhook_enabled': rule.webhook_enabled,
'severity': rule.severity,
'filter_conditions': json.loads(rule.filter_conditions) if rule.filter_conditions else None,
'config_id': rule.config_id,
'config_title': rule.config.title if rule.config else None,
'created_at': rule.created_at.isoformat(),
'updated_at': rule.updated_at.isoformat() if rule.updated_at else None
})
return jsonify({
'rules': rules_data,
'total': len(rules_data)
})
@bp.route('/rules', methods=['POST'])
@api_auth_required
def create_alert_rule():
"""
Create a new alert rule.
Request body:
name: User-friendly rule name
rule_type: Type of alert rule (unexpected_port, drift_detection, cert_expiry, weak_tls, ping_failed)
threshold: Threshold value (e.g., days for cert expiry, percentage for drift)
enabled: Whether rule is active (default: true)
email_enabled: Send email for this rule (default: false)
webhook_enabled: Send webhook for this rule (default: false)
severity: Alert severity (critical, warning, info)
filter_conditions: JSON object with filter conditions
config_id: Optional config ID to apply rule to
Returns:
JSON response with created rule
"""
data = request.get_json() or {}
# Validate required fields
if not data.get('rule_type'):
return jsonify({
'status': 'error',
'message': 'rule_type is required'
}), 400
# Valid rule types
valid_rule_types = ['unexpected_port', 'drift_detection', 'cert_expiry', 'weak_tls', 'ping_failed']
if data['rule_type'] not in valid_rule_types:
return jsonify({
'status': 'error',
'message': f'Invalid rule_type. Must be one of: {", ".join(valid_rule_types)}'
}), 400
# Valid severities
valid_severities = ['critical', 'warning', 'info']
if data.get('severity') and data['severity'] not in valid_severities:
return jsonify({
'status': 'error',
'message': f'Invalid severity. Must be one of: {", ".join(valid_severities)}'
}), 400
try:
# Validate config_id if provided
config_id = data.get('config_id')
if config_id:
from web.models import ScanConfig
config = current_app.db_session.query(ScanConfig).filter_by(id=config_id).first()
if not config:
return jsonify({
'status': 'error',
'message': f'Config with ID {config_id} not found'
}), 400
# Create new rule
rule = AlertRule(
name=data.get('name', f"{data['rule_type']} rule"),
rule_type=data['rule_type'],
enabled=data.get('enabled', True),
threshold=data.get('threshold'),
email_enabled=data.get('email_enabled', False),
webhook_enabled=data.get('webhook_enabled', False),
severity=data.get('severity', 'warning'),
filter_conditions=json.dumps(data['filter_conditions']) if data.get('filter_conditions') else None,
config_id=config_id,
created_at=datetime.now(timezone.utc),
updated_at=datetime.now(timezone.utc)
)
current_app.db_session.add(rule)
current_app.db_session.commit()
return jsonify({
'status': 'success',
'message': 'Alert rule created successfully',
'rule': {
'id': rule.id,
'name': rule.name,
'rule_type': rule.rule_type,
'enabled': rule.enabled,
'threshold': rule.threshold,
'email_enabled': rule.email_enabled,
'webhook_enabled': rule.webhook_enabled,
'severity': rule.severity,
'filter_conditions': json.loads(rule.filter_conditions) if rule.filter_conditions else None,
'config_id': rule.config_id,
'config_title': rule.config.title if rule.config else None,
'created_at': rule.created_at.isoformat(),
'updated_at': rule.updated_at.isoformat()
}
}), 201
except Exception as e:
current_app.db_session.rollback()
return jsonify({
'status': 'error',
'message': f'Failed to create alert rule: {str(e)}'
}), 500
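# --- Editorial usage sketch (not part of this module) ----------------------
# Creating a cert-expiry rule shaped per the request-body docstring above;
# URL prefix and auth header remain assumptions.
def _example_create_cert_rule(base_url='http://localhost:8080', token='<token>'):
    import requests
    rule = {
        'name': 'Certs expiring within 30 days',
        'rule_type': 'cert_expiry',   # must be one of valid_rule_types
        'threshold': 30,              # days, per the docstring
        'severity': 'warning',
        'email_enabled': True,
    }
    resp = requests.post(f'{base_url}/api/alerts/rules', json=rule,
                         headers={'X-API-Key': token}, timeout=10)
    resp.raise_for_status()           # expects 201 on success
    return resp.json()['rule']['id']
# ----------------------------------------------------------------------------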
@bp.route('/rules/<int:rule_id>', methods=['PUT'])
@api_auth_required
def update_alert_rule(rule_id):
"""
Update an existing alert rule.
Args:
rule_id: Alert rule ID to update
Request body:
name: User-friendly rule name (optional)
threshold: Threshold value (optional)
enabled: Whether rule is active (optional)
email_enabled: Send email for this rule (optional)
webhook_enabled: Send webhook for this rule (optional)
severity: Alert severity (optional)
filter_conditions: JSON object with filter conditions (optional)
config_id: Config ID to apply rule to (optional)
Returns:
JSON response with update status
"""
data = request.get_json() or {}
# Get existing rule
rule = current_app.db_session.query(AlertRule).filter(AlertRule.id == rule_id).first()
if not rule:
return jsonify({
'status': 'error',
'message': f'Alert rule {rule_id} not found'
}), 404
# Valid severities
valid_severities = ['critical', 'warning', 'info']
if data.get('severity') and data['severity'] not in valid_severities:
return jsonify({
'status': 'error',
'message': f'Invalid severity. Must be one of: {", ".join(valid_severities)}'
}), 400
try:
# Validate config_id if provided
if 'config_id' in data:
config_id = data['config_id']
if config_id:
from web.models import ScanConfig
config = current_app.db_session.query(ScanConfig).filter_by(id=config_id).first()
if not config:
return jsonify({
'status': 'error',
'message': f'Config with ID {config_id} not found'
}), 400
# Update fields if provided
if 'name' in data:
rule.name = data['name']
if 'threshold' in data:
rule.threshold = data['threshold']
if 'enabled' in data:
rule.enabled = data['enabled']
if 'email_enabled' in data:
rule.email_enabled = data['email_enabled']
if 'webhook_enabled' in data:
rule.webhook_enabled = data['webhook_enabled']
if 'severity' in data:
rule.severity = data['severity']
if 'filter_conditions' in data:
rule.filter_conditions = json.dumps(data['filter_conditions']) if data['filter_conditions'] else None
if 'config_id' in data:
rule.config_id = data['config_id']
rule.updated_at = datetime.now(timezone.utc)
current_app.db_session.commit()
return jsonify({
'status': 'success',
'message': 'Alert rule updated successfully',
'rule': {
'id': rule.id,
'name': rule.name,
'rule_type': rule.rule_type,
'enabled': rule.enabled,
'threshold': rule.threshold,
'email_enabled': rule.email_enabled,
'webhook_enabled': rule.webhook_enabled,
'severity': rule.severity,
'filter_conditions': json.loads(rule.filter_conditions) if rule.filter_conditions else None,
'config_id': rule.config_id,
'config_title': rule.config.title if rule.config else None,
'created_at': rule.created_at.isoformat(),
'updated_at': rule.updated_at.isoformat()
}
})
except Exception as e:
current_app.db_session.rollback()
return jsonify({
'status': 'error',
'message': f'Failed to update alert rule: {str(e)}'
}), 500
@bp.route('/rules/<int:rule_id>', methods=['DELETE'])
@api_auth_required
def delete_alert_rule(rule_id):
"""
Delete an alert rule.
Args:
rule_id: Alert rule ID to delete
Returns:
JSON response with deletion status
"""
# Get existing rule
rule = current_app.db_session.query(AlertRule).filter(AlertRule.id == rule_id).first()
if not rule:
return jsonify({
'status': 'error',
'message': f'Alert rule {rule_id} not found'
}), 404
try:
# Delete the rule (cascade will delete related alerts)
current_app.db_session.delete(rule)
current_app.db_session.commit()
return jsonify({
'status': 'success',
'message': f'Alert rule {rule_id} deleted successfully'
})
except Exception as e:
current_app.db_session.rollback()
return jsonify({
'status': 'error',
'message': f'Failed to delete alert rule: {str(e)}'
}), 500
@bp.route('/stats', methods=['GET'])
@api_auth_required
def alert_stats():
"""
Get alert statistics.
Query params:
days: Number of days to look back (default: 7)
Returns:
JSON response with alert statistics
"""
days = request.args.get('days', 7, type=int)
cutoff_date = datetime.now(timezone.utc) - timedelta(days=days)
# Get alerts in date range
alerts = current_app.db_session.query(Alert).filter(Alert.created_at >= cutoff_date).all()
# Calculate statistics
total_alerts = len(alerts)
alerts_by_severity = {'critical': 0, 'warning': 0, 'info': 0}
alerts_by_type = {}
unacknowledged_count = 0
for alert in alerts:
# Count by severity
if alert.severity in alerts_by_severity:
alerts_by_severity[alert.severity] += 1
# Count by type
if alert.alert_type not in alerts_by_type:
alerts_by_type[alert.alert_type] = 0
alerts_by_type[alert.alert_type] += 1
# Count unacknowledged
if not alert.acknowledged:
unacknowledged_count += 1
return jsonify({
'stats': {
'total_alerts': total_alerts,
'unacknowledged_count': unacknowledged_count,
'alerts_by_severity': alerts_by_severity,
'alerts_by_type': alerts_by_type,
'date_range': {
'start': cutoff_date.isoformat(),
'end': datetime.now(timezone.utc).isoformat(),
'days': days
}
}
})
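# --- Editorial usage sketch (not part of this module) ----------------------
# Pulling 30-day alert statistics; URL prefix and auth header assumed.
def _example_alert_stats(base_url='http://localhost:8080', token='<token>'):
    import requests
    resp = requests.get(f'{base_url}/api/alerts/stats', params={'days': 30},
                        headers={'X-API-Key': token}, timeout=10)
    stats = resp.json()['stats']
    return stats['unacknowledged_count'], stats['alerts_by_severity']
# ----------------------------------------------------------------------------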
# Health check endpoint
@bp.route('/health', methods=['GET'])
def health_check():
"""
Health check endpoint for monitoring.
Returns:
JSON response with API health status
"""
return jsonify({
'status': 'healthy',
'api': 'alerts',
'version': '1.0.0-phase5'
})

app/web/api/configs.py (new file, 461 lines)

@@ -0,0 +1,461 @@
"""
Configs API blueprint.
Handles endpoints for managing scan configurations stored in the database.
Provides REST API for creating, updating, and deleting configs that reference sites.
"""
import logging
from flask import Blueprint, jsonify, request, current_app
from web.auth.decorators import api_auth_required
from web.services.config_service import ConfigService
bp = Blueprint('configs', __name__)
logger = logging.getLogger(__name__)
# ============================================================================
# Database-based Config Endpoints (Primary)
# ============================================================================
@bp.route('', methods=['GET'])
@api_auth_required
def list_configs():
"""
List all scan configurations from database.
Returns:
JSON response with list of configs:
{
"configs": [
{
"id": 1,
"title": "Production Scan",
"description": "Weekly production scan",
"site_count": 3,
"sites": [
{"id": 1, "name": "Production DC"},
{"id": 2, "name": "DMZ"}
],
"created_at": "2025-11-19T10:30:00Z",
"updated_at": "2025-11-19T10:30:00Z"
}
]
}
"""
try:
config_service = ConfigService(db_session=current_app.db_session)
configs = config_service.list_configs_db()
logger.info(f"Listed {len(configs)} configs from database")
return jsonify({
'configs': configs
})
except Exception as e:
logger.error(f"Unexpected error listing configs: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('', methods=['POST'])
@api_auth_required
def create_config():
"""
Create a new scan configuration in the database.
Request:
JSON with:
{
"title": "Production Scan",
"description": "Weekly production scan (optional)",
"site_ids": [1, 2, 3]
}
Returns:
JSON response with created config:
{
"success": true,
"config": {
"id": 1,
"title": "Production Scan",
"description": "...",
"site_count": 3,
"sites": [...],
"created_at": "2025-11-19T10:30:00Z",
"updated_at": "2025-11-19T10:30:00Z"
}
}
Error responses:
- 400: Validation error or missing fields
- 500: Internal server error
"""
try:
data = request.get_json()
if not data:
return jsonify({
'error': 'Bad request',
'message': 'Request body must be JSON'
}), 400
# Validate required fields
if 'title' not in data:
return jsonify({
'error': 'Bad request',
'message': 'Missing required field: title'
}), 400
if 'site_ids' not in data:
return jsonify({
'error': 'Bad request',
'message': 'Missing required field: site_ids'
}), 400
title = data['title']
description = data.get('description', None)
site_ids = data['site_ids']
if not isinstance(site_ids, list):
return jsonify({
'error': 'Bad request',
'message': 'Field site_ids must be an array'
}), 400
# Create config
config_service = ConfigService(db_session=current_app.db_session)
config = config_service.create_config(title, description, site_ids)
logger.info(f"Created config: {config['title']} (ID: {config['id']})")
return jsonify({
'success': True,
'config': config
}), 201
except ValueError as e:
logger.warning(f"Config validation failed: {str(e)}")
return jsonify({
'error': 'Validation error',
'message': str(e)
}), 400
except Exception as e:
logger.error(f"Unexpected error creating config: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
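# --- Editorial usage sketch (not part of this module) ----------------------
# Creating a config that references existing site IDs; the '/api/configs'
# mount prefix and X-API-Key header are assumptions, and the site IDs are
# sample values.
def _example_create_config(base_url='http://localhost:8080', token='<token>'):
    import requests
    resp = requests.post(
        f'{base_url}/api/configs',
        json={'title': 'Production Scan',
              'description': 'Weekly production scan',
              'site_ids': [1, 2, 3]},
        headers={'X-API-Key': token}, timeout=10,
    )
    resp.raise_for_status()              # expects 201 on success
    return resp.json()['config']['id']
# ----------------------------------------------------------------------------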
@bp.route('/<int:config_id>', methods=['GET'])
@api_auth_required
def get_config(config_id: int):
"""
Get a scan configuration by ID.
Args:
config_id: Configuration ID
Returns:
JSON response with config details:
{
"id": 1,
"title": "Production Scan",
"description": "...",
"site_count": 3,
"sites": [
{
"id": 1,
"name": "Production DC",
"description": "...",
"ip_count": 5
}
],
"created_at": "2025-11-19T10:30:00Z",
"updated_at": "2025-11-19T10:30:00Z"
}
Error responses:
- 404: Config not found
- 500: Internal server error
"""
try:
config_service = ConfigService(db_session=current_app.db_session)
config = config_service.get_config_by_id(config_id)
logger.info(f"Retrieved config: {config['title']} (ID: {config_id})")
return jsonify(config)
except ValueError as e:
logger.warning(f"Config not found: {config_id}")
return jsonify({
'error': 'Not found',
'message': str(e)
}), 404
except Exception as e:
logger.error(f"Unexpected error getting config {config_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:config_id>', methods=['PUT'])
@api_auth_required
def update_config(config_id: int):
"""
Update an existing scan configuration.
Args:
config_id: Configuration ID
Request:
JSON with (all fields optional):
{
"title": "New Title",
"description": "New Description",
"site_ids": [1, 2, 3]
}
Returns:
JSON response with updated config:
{
"success": true,
"config": {...}
}
Error responses:
- 400: Validation error
- 404: Config not found
- 500: Internal server error
"""
try:
data = request.get_json()
if not data:
return jsonify({
'error': 'Bad request',
'message': 'Request body must be JSON'
}), 400
title = data.get('title', None)
description = data.get('description', None)
site_ids = data.get('site_ids', None)
if site_ids is not None and not isinstance(site_ids, list):
return jsonify({
'error': 'Bad request',
'message': 'Field site_ids must be an array'
}), 400
# Update config
config_service = ConfigService(db_session=current_app.db_session)
config = config_service.update_config(config_id, title, description, site_ids)
logger.info(f"Updated config: {config['title']} (ID: {config_id})")
return jsonify({
'success': True,
'config': config
})
except ValueError as e:
if 'not found' in str(e).lower():
logger.warning(f"Config not found: {config_id}")
return jsonify({
'error': 'Not found',
'message': str(e)
}), 404
else:
logger.warning(f"Config validation failed: {str(e)}")
return jsonify({
'error': 'Validation error',
'message': str(e)
}), 400
except Exception as e:
logger.error(f"Unexpected error updating config {config_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:config_id>', methods=['DELETE'])
@api_auth_required
def delete_config(config_id: int):
"""
Delete a scan configuration.
Args:
config_id: Configuration ID
Returns:
JSON response with success status:
{
"success": true,
"message": "Config deleted successfully"
}
Error responses:
- 404: Config not found
- 500: Internal server error
"""
try:
config_service = ConfigService(db_session=current_app.db_session)
config_service.delete_config(config_id)
logger.info(f"Deleted config (ID: {config_id})")
return jsonify({
'success': True,
'message': 'Config deleted successfully'
})
except ValueError as e:
logger.warning(f"Config not found: {config_id}")
return jsonify({
'error': 'Not found',
'message': str(e)
}), 404
except Exception as e:
logger.error(f"Unexpected error deleting config {config_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:config_id>/sites', methods=['POST'])
@api_auth_required
def add_site_to_config(config_id: int):
"""
Add a site to an existing config.
Args:
config_id: Configuration ID
Request:
JSON with:
{
"site_id": 5
}
Returns:
JSON response with updated config:
{
"success": true,
"config": {...}
}
Error responses:
- 400: Validation error or site already in config
- 404: Config or site not found
- 500: Internal server error
"""
try:
data = request.get_json()
if not data or 'site_id' not in data:
return jsonify({
'error': 'Bad request',
'message': 'Missing required field: site_id'
}), 400
site_id = data['site_id']
# Add site to config
config_service = ConfigService(db_session=current_app.db_session)
config = config_service.add_site_to_config(config_id, site_id)
logger.info(f"Added site {site_id} to config {config_id}")
return jsonify({
'success': True,
'config': config
})
except ValueError as e:
if 'not found' in str(e).lower():
logger.warning(f"Config or site not found: {str(e)}")
return jsonify({
'error': 'Not found',
'message': str(e)
}), 404
else:
logger.warning(f"Validation error: {str(e)}")
return jsonify({
'error': 'Validation error',
'message': str(e)
}), 400
except Exception as e:
logger.error(f"Unexpected error adding site to config: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:config_id>/sites/<int:site_id>', methods=['DELETE'])
@api_auth_required
def remove_site_from_config(config_id: int, site_id: int):
"""
Remove a site from a config.
Args:
config_id: Configuration ID
site_id: Site ID to remove
Returns:
JSON response with updated config:
{
"success": true,
"config": {...}
}
Error responses:
- 400: Validation error (e.g., last site cannot be removed)
- 404: Config not found or site not in config
- 500: Internal server error
"""
try:
config_service = ConfigService(db_session=current_app.db_session)
config = config_service.remove_site_from_config(config_id, site_id)
logger.info(f"Removed site {site_id} from config {config_id}")
return jsonify({
'success': True,
'config': config
})
except ValueError as e:
if 'not found' in str(e).lower() or 'not in this config' in str(e).lower():
logger.warning(f"Config or site not found: {str(e)}")
return jsonify({
'error': 'Not found',
'message': str(e)
}), 404
else:
logger.warning(f"Validation error: {str(e)}")
return jsonify({
'error': 'Validation error',
'message': str(e)
}), 400
except Exception as e:
logger.error(f"Unexpected error removing site from config: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
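Taken together, the config endpoints give a full create → inspect → modify → delete lifecycle. A minimal client sketch follows; the /api/configs mount point and the X-API-Key header are assumptions (neither the app factory nor the auth decorator internals appear in this diff):

import requests

BASE = 'http://localhost:5000/api/configs'  # assumed mount point
HEADERS = {'X-API-Key': 'changeme'}         # assumed auth scheme

# Create a config from two existing sites
resp = requests.post(BASE, headers=HEADERS, json={
    'title': 'Production Scan',
    'description': 'Nightly perimeter check',
    'site_ids': [1, 2],
})
config = resp.json()['config']

# Attach a third site, then detach it again
requests.post(f"{BASE}/{config['id']}/sites", headers=HEADERS, json={'site_id': 3})
requests.delete(f"{BASE}/{config['id']}/sites/3", headers=HEADERS)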

app/web/api/scans.py Normal file

@@ -0,0 +1,642 @@
"""
Scans API blueprint.
Handles endpoints for triggering scans, listing scan history, and retrieving
scan results.
"""
import json
import logging
from datetime import datetime
from pathlib import Path
from flask import Blueprint, current_app, jsonify, request
from sqlalchemy.exc import SQLAlchemyError
from web.auth.decorators import api_auth_required
from web.models import Scan, ScanProgress
from web.services.scan_service import ScanService
from web.utils.pagination import validate_page_params
from web.jobs.scan_job import stop_scan
bp = Blueprint('scans', __name__)
logger = logging.getLogger(__name__)
def _recover_orphaned_scan(scan: Scan, session) -> dict:
"""
Recover an orphaned scan by checking for output files.
If output files exist: mark as 'completed' (smart recovery)
If no output files: mark as 'cancelled'
Args:
scan: The orphaned Scan object
session: Database session
Returns:
Dictionary with recovery result for API response
"""
# Check for existing output files
output_exists = False
output_files_found = []
# Check paths stored in database
if scan.json_path and Path(scan.json_path).exists():
output_exists = True
output_files_found.append('json')
if scan.html_path and Path(scan.html_path).exists():
output_files_found.append('html')
if scan.zip_path and Path(scan.zip_path).exists():
output_files_found.append('zip')
# Also check by timestamp pattern if paths not stored yet
if not output_exists and scan.started_at:
output_dir = Path('/app/output')
if output_dir.exists():
timestamp_pattern = scan.started_at.strftime('%Y%m%d')
for json_file in output_dir.glob(f'scan_report_{timestamp_pattern}*.json'):
output_exists = True
output_files_found.append('json')
# Update scan record with found paths
scan.json_path = str(json_file)
html_file = json_file.with_suffix('.html')
if html_file.exists():
scan.html_path = str(html_file)
output_files_found.append('html')
zip_file = json_file.with_suffix('.zip')
if zip_file.exists():
scan.zip_path = str(zip_file)
output_files_found.append('zip')
break
if output_exists:
# Smart recovery: outputs exist, mark as completed
scan.status = 'completed'
scan.completed_at = datetime.utcnow()
if scan.started_at:
scan.duration = (datetime.utcnow() - scan.started_at).total_seconds()
scan.error_message = None
session.commit()
logger.info(f"Scan {scan.id}: Recovered as completed (files: {output_files_found})")
return {
'scan_id': scan.id,
'status': 'completed',
'message': f'Scan recovered as completed (output files found: {", ".join(output_files_found)})',
'recovery_type': 'smart_recovery'
}
else:
# No outputs: mark as cancelled
scan.status = 'cancelled'
scan.completed_at = datetime.utcnow()
if scan.started_at:
scan.duration = (datetime.utcnow() - scan.started_at).total_seconds()
scan.error_message = 'Scan process was interrupted before completion. No output files were generated.'
session.commit()
logger.info(f"Scan {scan.id}: Marked as cancelled (orphaned, no output files)")
return {
'scan_id': scan.id,
'status': 'cancelled',
'message': 'Orphaned scan cancelled (no output files found)',
'recovery_type': 'orphan_cleanup'
}
@bp.route('', methods=['GET'])
@api_auth_required
def list_scans():
"""
List all scans with pagination.
Query params:
page: Page number (default: 1)
per_page: Items per page (default: 20, max: 100)
status: Filter by status (running, finalizing, completed, failed, cancelled)
Returns:
JSON response with scans list and pagination info
"""
try:
# Get and validate query parameters
page = request.args.get('page', 1, type=int)
per_page = request.args.get('per_page', 20, type=int)
status_filter = request.args.get('status', None, type=str)
# Validate pagination params
page, per_page = validate_page_params(page, per_page)
# Get scans from service
scan_service = ScanService(current_app.db_session)
paginated_result = scan_service.list_scans(
page=page,
per_page=per_page,
status_filter=status_filter
)
logger.info(f"Listed scans: page={page}, per_page={per_page}, status={status_filter}, total={paginated_result.total}")
return jsonify({
'scans': paginated_result.items,
'total': paginated_result.total,
'page': paginated_result.page,
'per_page': paginated_result.per_page,
'total_pages': paginated_result.pages,
'has_prev': paginated_result.has_prev,
'has_next': paginated_result.has_next
})
except ValueError as e:
logger.warning(f"Invalid request parameters: {str(e)}")
return jsonify({
'error': 'Invalid request',
'message': str(e)
}), 400
except SQLAlchemyError as e:
logger.error(f"Database error listing scans: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to retrieve scans'
}), 500
except Exception as e:
logger.error(f"Unexpected error listing scans: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:scan_id>', methods=['GET'])
@api_auth_required
def get_scan(scan_id):
"""
Get details for a specific scan.
Args:
scan_id: Scan ID
Returns:
JSON response with scan details
"""
try:
# Get scan from service
scan_service = ScanService(current_app.db_session)
scan = scan_service.get_scan(scan_id)
if not scan:
logger.warning(f"Scan not found: {scan_id}")
return jsonify({
'error': 'Not found',
'message': f'Scan with ID {scan_id} not found'
}), 404
logger.info(f"Retrieved scan details: {scan_id}")
return jsonify(scan)
except SQLAlchemyError as e:
logger.error(f"Database error retrieving scan {scan_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to retrieve scan'
}), 500
except Exception as e:
logger.error(f"Unexpected error retrieving scan {scan_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('', methods=['POST'])
@api_auth_required
def trigger_scan():
"""
Trigger a new scan.
Request body:
config_id: Database config ID (required)
Returns:
JSON response with scan_id and status
"""
try:
# Get request data
data = request.get_json() or {}
config_id = data.get('config_id')
# Validate required fields
if not config_id:
logger.warning("Scan trigger request missing config_id")
return jsonify({
'error': 'Invalid request',
'message': 'config_id is required'
}), 400
# Validate config_id is an integer
try:
config_id = int(config_id)
except (TypeError, ValueError):
logger.warning(f"Invalid config_id type: {config_id}")
return jsonify({
'error': 'Invalid request',
'message': 'config_id must be an integer'
}), 400
# Trigger scan via service
scan_service = ScanService(current_app.db_session)
scan_id = scan_service.trigger_scan(
config_id=config_id,
triggered_by='api',
scheduler=current_app.scheduler
)
logger.info(f"Scan {scan_id} triggered via API: config_id={config_id}")
return jsonify({
'scan_id': scan_id,
'status': 'running',
'message': 'Scan queued successfully'
}), 201
except ValueError as e:
# Config validation error
error_message = str(e)
logger.warning(f"Invalid config: {error_message}")
logger.warning(f"Request data: config_id='{config_id}'")
return jsonify({
'error': 'Invalid request',
'message': error_message
}), 400
except SQLAlchemyError as e:
logger.error(f"Database error triggering scan: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to create scan'
}), 500
except Exception as e:
logger.error(f"Unexpected error triggering scan: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:scan_id>', methods=['DELETE'])
@api_auth_required
def delete_scan(scan_id):
"""
Delete a scan and its associated files.
Args:
scan_id: Scan ID to delete
Returns:
JSON response with deletion status
"""
try:
# Delete scan via service
scan_service = ScanService(current_app.db_session)
scan_service.delete_scan(scan_id)
logger.info(f"Scan {scan_id} deleted successfully")
return jsonify({
'scan_id': scan_id,
'message': 'Scan deleted successfully'
}), 200
except ValueError as e:
# Scan not found
logger.warning(f"Scan deletion failed: {str(e)}")
return jsonify({
'error': 'Not found',
'message': str(e)
}), 404
except SQLAlchemyError as e:
logger.error(f"Database error deleting scan {scan_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to delete scan'
}), 500
except Exception as e:
logger.error(f"Unexpected error deleting scan {scan_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:scan_id>/stop', methods=['POST'])
@api_auth_required
def stop_running_scan(scan_id):
"""
Stop a running scan with smart recovery for orphaned scans.
If the scan is actively running in the registry, sends a cancel signal.
If the scan shows as running/finalizing but is not in the registry (orphaned),
performs smart recovery: marks as 'completed' if output files exist,
otherwise marks as 'cancelled'.
Args:
scan_id: Scan ID to stop
Returns:
JSON response with stop status or recovery result
"""
try:
session = current_app.db_session
# Check if scan exists
scan = session.query(Scan).filter_by(id=scan_id).first()
if not scan:
logger.warning(f"Scan not found for stop request: {scan_id}")
return jsonify({
'error': 'Not found',
'message': f'Scan with ID {scan_id} not found'
}), 404
# Allow stopping scans with status 'running' or 'finalizing'
if scan.status not in ('running', 'finalizing'):
logger.warning(f"Cannot stop scan {scan_id}: status is '{scan.status}'")
return jsonify({
'error': 'Invalid state',
'message': f"Cannot stop scan: status is '{scan.status}'"
}), 400
# Get database URL from app config
db_url = current_app.config['SQLALCHEMY_DATABASE_URI']
# Attempt to stop the scan
stopped = stop_scan(scan_id, db_url)
if stopped:
logger.info(f"Stop signal sent to scan {scan_id}")
return jsonify({
'scan_id': scan_id,
'message': 'Stop signal sent to scan',
'status': 'stopping'
}), 200
else:
# Scanner not in registry - this is an orphaned scan
# Attempt smart recovery
logger.warning(f"Scan {scan_id} not in registry, attempting smart recovery")
recovery_result = _recover_orphaned_scan(scan, session)
return jsonify(recovery_result), 200
except SQLAlchemyError as e:
logger.error(f"Database error stopping scan {scan_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to stop scan'
}), 500
except Exception as e:
logger.error(f"Unexpected error stopping scan {scan_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:scan_id>/status', methods=['GET'])
@api_auth_required
def get_scan_status(scan_id):
"""
Get current status of a running scan.
Args:
scan_id: Scan ID
Returns:
JSON response with scan status and progress
"""
try:
# Get scan status from service
scan_service = ScanService(current_app.db_session)
status = scan_service.get_scan_status(scan_id)
if not status:
logger.warning(f"Scan not found for status check: {scan_id}")
return jsonify({
'error': 'Not found',
'message': f'Scan with ID {scan_id} not found'
}), 404
logger.debug(f"Retrieved status for scan {scan_id}: {status['status']}")
return jsonify(status)
except SQLAlchemyError as e:
logger.error(f"Database error retrieving scan status {scan_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to retrieve scan status'
}), 500
except Exception as e:
logger.error(f"Unexpected error retrieving scan status {scan_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:scan_id>/progress', methods=['GET'])
@api_auth_required
def get_scan_progress(scan_id):
"""
Get detailed progress for a running scan including per-IP results.
Args:
scan_id: Scan ID
Returns:
JSON response with scan progress including:
- current_phase: Current scan phase
- total_ips: Total IPs being scanned
- completed_ips: Number of IPs completed in current phase
- progress_entries: List of per-IP progress with discovered results
"""
try:
session = current_app.db_session
# Get scan record
scan = session.query(Scan).filter_by(id=scan_id).first()
if not scan:
logger.warning(f"Scan not found for progress check: {scan_id}")
return jsonify({
'error': 'Not found',
'message': f'Scan with ID {scan_id} not found'
}), 404
# Get progress entries
progress_entries = session.query(ScanProgress).filter_by(scan_id=scan_id).all()
# Build progress data
entries = []
for entry in progress_entries:
entry_data = {
'ip_address': entry.ip_address,
'site_name': entry.site_name,
'phase': entry.phase,
'status': entry.status,
'ping_result': entry.ping_result
}
# Parse JSON fields
if entry.tcp_ports:
entry_data['tcp_ports'] = json.loads(entry.tcp_ports)
else:
entry_data['tcp_ports'] = []
if entry.udp_ports:
entry_data['udp_ports'] = json.loads(entry.udp_ports)
else:
entry_data['udp_ports'] = []
if entry.services:
entry_data['services'] = json.loads(entry.services)
else:
entry_data['services'] = []
entries.append(entry_data)
# Sort entries by site name then IP (numerically)
def ip_sort_key(ip_str):
"""Convert IP to tuple of integers for proper numeric sorting."""
try:
return tuple(int(octet) for octet in ip_str.split('.'))
except (ValueError, AttributeError):
return (0, 0, 0, 0)
entries.sort(key=lambda x: (x['site_name'] or '', ip_sort_key(x['ip_address'])))
response = {
'scan_id': scan_id,
'status': scan.status,
'current_phase': scan.current_phase or 'pending',
'total_ips': scan.total_ips or 0,
'completed_ips': scan.completed_ips or 0,
'progress_entries': entries
}
logger.debug(f"Retrieved progress for scan {scan_id}: phase={scan.current_phase}, {scan.completed_ips}/{scan.total_ips} IPs")
return jsonify(response)
except SQLAlchemyError as e:
logger.error(f"Database error retrieving scan progress {scan_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to retrieve scan progress'
}), 500
except Exception as e:
logger.error(f"Unexpected error retrieving scan progress {scan_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
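# Example (sketch): polling this endpoint from a client. The /api/scans mount
# point and the X-API-Key header are assumptions, not shown in this file:
#
#   import requests, time
#   while True:
#       p = requests.get('http://localhost:5000/api/scans/42/progress',
#                        headers={'X-API-Key': 'changeme'}).json()
#       print(p['current_phase'], f"{p['completed_ips']}/{p['total_ips']} IPs")
#       if p['status'] not in ('running', 'finalizing'):
#           break
#       time.sleep(5)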
@bp.route('/by-ip/<ip_address>', methods=['GET'])
@api_auth_required
def get_scans_by_ip(ip_address):
"""
Get last 10 scans containing a specific IP address.
Args:
ip_address: IP address to search for
Returns:
JSON response with list of scans containing the IP
"""
try:
# Get scans from service
scan_service = ScanService(current_app.db_session)
scans = scan_service.get_scans_by_ip(ip_address)
logger.info(f"Retrieved {len(scans)} scans for IP: {ip_address}")
return jsonify({
'ip_address': ip_address,
'scans': scans,
'count': len(scans)
})
except SQLAlchemyError as e:
logger.error(f"Database error retrieving scans for IP {ip_address}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to retrieve scans'
}), 500
except Exception as e:
logger.error(f"Unexpected error retrieving scans for IP {ip_address}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:scan_id1>/compare/<int:scan_id2>', methods=['GET'])
@api_auth_required
def compare_scans(scan_id1, scan_id2):
"""
Compare two scans and show differences.
Compares ports, services, and certificates between two scans,
highlighting added, removed, and changed items.
Args:
scan_id1: First (older) scan ID
scan_id2: Second (newer) scan ID
Returns:
JSON response with comparison results including:
- scan1, scan2: Metadata for both scans
- ports: Added, removed, and unchanged ports
- services: Added, removed, and changed services
- certificates: Added, removed, and changed certificates
- drift_score: Overall drift metric (0.0-1.0)
"""
try:
# Compare scans using service
scan_service = ScanService(current_app.db_session)
comparison = scan_service.compare_scans(scan_id1, scan_id2)
if not comparison:
logger.warning(f"Scan comparison failed: one or both scans not found ({scan_id1}, {scan_id2})")
return jsonify({
'error': 'Not found',
'message': 'One or both scans not found'
}), 404
logger.info(f"Compared scans {scan_id1} and {scan_id2}: drift_score={comparison['drift_score']}")
return jsonify(comparison), 200
except SQLAlchemyError as e:
logger.error(f"Database error comparing scans {scan_id1} and {scan_id2}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to compare scans'
}), 500
except Exception as e:
logger.error(f"Unexpected error comparing scans {scan_id1} and {scan_id2}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
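# Example (sketch): diffing two completed scans, under the same assumed mount
# point and auth header as above:
#
#   import requests
#   diff = requests.get('http://localhost:5000/api/scans/41/compare/42',
#                       headers={'X-API-Key': 'changeme'}).json()
#   print('drift score:', diff['drift_score'])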
# Health check endpoint
@bp.route('/health', methods=['GET'])
def health_check():
"""
Health check endpoint for monitoring.
Returns:
JSON response with API health status
"""
return jsonify({
'status': 'healthy',
'api': 'scans',
'version': '1.0.0-phase1'
})
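The stop endpoint is where the smart-recovery logic surfaces to clients: a single POST can come back as 'stopping' (cancel signal sent to a live scanner), 'completed' (orphan recovered because output files exist), or 'cancelled' (orphan with no outputs). A client sketch, under the same mount-point and auth assumptions as the earlier example:

import requests

BASE = 'http://localhost:5000/api/scans'  # assumed mount point
HEADERS = {'X-API-Key': 'changeme'}       # assumed auth scheme

# Trigger a scan from a database-backed config
scan_id = requests.post(BASE, headers=HEADERS, json={'config_id': 1}).json()['scan_id']

# Later: ask it to stop. The response shape tells us which path was taken.
result = requests.post(f'{BASE}/{scan_id}/stop', headers=HEADERS).json()
if result.get('status') == 'stopping':
    print('cancel signal sent to a live scanner')
elif result.get('recovery_type') == 'smart_recovery':
    print('orphan recovered as completed (output files found)')
else:
    print('orphan cancelled (no output files)')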


@@ -88,7 +88,7 @@ def create_schedule():
Request body:
name: Schedule name (required)
- config_file: Path to YAML config (required)
+ config_id: Database config ID (required)
cron_expression: Cron expression (required, e.g., '0 2 * * *')
enabled: Whether schedule is active (optional, default: true)
@@ -99,7 +99,7 @@ def create_schedule():
data = request.get_json() or {}
# Validate required fields
- required = ['name', 'config_file', 'cron_expression']
+ required = ['name', 'config_id', 'cron_expression']
missing = [field for field in required if field not in data]
if missing:
return jsonify({'error': f'Missing required fields: {", ".join(missing)}'}), 400
@@ -108,7 +108,7 @@ def create_schedule():
schedule_service = ScheduleService(current_app.db_session)
schedule_id = schedule_service.create_schedule(
name=data['name'],
- config_file=data['config_file'],
+ config_id=data['config_id'],
cron_expression=data['cron_expression'],
enabled=data.get('enabled', True)
)
@@ -121,7 +121,7 @@ def create_schedule():
try:
current_app.scheduler.add_scheduled_scan(
schedule_id=schedule_id,
- config_file=schedule['config_file'],
+ config_id=schedule['config_id'],
cron_expression=schedule['cron_expression']
)
logger.info(f"Schedule {schedule_id} added to APScheduler")
@@ -154,7 +154,7 @@ def update_schedule(schedule_id):
Request body:
name: Schedule name (optional)
- config_file: Path to YAML config (optional)
+ config_id: Database config ID (optional)
cron_expression: Cron expression (optional)
enabled: Whether schedule is active (optional)
@@ -181,7 +181,7 @@ def update_schedule(schedule_id):
try:
# If cron expression or config changed, or enabled status changed
cron_changed = 'cron_expression' in data
- config_changed = 'config_file' in data
+ config_changed = 'config_id' in data
enabled_changed = 'enabled' in data
if enabled_changed:
@@ -189,7 +189,7 @@ def update_schedule(schedule_id):
# Re-add to scheduler (replaces existing)
current_app.scheduler.add_scheduled_scan(
schedule_id=schedule_id,
- config_file=updated_schedule['config_file'],
+ config_id=updated_schedule['config_id'],
cron_expression=updated_schedule['cron_expression']
)
logger.info(f"Schedule {schedule_id} enabled and added to APScheduler")
@@ -201,7 +201,7 @@ def update_schedule(schedule_id):
# Reload schedule in APScheduler
current_app.scheduler.add_scheduled_scan(
schedule_id=schedule_id,
- config_file=updated_schedule['config_file'],
+ config_id=updated_schedule['config_id'],
cron_expression=updated_schedule['cron_expression']
)
logger.info(f"Schedule {schedule_id} reloaded in APScheduler")
@@ -293,7 +293,7 @@ def trigger_schedule(schedule_id):
scheduler = current_app.scheduler if hasattr(current_app, 'scheduler') else None
scan_id = scan_service.trigger_scan(
- config_file=schedule['config_file'],
+ config_id=schedule['config_id'],
triggered_by='manual',
schedule_id=schedule_id,
scheduler=scheduler
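After these hunks, schedule payloads reference configs by database ID instead of YAML path. A sketch of the new request shape; the /api/schedules mount point and the auth header are assumptions:

import requests

# The old payload carried config_file='configs/prod.yaml'; the new one
# references the database config instead:
requests.post('http://localhost:5000/api/schedules',  # assumed mount point
              headers={'X-API-Key': 'changeme'},      # assumed auth scheme
              json={
                  'name': 'Nightly production scan',
                  'config_id': 1,
                  'cron_expression': '0 2 * * *',
                  'enabled': True,
              })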


@@ -75,6 +75,12 @@ def update_settings():
'status': 'success',
'message': f'Updated {len(settings_dict)} settings'
})
+ except ValueError as e:
+ # Handle read-only setting attempts
+ return jsonify({
+ 'status': 'error',
+ 'message': str(e)
+ }), 403
except Exception as e:
current_app.logger.error(f"Failed to update settings: {e}")
return jsonify({
@@ -112,7 +118,8 @@ def get_setting(key):
return jsonify({
'status': 'success',
'key': key,
- 'value': value
+ 'value': value,
+ 'read_only': settings_manager._is_read_only(key)
})
except Exception as e:
current_app.logger.error(f"Failed to retrieve setting {key}: {e}")
@@ -154,6 +161,12 @@ def update_setting(key):
'status': 'success',
'message': f'Setting "{key}" updated'
})
+ except ValueError as e:
+ # Handle read-only setting attempts
+ return jsonify({
+ 'status': 'error',
+ 'message': str(e)
+ }), 403
except Exception as e:
current_app.logger.error(f"Failed to update setting {key}: {e}")
return jsonify({
@@ -176,6 +189,14 @@ def delete_setting(key):
"""
try:
settings_manager = get_settings_manager()
+ # Prevent deletion of read-only settings
+ if settings_manager._is_read_only(key):
+ return jsonify({
+ 'status': 'error',
+ 'message': f'Setting "{key}" is read-only and cannot be deleted'
+ }), 403
deleted = settings_manager.delete(key)
if not deleted:
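The net effect of these hunks is that read-only settings now fail loudly: updates and deletes return 403 instead of succeeding silently, and GET responses advertise the flag. A sketch assuming the blueprint is mounted at /api/settings and using a hypothetical key name:

import requests

BASE = 'http://localhost:5000/api/settings'  # assumed mount point
HEADERS = {'X-API-Key': 'changeme'}          # assumed auth scheme
KEY = 'app_version'                          # hypothetical read-only key

info = requests.get(f'{BASE}/{KEY}', headers=HEADERS).json()
if info.get('read_only'):
    # Both of these should now come back as HTTP 403 with a descriptive message
    assert requests.put(f'{BASE}/{KEY}', headers=HEADERS, json={'value': 'x'}).status_code == 403
    assert requests.delete(f'{BASE}/{KEY}', headers=HEADERS).status_code == 403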

app/web/api/sites.py Normal file

@@ -0,0 +1,661 @@
"""
Sites API blueprint.
Handles endpoints for managing reusable site definitions, including CIDR ranges
and IP-level overrides.
"""
import logging
from flask import Blueprint, current_app, jsonify, request
from sqlalchemy.exc import SQLAlchemyError
from web.auth.decorators import api_auth_required
from web.services.site_service import SiteService
from web.utils.pagination import validate_page_params
bp = Blueprint('sites', __name__)
logger = logging.getLogger(__name__)
@bp.route('', methods=['GET'])
@api_auth_required
def list_sites():
"""
List all sites with pagination.
Query params:
page: Page number (default: 1)
per_page: Items per page (default: 20, max: 100)
all: If 'true', returns all sites without pagination (for dropdowns)
Returns:
JSON response with sites list and pagination info
"""
try:
# Check if requesting all sites (no pagination)
if request.args.get('all', '').lower() == 'true':
site_service = SiteService(current_app.db_session)
sites = site_service.list_all_sites()
ip_stats = site_service.get_global_ip_stats()
logger.info(f"Listed all sites (count={len(sites)})")
return jsonify({
'sites': sites,
'total_ips': ip_stats['total_ips'],
'unique_ips': ip_stats['unique_ips'],
'duplicate_ips': ip_stats['duplicate_ips']
})
# Get and validate query parameters
page = request.args.get('page', 1, type=int)
per_page = request.args.get('per_page', 20, type=int)
# Validate pagination params
page, per_page = validate_page_params(page, per_page)
# Get sites from service
site_service = SiteService(current_app.db_session)
paginated_result = site_service.list_sites(page=page, per_page=per_page)
logger.info(f"Listed sites: page={page}, per_page={per_page}, total={paginated_result.total}")
return jsonify({
'sites': paginated_result.items,
'total': paginated_result.total,
'page': paginated_result.page,
'per_page': paginated_result.per_page,
'total_pages': paginated_result.pages,
'has_prev': paginated_result.has_prev,
'has_next': paginated_result.has_next
})
except ValueError as e:
logger.warning(f"Invalid request parameters: {str(e)}")
return jsonify({
'error': 'Invalid request',
'message': str(e)
}), 400
except SQLAlchemyError as e:
logger.error(f"Database error listing sites: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to retrieve sites'
}), 500
except Exception as e:
logger.error(f"Unexpected error listing sites: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:site_id>', methods=['GET'])
@api_auth_required
def get_site(site_id):
"""
Get details for a specific site.
Args:
site_id: Site ID
Returns:
JSON response with site details including CIDRs and IP overrides
"""
try:
site_service = SiteService(current_app.db_session)
site = site_service.get_site(site_id)
if not site:
logger.warning(f"Site not found: {site_id}")
return jsonify({
'error': 'Not found',
'message': f'Site with ID {site_id} not found'
}), 404
logger.info(f"Retrieved site details: {site_id}")
return jsonify(site)
except SQLAlchemyError as e:
logger.error(f"Database error retrieving site {site_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to retrieve site'
}), 500
except Exception as e:
logger.error(f"Unexpected error retrieving site {site_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('', methods=['POST'])
@api_auth_required
def create_site():
"""
Create a new site.
Request body:
name: Site name (required, must be unique)
description: Site description (optional)
Note: the site is created empty; CIDRs and IPs are added afterwards
via the /<site_id>/ips and /<site_id>/ips/bulk endpoints.
Returns:
JSON response with created site data
"""
try:
data = request.get_json() or {}
# Validate required fields
name = data.get('name')
if not name:
logger.warning("Site creation request missing name")
return jsonify({
'error': 'Invalid request',
'message': 'name is required'
}), 400
description = data.get('description')
# Create site (empty initially)
site_service = SiteService(current_app.db_session)
site = site_service.create_site(
name=name,
description=description
)
logger.info(f"Created site '{name}' (id={site['id']})")
return jsonify(site), 201
except ValueError as e:
logger.warning(f"Invalid site creation request: {str(e)}")
return jsonify({
'error': 'Invalid request',
'message': str(e)
}), 400
except SQLAlchemyError as e:
logger.error(f"Database error creating site: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to create site'
}), 500
except Exception as e:
logger.error(f"Unexpected error creating site: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:site_id>', methods=['PUT'])
@api_auth_required
def update_site(site_id):
"""
Update site metadata (name and/or description).
Args:
site_id: Site ID
Request body:
name: New site name (optional, must be unique)
description: New description (optional)
Returns:
JSON response with updated site data
"""
try:
data = request.get_json() or {}
name = data.get('name')
description = data.get('description')
# Update site
site_service = SiteService(current_app.db_session)
site = site_service.update_site(
site_id=site_id,
name=name,
description=description
)
logger.info(f"Updated site {site_id}")
return jsonify(site)
except ValueError as e:
logger.warning(f"Invalid site update request: {str(e)}")
return jsonify({
'error': 'Invalid request',
'message': str(e)
}), 400
except SQLAlchemyError as e:
logger.error(f"Database error updating site {site_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to update site'
}), 500
except Exception as e:
logger.error(f"Unexpected error updating site {site_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:site_id>', methods=['DELETE'])
@api_auth_required
def delete_site(site_id):
"""
Delete a site.
Prevents deletion if site is used in any scan.
Args:
site_id: Site ID
Returns:
JSON response with success message
"""
try:
site_service = SiteService(current_app.db_session)
site_service.delete_site(site_id)
logger.info(f"Deleted site {site_id}")
return jsonify({
'message': f'Site {site_id} deleted successfully'
})
except ValueError as e:
logger.warning(f"Cannot delete site {site_id}: {str(e)}")
return jsonify({
'error': 'Invalid request',
'message': str(e)
}), 400
except SQLAlchemyError as e:
logger.error(f"Database error deleting site {site_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to delete site'
}), 500
except Exception as e:
logger.error(f"Unexpected error deleting site {site_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:site_id>/ips/bulk', methods=['POST'])
@api_auth_required
def bulk_add_ips(site_id):
"""
Bulk add IPs to a site from CIDR or list.
Args:
site_id: Site ID
Request body:
source_type: "cidr" or "list" (required)
cidr: CIDR notation if source_type="cidr" (e.g., "10.0.0.0/24")
ips: List of IP addresses if source_type="list" (e.g., ["10.0.0.1", "10.0.0.2"])
expected_ping: Expected ping response for all IPs (optional)
expected_tcp_ports: List of expected TCP ports for all IPs (optional)
expected_udp_ports: List of expected UDP ports for all IPs (optional)
Returns:
JSON response with count of IPs added and any errors
"""
try:
data = request.get_json() or {}
source_type = data.get('source_type')
if source_type not in ['cidr', 'list']:
return jsonify({
'error': 'Invalid request',
'message': 'source_type must be "cidr" or "list"'
}), 400
expected_ping = data.get('expected_ping')
expected_tcp_ports = data.get('expected_tcp_ports', [])
expected_udp_ports = data.get('expected_udp_ports', [])
site_service = SiteService(current_app.db_session)
if source_type == 'cidr':
cidr = data.get('cidr')
if not cidr:
return jsonify({
'error': 'Invalid request',
'message': 'cidr is required when source_type="cidr"'
}), 400
result = site_service.bulk_add_ips_from_cidr(
site_id=site_id,
cidr=cidr,
expected_ping=expected_ping,
expected_tcp_ports=expected_tcp_ports,
expected_udp_ports=expected_udp_ports
)
logger.info(f"Bulk added {result['ip_count']} IPs from CIDR '{cidr}' to site {site_id}")
return jsonify(result), 201
else: # source_type == 'list'
ip_list = data.get('ips', [])
if not isinstance(ip_list, list):
return jsonify({
'error': 'Invalid request',
'message': 'ips must be a list when source_type="list"'
}), 400
result = site_service.bulk_add_ips_from_list(
site_id=site_id,
ip_list=ip_list,
expected_ping=expected_ping,
expected_tcp_ports=expected_tcp_ports,
expected_udp_ports=expected_udp_ports
)
logger.info(f"Bulk added {result['ip_count']} IPs from list to site {site_id}")
return jsonify(result), 201
except ValueError as e:
logger.warning(f"Invalid bulk IP request: {str(e)}")
return jsonify({
'error': 'Invalid request',
'message': str(e)
}), 400
except SQLAlchemyError as e:
logger.error(f"Database error bulk adding IPs to site {site_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to add IPs'
}), 500
except Exception as e:
logger.error(f"Unexpected error bulk adding IPs to site {site_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:site_id>/ips', methods=['GET'])
@api_auth_required
def list_ips(site_id):
"""
List IPs in a site with pagination.
Query params:
page: Page number (default: 1)
per_page: Items per page (default: 50, max: 200)
Returns:
JSON response with IPs list and pagination info
"""
try:
# Get and validate query parameters
page = request.args.get('page', 1, type=int)
per_page = request.args.get('per_page', 50, type=int)
# Validate pagination params
page, per_page = validate_page_params(page, per_page, max_per_page=200)
# Get IPs from service
site_service = SiteService(current_app.db_session)
paginated_result = site_service.list_ips(
site_id=site_id,
page=page,
per_page=per_page
)
logger.info(f"Listed IPs for site {site_id}: page={page}, per_page={per_page}, total={paginated_result.total}")
return jsonify({
'ips': paginated_result.items,
'total': paginated_result.total,
'page': paginated_result.page,
'per_page': paginated_result.per_page,
'total_pages': paginated_result.pages,
'has_prev': paginated_result.has_prev,
'has_next': paginated_result.has_next
})
except ValueError as e:
logger.warning(f"Invalid request parameters: {str(e)}")
return jsonify({
'error': 'Invalid request',
'message': str(e)
}), 400
except SQLAlchemyError as e:
logger.error(f"Database error listing IPs for site {site_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to retrieve IPs'
}), 500
except Exception as e:
logger.error(f"Unexpected error listing IPs for site {site_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:site_id>/ips', methods=['POST'])
@api_auth_required
def add_standalone_ip(site_id):
"""
Add a standalone IP (without CIDR parent) to a site.
Args:
site_id: Site ID
Request body:
ip_address: IP address (required)
expected_ping: Expected ping response (optional)
expected_tcp_ports: List of expected TCP ports (optional)
expected_udp_ports: List of expected UDP ports (optional)
Returns:
JSON response with created IP data
"""
try:
data = request.get_json() or {}
# Validate required fields
ip_address = data.get('ip_address')
if not ip_address:
logger.warning("Standalone IP creation request missing ip_address")
return jsonify({
'error': 'Invalid request',
'message': 'ip_address is required'
}), 400
expected_ping = data.get('expected_ping')
expected_tcp_ports = data.get('expected_tcp_ports', [])
expected_udp_ports = data.get('expected_udp_ports', [])
# Add standalone IP
site_service = SiteService(current_app.db_session)
ip_data = site_service.add_standalone_ip(
site_id=site_id,
ip_address=ip_address,
expected_ping=expected_ping,
expected_tcp_ports=expected_tcp_ports,
expected_udp_ports=expected_udp_ports
)
logger.info(f"Added standalone IP '{ip_address}' to site {site_id}")
return jsonify(ip_data), 201
except ValueError as e:
logger.warning(f"Invalid standalone IP creation request: {str(e)}")
return jsonify({
'error': 'Invalid request',
'message': str(e)
}), 400
except SQLAlchemyError as e:
logger.error(f"Database error adding standalone IP to site {site_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to add IP'
}), 500
except Exception as e:
logger.error(f"Unexpected error adding standalone IP to site {site_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:site_id>/ips/<int:ip_id>', methods=['PUT'])
@api_auth_required
def update_ip_settings(site_id, ip_id):
"""
Update settings for an individual IP.
Args:
site_id: Site ID
ip_id: IP ID
Request body:
expected_ping: New ping expectation (optional)
expected_tcp_ports: New TCP ports expectation (optional)
expected_udp_ports: New UDP ports expectation (optional)
Returns:
JSON response with updated IP data
"""
try:
data = request.get_json() or {}
expected_ping = data.get('expected_ping')
expected_tcp_ports = data.get('expected_tcp_ports')
expected_udp_ports = data.get('expected_udp_ports')
# Update IP settings
site_service = SiteService(current_app.db_session)
ip_data = site_service.update_ip_settings(
site_id=site_id,
ip_id=ip_id,
expected_ping=expected_ping,
expected_tcp_ports=expected_tcp_ports,
expected_udp_ports=expected_udp_ports
)
logger.info(f"Updated IP {ip_id} in site {site_id}")
return jsonify(ip_data)
except ValueError as e:
logger.warning(f"Invalid IP update request: {str(e)}")
return jsonify({
'error': 'Invalid request',
'message': str(e)
}), 400
except SQLAlchemyError as e:
logger.error(f"Database error updating IP {ip_id} in site {site_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to update IP'
}), 500
except Exception as e:
logger.error(f"Unexpected error updating IP {ip_id} in site {site_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:site_id>/ips/<int:ip_id>', methods=['DELETE'])
@api_auth_required
def remove_ip(site_id, ip_id):
"""
Remove an IP from a site.
Args:
site_id: Site ID
ip_id: IP ID
Returns:
JSON response with success message
"""
try:
site_service = SiteService(current_app.db_session)
site_service.remove_ip(site_id, ip_id)
logger.info(f"Removed IP {ip_id} from site {site_id}")
return jsonify({
'message': f'IP {ip_id} removed successfully'
})
except ValueError as e:
logger.warning(f"Cannot remove IP {ip_id}: {str(e)}")
return jsonify({
'error': 'Invalid request',
'message': str(e)
}), 400
except SQLAlchemyError as e:
logger.error(f"Database error removing IP {ip_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to remove IP'
}), 500
except Exception as e:
logger.error(f"Unexpected error removing IP {ip_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:site_id>/usage', methods=['GET'])
@api_auth_required
def get_site_usage(site_id):
"""
Get list of scans that use this site.
Args:
site_id: Site ID
Returns:
JSON response with list of scans
"""
try:
site_service = SiteService(current_app.db_session)
# First check if site exists
site = site_service.get_site(site_id)
if not site:
logger.warning(f"Site not found: {site_id}")
return jsonify({
'error': 'Not found',
'message': f'Site with ID {site_id} not found'
}), 404
scans = site_service.get_scan_usage(site_id)
logger.info(f"Retrieved usage for site {site_id} (count={len(scans)})")
return jsonify({
'site_id': site_id,
'site_name': site['name'],
'scans': scans,
'count': len(scans)
})
except SQLAlchemyError as e:
logger.error(f"Database error retrieving site usage {site_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to retrieve site usage'
}), 500
except Exception as e:
logger.error(f"Unexpected error retrieving site usage {site_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
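A typical sites workflow: create an empty site, bulk-seed it from a CIDR with shared expectations, then tighten individual IPs. A sketch under the same mount-point and auth assumptions; the exact response field names are inferred from the route signatures above, not confirmed by this diff:

import requests

BASE = 'http://localhost:5000/api/sites'  # assumed mount point
HEADERS = {'X-API-Key': 'changeme'}       # assumed auth scheme

site = requests.post(BASE, headers=HEADERS, json={'name': 'Production DC'}).json()

# Seed the site from a /29 with shared expectations...
requests.post(f"{BASE}/{site['id']}/ips/bulk", headers=HEADERS, json={
    'source_type': 'cidr',
    'cidr': '10.0.0.0/29',
    'expected_ping': True,
    'expected_tcp_ports': [22, 443],
})

# ...then override a single IP that also exposes DNS
first_ip = requests.get(f"{BASE}/{site['id']}/ips", headers=HEADERS).json()['ips'][0]
requests.put(f"{BASE}/{site['id']}/ips/{first_ip['id']}", headers=HEADERS,
             json={'expected_udp_ports': [53]})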


@@ -198,12 +198,12 @@ def scan_history(scan_id):
if not reference_scan:
return jsonify({'error': 'Scan not found'}), 404
- config_file = reference_scan.config_file
+ config_id = reference_scan.config_id
- # Query historical scans with the same config file
+ # Query historical scans with the same config_id
historical_scans = (
db_session.query(Scan)
- .filter(Scan.config_file == config_file)
+ .filter(Scan.config_id == config_id)
.filter(Scan.status == 'completed')
.order_by(Scan.timestamp.desc())
.limit(limit)
@@ -247,7 +247,7 @@ def scan_history(scan_id):
'scans': scans_data,
'labels': labels,
'port_counts': port_counts,
- 'config_file': config_file
+ 'config_id': config_id
}), 200
except SQLAlchemyError as e:

app/web/api/webhooks.py Normal file

@@ -0,0 +1,677 @@
"""
Webhooks API blueprint.
Handles endpoints for managing webhook configurations and viewing delivery logs.
"""
import json
from datetime import datetime, timezone
from flask import Blueprint, jsonify, request, current_app
from web.auth.decorators import api_auth_required
from web.models import Webhook, WebhookDeliveryLog, Alert
from web.services.webhook_service import WebhookService
from web.services.template_service import get_template_service
bp = Blueprint('webhooks_api', __name__)
@bp.route('', methods=['GET'])
@api_auth_required
def list_webhooks():
"""
List all webhooks with optional filtering.
Query params:
page: Page number (default: 1)
per_page: Items per page (default: 20)
enabled: Filter by enabled status (true/false)
Returns:
JSON response with webhooks list
"""
# Get query parameters
page = request.args.get('page', 1, type=int)
per_page = min(request.args.get('per_page', 20, type=int), 100) # Max 100 items
enabled = request.args.get('enabled')
# Build query
query = current_app.db_session.query(Webhook)
# Apply enabled filter
if enabled is not None:
enabled_bool = enabled.lower() == 'true'
query = query.filter(Webhook.enabled == enabled_bool)
# Order by name
query = query.order_by(Webhook.name)
# Paginate
total = query.count()
webhooks = query.offset((page - 1) * per_page).limit(per_page).all()
# Format response
webhooks_data = []
for webhook in webhooks:
# Parse JSON fields
alert_types = json.loads(webhook.alert_types) if webhook.alert_types else None
severity_filter = json.loads(webhook.severity_filter) if webhook.severity_filter else None
custom_headers = json.loads(webhook.custom_headers) if webhook.custom_headers else None
webhooks_data.append({
'id': webhook.id,
'name': webhook.name,
'url': webhook.url,
'enabled': webhook.enabled,
'auth_type': webhook.auth_type,
'auth_token': '***ENCRYPTED***' if webhook.auth_token else None, # Mask sensitive data
'custom_headers': custom_headers,
'alert_types': alert_types,
'severity_filter': severity_filter,
'timeout': webhook.timeout,
'retry_count': webhook.retry_count,
'created_at': webhook.created_at.isoformat() if webhook.created_at else None,
'updated_at': webhook.updated_at.isoformat() if webhook.updated_at else None
})
return jsonify({
'webhooks': webhooks_data,
'total': total,
'page': page,
'per_page': per_page,
'pages': (total + per_page - 1) // per_page
})
@bp.route('/<int:webhook_id>', methods=['GET'])
@api_auth_required
def get_webhook(webhook_id):
"""
Get a specific webhook by ID.
Args:
webhook_id: Webhook ID
Returns:
JSON response with webhook details
"""
webhook = current_app.db_session.query(Webhook).filter(Webhook.id == webhook_id).first()
if not webhook:
return jsonify({
'status': 'error',
'message': f'Webhook {webhook_id} not found'
}), 404
# Parse JSON fields
alert_types = json.loads(webhook.alert_types) if webhook.alert_types else None
severity_filter = json.loads(webhook.severity_filter) if webhook.severity_filter else None
custom_headers = json.loads(webhook.custom_headers) if webhook.custom_headers else None
return jsonify({
'webhook': {
'id': webhook.id,
'name': webhook.name,
'url': webhook.url,
'enabled': webhook.enabled,
'auth_type': webhook.auth_type,
'auth_token': '***ENCRYPTED***' if webhook.auth_token else None,
'custom_headers': custom_headers,
'alert_types': alert_types,
'severity_filter': severity_filter,
'timeout': webhook.timeout,
'retry_count': webhook.retry_count,
'created_at': webhook.created_at.isoformat() if webhook.created_at else None,
'updated_at': webhook.updated_at.isoformat() if webhook.updated_at else None
}
})
@bp.route('', methods=['POST'])
@api_auth_required
def create_webhook():
"""
Create a new webhook.
Request body:
name: Webhook name (required)
url: Webhook URL (required)
enabled: Whether webhook is enabled (default: true)
auth_type: Authentication type (none, bearer, basic, custom)
auth_token: Authentication token (encrypted on storage)
custom_headers: JSON object with custom headers
alert_types: Array of alert types to filter
severity_filter: Array of severities to filter
timeout: Request timeout in seconds (default: 10)
retry_count: Number of retry attempts (default: 3)
template: Jinja2 template for custom payload (optional)
template_format: Template format - 'json' or 'text' (default: json)
content_type_override: Custom Content-Type header (optional)
Returns:
JSON response with created webhook
"""
data = request.get_json() or {}
# Validate required fields
if not data.get('name'):
return jsonify({
'status': 'error',
'message': 'name is required'
}), 400
if not data.get('url'):
return jsonify({
'status': 'error',
'message': 'url is required'
}), 400
# Validate auth_type
valid_auth_types = ['none', 'bearer', 'basic', 'custom']
auth_type = data.get('auth_type', 'none')
if auth_type not in valid_auth_types:
return jsonify({
'status': 'error',
'message': f'Invalid auth_type. Must be one of: {", ".join(valid_auth_types)}'
}), 400
# Validate template_format
valid_template_formats = ['json', 'text']
template_format = data.get('template_format', 'json')
if template_format not in valid_template_formats:
return jsonify({
'status': 'error',
'message': f'Invalid template_format. Must be one of: {", ".join(valid_template_formats)}'
}), 400
# Validate template if provided
template = data.get('template')
if template:
template_service = get_template_service()
is_valid, error_msg = template_service.validate_template(template, template_format)
if not is_valid:
return jsonify({
'status': 'error',
'message': f'Invalid template: {error_msg}'
}), 400
try:
webhook_service = WebhookService(current_app.db_session)
# Encrypt auth_token if provided
auth_token = None
if data.get('auth_token'):
auth_token = webhook_service._encrypt_value(data['auth_token'])
# Serialize JSON fields
alert_types = json.dumps(data['alert_types']) if data.get('alert_types') else None
severity_filter = json.dumps(data['severity_filter']) if data.get('severity_filter') else None
custom_headers = json.dumps(data['custom_headers']) if data.get('custom_headers') else None
# Create webhook
webhook = Webhook(
name=data['name'],
url=data['url'],
enabled=data.get('enabled', True),
auth_type=auth_type,
auth_token=auth_token,
custom_headers=custom_headers,
alert_types=alert_types,
severity_filter=severity_filter,
timeout=data.get('timeout', 10),
retry_count=data.get('retry_count', 3),
template=template,
template_format=template_format,
content_type_override=data.get('content_type_override'),
created_at=datetime.now(timezone.utc),
updated_at=datetime.now(timezone.utc)
)
current_app.db_session.add(webhook)
current_app.db_session.commit()
# Parse for response
alert_types_parsed = json.loads(alert_types) if alert_types else None
severity_filter_parsed = json.loads(severity_filter) if severity_filter else None
custom_headers_parsed = json.loads(custom_headers) if custom_headers else None
return jsonify({
'status': 'success',
'message': 'Webhook created successfully',
'webhook': {
'id': webhook.id,
'name': webhook.name,
'url': webhook.url,
'enabled': webhook.enabled,
'auth_type': webhook.auth_type,
'alert_types': alert_types_parsed,
'severity_filter': severity_filter_parsed,
'custom_headers': custom_headers_parsed,
'timeout': webhook.timeout,
'retry_count': webhook.retry_count,
'template': webhook.template,
'template_format': webhook.template_format,
'content_type_override': webhook.content_type_override,
'created_at': webhook.created_at.isoformat()
}
}), 201
except Exception as e:
current_app.db_session.rollback()
return jsonify({
'status': 'error',
'message': f'Failed to create webhook: {str(e)}'
}), 500
@bp.route('/<int:webhook_id>', methods=['PUT'])
@api_auth_required
def update_webhook(webhook_id):
"""
Update an existing webhook.
Args:
webhook_id: Webhook ID
Request body (all optional):
name: Webhook name
url: Webhook URL
enabled: Whether webhook is enabled
auth_type: Authentication type
auth_token: Authentication token
custom_headers: JSON object with custom headers
alert_types: Array of alert types
severity_filter: Array of severities
timeout: Request timeout
retry_count: Retry attempts
template: Jinja2 template for custom payload
template_format: Template format - 'json' or 'text'
content_type_override: Custom Content-Type header
Returns:
JSON response with update status
"""
webhook = current_app.db_session.query(Webhook).filter(Webhook.id == webhook_id).first()
if not webhook:
return jsonify({
'status': 'error',
'message': f'Webhook {webhook_id} not found'
}), 404
data = request.get_json() or {}
# Validate auth_type if provided
if 'auth_type' in data:
valid_auth_types = ['none', 'bearer', 'basic', 'custom']
if data['auth_type'] not in valid_auth_types:
return jsonify({
'status': 'error',
'message': f'Invalid auth_type. Must be one of: {", ".join(valid_auth_types)}'
}), 400
# Validate template_format if provided
if 'template_format' in data:
valid_template_formats = ['json', 'text']
if data['template_format'] not in valid_template_formats:
return jsonify({
'status': 'error',
'message': f'Invalid template_format. Must be one of: {", ".join(valid_template_formats)}'
}), 400
# Validate template if provided
if 'template' in data and data['template']:
template_format = data.get('template_format', webhook.template_format or 'json')
template_service = get_template_service()
is_valid, error_msg = template_service.validate_template(data['template'], template_format)
if not is_valid:
return jsonify({
'status': 'error',
'message': f'Invalid template: {error_msg}'
}), 400
try:
webhook_service = WebhookService(current_app.db_session)
# Update fields if provided
if 'name' in data:
webhook.name = data['name']
if 'url' in data:
webhook.url = data['url']
if 'enabled' in data:
webhook.enabled = data['enabled']
if 'auth_type' in data:
webhook.auth_type = data['auth_type']
if 'auth_token' in data:
# Encrypt new token
webhook.auth_token = webhook_service._encrypt_value(data['auth_token'])
if 'custom_headers' in data:
webhook.custom_headers = json.dumps(data['custom_headers']) if data['custom_headers'] else None
if 'alert_types' in data:
webhook.alert_types = json.dumps(data['alert_types']) if data['alert_types'] else None
if 'severity_filter' in data:
webhook.severity_filter = json.dumps(data['severity_filter']) if data['severity_filter'] else None
if 'timeout' in data:
webhook.timeout = data['timeout']
if 'retry_count' in data:
webhook.retry_count = data['retry_count']
if 'template' in data:
webhook.template = data['template']
if 'template_format' in data:
webhook.template_format = data['template_format']
if 'content_type_override' in data:
webhook.content_type_override = data['content_type_override']
webhook.updated_at = datetime.now(timezone.utc)
current_app.db_session.commit()
# Parse for response
alert_types = json.loads(webhook.alert_types) if webhook.alert_types else None
severity_filter = json.loads(webhook.severity_filter) if webhook.severity_filter else None
custom_headers = json.loads(webhook.custom_headers) if webhook.custom_headers else None
return jsonify({
'status': 'success',
'message': 'Webhook updated successfully',
'webhook': {
'id': webhook.id,
'name': webhook.name,
'url': webhook.url,
'enabled': webhook.enabled,
'auth_type': webhook.auth_type,
'alert_types': alert_types,
'severity_filter': severity_filter,
'custom_headers': custom_headers,
'timeout': webhook.timeout,
'retry_count': webhook.retry_count,
'template': webhook.template,
'template_format': webhook.template_format,
'content_type_override': webhook.content_type_override,
'updated_at': webhook.updated_at.isoformat()
}
})
except Exception as e:
current_app.db_session.rollback()
return jsonify({
'status': 'error',
'message': f'Failed to update webhook: {str(e)}'
}), 500
@bp.route('/<int:webhook_id>', methods=['DELETE'])
@api_auth_required
def delete_webhook(webhook_id):
"""
Delete a webhook.
Args:
webhook_id: Webhook ID
Returns:
JSON response with deletion status
"""
webhook = current_app.db_session.query(Webhook).filter(Webhook.id == webhook_id).first()
if not webhook:
return jsonify({
'status': 'error',
'message': f'Webhook {webhook_id} not found'
}), 404
try:
# Delete webhook (delivery logs will be cascade deleted)
current_app.db_session.delete(webhook)
current_app.db_session.commit()
return jsonify({
'status': 'success',
'message': f'Webhook {webhook_id} deleted successfully'
})
except Exception as e:
current_app.db_session.rollback()
return jsonify({
'status': 'error',
'message': f'Failed to delete webhook: {str(e)}'
}), 500
@bp.route('/<int:webhook_id>/test', methods=['POST'])
@api_auth_required
def test_webhook(webhook_id):
"""
Send a test payload to a webhook.
Args:
webhook_id: Webhook ID
Returns:
JSON response with test result
"""
webhook = current_app.db_session.query(Webhook).filter(Webhook.id == webhook_id).first()
if not webhook:
return jsonify({
'status': 'error',
'message': f'Webhook {webhook_id} not found'
}), 404
# Test webhook delivery
webhook_service = WebhookService(current_app.db_session)
result = webhook_service.test_webhook(webhook_id)
return jsonify({
'status': 'success' if result['success'] else 'error',
'message': result['message'],
'status_code': result['status_code'],
'response_body': result.get('response_body')
})
@bp.route('/<int:webhook_id>/logs', methods=['GET'])
@api_auth_required
def get_webhook_logs(webhook_id):
"""
Get delivery logs for a specific webhook.
Args:
webhook_id: Webhook ID
Query params:
page: Page number (default: 1)
per_page: Items per page (default: 20)
status: Filter by status (success/failed)
Returns:
JSON response with delivery logs
"""
webhook = current_app.db_session.query(Webhook).filter(Webhook.id == webhook_id).first()
if not webhook:
return jsonify({
'status': 'error',
'message': f'Webhook {webhook_id} not found'
}), 404
# Get query parameters
page = request.args.get('page', 1, type=int)
per_page = min(request.args.get('per_page', 20, type=int), 100)
status_filter = request.args.get('status')
# Build query
query = current_app.db_session.query(WebhookDeliveryLog).filter(
WebhookDeliveryLog.webhook_id == webhook_id
)
# Apply status filter
if status_filter:
query = query.filter(WebhookDeliveryLog.status == status_filter)
# Order by most recent first
query = query.order_by(WebhookDeliveryLog.delivered_at.desc())
# Paginate
total = query.count()
logs = query.offset((page - 1) * per_page).limit(per_page).all()
# Format response
logs_data = []
for log in logs:
# Get alert info
alert = current_app.db_session.query(Alert).filter(Alert.id == log.alert_id).first()
logs_data.append({
'id': log.id,
'alert_id': log.alert_id,
'alert_type': alert.alert_type if alert else None,
'alert_message': alert.message if alert else None,
'status': log.status,
'response_code': log.response_code,
'response_body': log.response_body,
'error_message': log.error_message,
'attempt_number': log.attempt_number,
'delivered_at': log.delivered_at.isoformat() if log.delivered_at else None
})
return jsonify({
'webhook_id': webhook_id,
'webhook_name': webhook.name,
'logs': logs_data,
'total': total,
'page': page,
'per_page': per_page,
'pages': (total + per_page - 1) // per_page
})
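A companion sketch for paging through delivery logs, reusing the same assumed base URL and auth header; the query parameters match the ones documented above:
import requests
resp = requests.get(
    'http://localhost:5000/api/webhooks/3/logs',
    params={'page': 1, 'per_page': 50, 'status': 'failed'},
    headers={'X-API-Key': 'changeme'},
)
data = resp.json()
print(f"{data['total']} entries across {data['pages']} page(s)")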
@bp.route('/preview-template', methods=['POST'])
@api_auth_required
def preview_template():
"""
Preview a webhook template with sample data.
Request body:
template: Jinja2 template string (required)
template_format: Template format - 'json' or 'text' (default: json)
Returns:
JSON response with rendered template preview
"""
data = request.get_json() or {}
if not data.get('template'):
return jsonify({
'status': 'error',
'message': 'template is required'
}), 400
template = data['template']
template_format = data.get('template_format', 'json')
# Validate template format
if template_format not in ['json', 'text']:
return jsonify({
'status': 'error',
'message': 'Invalid template_format. Must be json or text'
}), 400
try:
template_service = get_template_service()
# Validate template
is_valid, error_msg = template_service.validate_template(template, template_format)
if not is_valid:
return jsonify({
'status': 'error',
'message': f'Template validation error: {error_msg}'
}), 400
# Render with sample data
rendered, error = template_service.render_test_payload(template, template_format)
if error:
return jsonify({
'status': 'error',
'message': f'Template rendering error: {error}'
}), 400
return jsonify({
'status': 'success',
'rendered': rendered,
'format': template_format
})
except Exception as e:
return jsonify({
'status': 'error',
'message': f'Failed to preview template: {str(e)}'
}), 500
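A sketch of previewing a payload template before saving it (same host and auth assumptions; the alert.message variable is an assumption about what the sample-data context exposes):
import requests
resp = requests.post(
    'http://localhost:5000/api/webhooks/preview-template',
    json={
        'template': '{"text": "{{ alert.message }}"}',
        'template_format': 'json',
    },
    headers={'X-API-Key': 'changeme'},
)
print(resp.json())  # {'status': 'success', 'rendered': ..., 'format': 'json'} on success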
@bp.route('/template-presets', methods=['GET'])
@api_auth_required
def get_template_presets():
"""
Get list of available webhook template presets.
Returns:
JSON response with template presets
"""
import os
try:
# Load presets manifest
presets_file = os.path.join(
os.path.dirname(__file__),
'../templates/webhook_presets/presets.json'
)
with open(presets_file, 'r') as f:
presets_manifest = json.load(f)
# Load template contents for each preset
presets_dir = os.path.join(
os.path.dirname(__file__),
'../templates/webhook_presets'
)
for preset in presets_manifest:
template_file = os.path.join(presets_dir, preset['file'])
with open(template_file, 'r') as f:
preset['template'] = f.read()
# Remove file reference from response
del preset['file']
return jsonify({
'status': 'success',
'presets': presets_manifest
})
except FileNotFoundError as e:
return jsonify({
'status': 'error',
'message': f'Template presets not found: {str(e)}'
}), 500
except Exception as e:
return jsonify({
'status': 'error',
'message': f'Failed to load template presets: {str(e)}'
}), 500
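For reference, presets.json is a JSON array in which each entry must carry a 'file' key resolved relative to the webhook_presets directory; the endpoint inlines that file's contents as 'template' and strips 'file' before responding. A plausible entry, where every field other than 'file' is an assumption about the manifest schema:
[
  {
    "name": "Slack",
    "description": "Slack-compatible message payload",
    "file": "slack.json.j2"
  }
]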
# Health check endpoint
@bp.route('/health', methods=['GET'])
def health_check():
"""
Health check endpoint for monitoring.
Returns:
JSON response with API health status
"""
return jsonify({
'status': 'healthy',
'api': 'webhooks',
'version': '1.0.0-phase5'
})
@@ -61,7 +61,7 @@ def create_app(config: dict = None) -> Flask:
SQLALCHEMY_DATABASE_URI=os.environ.get('DATABASE_URL', 'sqlite:///./sneakyscanner.db'),
SQLALCHEMY_TRACK_MODIFICATIONS=False,
JSON_SORT_KEYS=False, # Preserve order in JSON responses
MAX_CONTENT_LENGTH=50 * 1024 * 1024, # 50MB max upload size
MAX_CONTENT_LENGTH=50 * 1024 * 1024, # 50MB max upload size (supports config files up to ~2MB)
)
# Override with custom config if provided
@@ -95,6 +95,9 @@ def create_app(config: dict = None) -> Flask:
# Register error handlers
register_error_handlers(app)
# Register context processors
register_context_processors(app)
# Add request/response handlers
register_request_handlers(app)
@@ -304,9 +307,12 @@ def init_scheduler(app: Flask) -> None:
with app.app_context():
# Clean up any orphaned scans from previous crashes/restarts
scan_service = ScanService(app.db_session)
orphaned_count = scan_service.cleanup_orphaned_scans()
if orphaned_count > 0:
app.logger.warning(f"Cleaned up {orphaned_count} orphaned scan(s) on startup")
cleanup_result = scan_service.cleanup_orphaned_scans()
if cleanup_result['total'] > 0:
app.logger.warning(
f"Cleaned up {cleanup_result['total']} orphaned scan(s) on startup: "
f"{cleanup_result['recovered']} recovered, {cleanup_result['failed']} failed"
)
# Load all enabled schedules from database
scheduler.load_schedules_on_startup()
@@ -328,10 +334,14 @@ def register_blueprints(app: Flask) -> None:
from web.api.scans import bp as scans_bp
from web.api.schedules import bp as schedules_bp
from web.api.alerts import bp as alerts_bp
from web.api.webhooks import bp as webhooks_api_bp
from web.api.settings import bp as settings_bp
from web.api.stats import bp as stats_bp
from web.api.configs import bp as configs_bp
from web.api.sites import bp as sites_bp
from web.auth.routes import bp as auth_bp
from web.routes.main import bp as main_bp
from web.routes.webhooks import bp as webhooks_bp
# Register authentication blueprint
app.register_blueprint(auth_bp, url_prefix='/auth')
@@ -339,12 +349,18 @@ def register_blueprints(app: Flask) -> None:
# Register main web routes blueprint
app.register_blueprint(main_bp, url_prefix='/')
# Register webhooks web routes blueprint
app.register_blueprint(webhooks_bp, url_prefix='/webhooks')
# Register API blueprints
app.register_blueprint(scans_bp, url_prefix='/api/scans')
app.register_blueprint(schedules_bp, url_prefix='/api/schedules')
app.register_blueprint(alerts_bp, url_prefix='/api/alerts')
app.register_blueprint(webhooks_api_bp, url_prefix='/api/webhooks')
app.register_blueprint(settings_bp, url_prefix='/api/settings')
app.register_blueprint(stats_bp, url_prefix='/api/stats')
app.register_blueprint(configs_bp, url_prefix='/api/configs')
app.register_blueprint(sites_bp, url_prefix='/api/sites')
app.logger.info("Blueprints registered")
@@ -485,6 +501,35 @@ def register_error_handlers(app: Flask) -> None:
return render_template('errors/500.html', error=error), 500
def register_context_processors(app: Flask) -> None:
"""
Register template context processors.
Makes common variables available to all templates without having to
pass them explicitly in every render_template call.
Args:
app: Flask application instance
"""
@app.context_processor
def inject_app_settings():
"""
Inject application metadata into all templates.
Returns:
Dictionary of variables to add to template context
"""
from web.config import APP_NAME, APP_VERSION, REPO_URL
return {
'app_name': APP_NAME,
'app_version': APP_VERSION,
'repo_url': REPO_URL
}
app.logger.info("Context processors registered")
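With the processor registered, any template can reference the injected values directly instead of each view passing them, e.g. in a shared footer:
{{ app_name }} {{ app_version }} ({{ repo_url }})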
def register_request_handlers(app: Flask) -> None:
"""
Register request and response handlers.
app/web/config.py Normal file
@@ -0,0 +1,16 @@
"""
Application configuration and metadata.
Contains version information and other application-level constants
that are managed by developers, not stored in the database.
"""
# Application metadata
APP_NAME = 'SneakyScanner'
APP_VERSION = '1.0.0-beta'
# Repository URL
REPO_URL = 'https://git.sneakygeek.net/sneakygeek/SneakyScan'
# Scanner settings
NMAP_HOST_TIMEOUT = '2m' # Timeout per host for nmap service detection
app/web/jobs/scan_job.py Normal file
@@ -0,0 +1,381 @@
"""
Background scan job execution.
This module handles the execution of scans in background threads,
updating database status and handling errors.
"""
import json
import logging
import threading
import traceback
from datetime import datetime
from pathlib import Path
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from src.scanner import SneakyScanner, ScanCancelledError
from web.models import Scan, ScanProgress
from web.services.scan_service import ScanService
from web.services.alert_service import AlertService
logger = logging.getLogger(__name__)
# Registry for tracking running scanners (scan_id -> SneakyScanner instance)
_running_scanners = {}
_running_scanners_lock = threading.Lock()
def get_running_scanner(scan_id: int):
"""Get a running scanner instance by scan ID."""
with _running_scanners_lock:
return _running_scanners.get(scan_id)
def stop_scan(scan_id: int, db_url: str) -> bool:
"""
Stop a running scan.
Args:
scan_id: ID of the scan to stop
db_url: Database connection URL
Returns:
True if scan was cancelled, False if not found or already stopped
"""
logger.info(f"Attempting to stop scan {scan_id}")
# Get the scanner instance
scanner = get_running_scanner(scan_id)
if not scanner:
logger.warning(f"Scanner for scan {scan_id} not found in registry")
return False
# Cancel the scanner
scanner.cancel()
logger.info(f"Cancellation signal sent to scan {scan_id}")
return True
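Note that cancellation is cooperative: cancel() can only set a flag, and the scanner is expected to raise ScanCancelledError at its next safe checkpoint (execute_scan below catches that exception). A sketch of the contract, with internal attribute names assumed since only cancel() and ScanCancelledError appear in this diff:
from src.scanner import ScanCancelledError
class CooperativeScannerSketch:
    def __init__(self):
        self._cancelled = False  # assumed internal flag
    def cancel(self):
        self._cancelled = True
    def _checkpoint(self):
        # called between scan phases (ping, tcp_scan, service_detection, ...)
        if self._cancelled:
            raise ScanCancelledError()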
def create_progress_callback(scan_id: int, session):
"""
Create a progress callback function for updating scan progress in database.
Args:
scan_id: ID of the scan record
session: Database session
Returns:
Callback function that accepts (phase, ip, data)
"""
ip_to_site = {}
def progress_callback(phase: str, ip: str, data: dict):
"""Update scan progress in database."""
nonlocal ip_to_site
try:
# Get scan record
scan = session.query(Scan).filter_by(id=scan_id).first()
if not scan:
return
# Handle initialization phase
if phase == 'init':
scan.total_ips = data.get('total_ips', 0)
scan.completed_ips = 0
scan.current_phase = 'ping'
ip_to_site = data.get('ip_to_site', {})
# Create progress entries for all IPs
for ip_addr, site_name in ip_to_site.items():
progress = ScanProgress(
scan_id=scan_id,
ip_address=ip_addr,
site_name=site_name,
phase='pending',
status='pending'
)
session.add(progress)
session.commit()
return
# Update current phase
if data.get('status') == 'starting':
scan.current_phase = phase
scan.completed_ips = 0
session.commit()
return
# Handle phase completion with results
if data.get('status') == 'completed':
results = data.get('results', {})
if phase == 'ping':
# Update progress entries with ping results
for ip_addr, ping_result in results.items():
progress = session.query(ScanProgress).filter_by(
scan_id=scan_id, ip_address=ip_addr
).first()
if progress:
progress.ping_result = ping_result
progress.phase = 'ping'
progress.status = 'completed'
scan.completed_ips = len(results)
elif phase == 'tcp_scan':
# Update progress entries with TCP/UDP port results
for ip_addr, port_data in results.items():
progress = session.query(ScanProgress).filter_by(
scan_id=scan_id, ip_address=ip_addr
).first()
if progress:
progress.tcp_ports = json.dumps(port_data.get('tcp_ports', []))
progress.udp_ports = json.dumps(port_data.get('udp_ports', []))
progress.phase = 'tcp_scan'
progress.status = 'completed'
scan.completed_ips = len(results)
elif phase == 'service_detection':
# Update progress entries with service detection results
for ip_addr, services in results.items():
progress = session.query(ScanProgress).filter_by(
scan_id=scan_id, ip_address=ip_addr
).first()
if progress:
# Simplify service data for storage
service_list = []
for svc in services:
service_list.append({
'port': svc.get('port'),
'service': svc.get('service', 'unknown'),
'product': svc.get('product', ''),
'version': svc.get('version', '')
})
progress.services = json.dumps(service_list)
progress.phase = 'service_detection'
progress.status = 'completed'
scan.completed_ips = len(results)
elif phase == 'http_analysis':
# Mark HTTP analysis as complete
scan.current_phase = 'completed'
scan.completed_ips = scan.total_ips
session.commit()
except Exception as e:
logger.error(f"Progress callback error for scan {scan_id}: {str(e)}")
# Don't re-raise - we don't want to break the scan
session.rollback()
return progress_callback
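For reference, the callback contract implied by the branches above: one 'init' event carrying total_ips and ip_to_site, then per-phase 'starting' and 'completed' events whose results are keyed by IP. A call sequence a scanner would emit (values are illustrative):
progress_callback('init', '', {'total_ips': 2, 'ip_to_site': {'10.0.0.1': 'DMZ', '10.0.0.2': 'DMZ'}})
progress_callback('ping', '', {'status': 'starting'})
progress_callback('ping', '', {'status': 'completed', 'results': {'10.0.0.1': True, '10.0.0.2': False}})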
def execute_scan(scan_id: int, config_id: int, db_url: str = None):
"""
Execute a scan in the background.
This function is designed to run in a background thread via APScheduler.
It creates its own database session to avoid conflicts with the main
application thread.
Args:
scan_id: ID of the scan record in database
config_id: Database config ID
db_url: Database connection URL
Workflow:
1. Create new database session for this thread
2. Update scan status to 'running'
3. Execute scanner
4. Generate output files (JSON, HTML, ZIP)
5. Save results to database
6. Update status to 'completed' or 'failed'
"""
logger.info(f"Starting background scan execution: scan_id={scan_id}, config_id={config_id}")
# Create new database session for this thread
engine = create_engine(db_url, echo=False)
Session = sessionmaker(bind=engine)
session = Session()
try:
# Get scan record
scan = session.query(Scan).filter_by(id=scan_id).first()
if not scan:
logger.error(f"Scan {scan_id} not found in database")
return
# Update status to running (in case it wasn't already)
scan.status = 'running'
scan.started_at = datetime.utcnow()
session.commit()
logger.info(f"Scan {scan_id}: Initializing scanner with config_id={config_id}")
# Initialize scanner with database config
scanner = SneakyScanner(config_id=config_id)
# Register scanner in the running registry
with _running_scanners_lock:
_running_scanners[scan_id] = scanner
logger.debug(f"Scan {scan_id}: Registered in running scanners registry")
# Create progress callback
progress_callback = create_progress_callback(scan_id, session)
# Execute scan with progress tracking
logger.info(f"Scan {scan_id}: Running scanner...")
start_time = datetime.utcnow()
report, timestamp = scanner.scan(progress_callback=progress_callback)
end_time = datetime.utcnow()
scan_duration = (end_time - start_time).total_seconds()
logger.info(f"Scan {scan_id}: Scanner completed in {scan_duration:.2f} seconds")
# Transition to 'finalizing' status before output generation
try:
scan = session.query(Scan).filter_by(id=scan_id).first()
if scan:
scan.status = 'finalizing'
scan.current_phase = 'generating_outputs'
session.commit()
logger.info(f"Scan {scan_id}: Status changed to 'finalizing'")
except Exception as e:
logger.error(f"Scan {scan_id}: Failed to update status to finalizing: {e}")
session.rollback()
# Generate output files (JSON, HTML, ZIP) with error handling
output_paths = {}
output_generation_failed = False
try:
logger.info(f"Scan {scan_id}: Generating output files...")
output_paths = scanner.generate_outputs(report, timestamp)
except Exception as e:
output_generation_failed = True
logger.error(f"Scan {scan_id}: Output generation failed: {str(e)}")
logger.error(f"Scan {scan_id}: Traceback:\n{traceback.format_exc()}")
# Still mark scan as completed with warning since scan data is valid
try:
scan = session.query(Scan).filter_by(id=scan_id).first()
if scan:
scan.status = 'completed'
scan.error_message = f"Scan completed but output file generation failed: {str(e)}"
scan.completed_at = datetime.utcnow()
if scan.started_at:
scan.duration = (datetime.utcnow() - scan.started_at).total_seconds()
session.commit()
logger.info(f"Scan {scan_id}: Marked as completed with output generation warning")
except Exception as db_error:
logger.error(f"Scan {scan_id}: Failed to update status after output error: {db_error}")
# Save results to database (only if output generation succeeded)
if not output_generation_failed:
logger.info(f"Scan {scan_id}: Saving results to database...")
scan_service = ScanService(session)
scan_service._save_scan_to_db(report, scan_id, status='completed', output_paths=output_paths)
# Evaluate alert rules
logger.info(f"Scan {scan_id}: Evaluating alert rules...")
try:
alert_service = AlertService(session)
alerts_triggered = alert_service.evaluate_alert_rules(scan_id)
logger.info(f"Scan {scan_id}: {len(alerts_triggered)} alerts triggered")
except Exception as e:
# Don't fail the scan if alert evaluation fails
logger.error(f"Scan {scan_id}: Alert evaluation failed: {str(e)}")
logger.debug(f"Alert evaluation error details: {traceback.format_exc()}")
logger.info(f"Scan {scan_id}: Completed successfully")
except ScanCancelledError:
# Scan was cancelled by user
logger.info(f"Scan {scan_id}: Cancelled by user")
scan = session.query(Scan).filter_by(id=scan_id).first()
if scan:
scan.status = 'cancelled'
scan.error_message = 'Scan cancelled by user'
scan.completed_at = datetime.utcnow()
if scan.started_at:
scan.duration = (datetime.utcnow() - scan.started_at).total_seconds()
session.commit()
except FileNotFoundError as e:
# Config file not found
error_msg = f"Configuration file not found: {str(e)}"
logger.error(f"Scan {scan_id}: {error_msg}")
scan = session.query(Scan).filter_by(id=scan_id).first()
if scan:
scan.status = 'failed'
scan.error_message = error_msg
scan.completed_at = datetime.utcnow()
session.commit()
except Exception as e:
# Any other error during scan execution
error_msg = f"Scan execution failed: {str(e)}"
logger.error(f"Scan {scan_id}: {error_msg}")
logger.error(f"Scan {scan_id}: Traceback:\n{traceback.format_exc()}")
try:
scan = session.query(Scan).filter_by(id=scan_id).first()
if scan:
scan.status = 'failed'
scan.error_message = error_msg
scan.completed_at = datetime.utcnow()
session.commit()
except Exception as db_error:
logger.error(f"Scan {scan_id}: Failed to update error status in database: {str(db_error)}")
finally:
# Unregister scanner from registry
with _running_scanners_lock:
if scan_id in _running_scanners:
del _running_scanners[scan_id]
logger.debug(f"Scan {scan_id}: Unregistered from running scanners registry")
# Always close the session
session.close()
logger.info(f"Scan {scan_id}: Background job completed, session closed")
def get_scan_status_from_db(scan_id: int, db_url: str) -> dict:
"""
Helper function to get scan status directly from database.
Useful for monitoring background jobs without needing Flask app context.
Args:
scan_id: Scan ID to check
db_url: Database connection URL
Returns:
Dictionary with scan status information
"""
engine = create_engine(db_url, echo=False)
Session = sessionmaker(bind=engine)
session = Session()
try:
scan = session.query(Scan).filter_by(id=scan_id).first()
if not scan:
return None
return {
'scan_id': scan.id,
'status': scan.status,
'timestamp': scan.timestamp.isoformat() if scan.timestamp else None,
'duration': scan.duration,
'error_message': scan.error_message
}
finally:
session.close()
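A minimal usage sketch from outside the Flask app; the SQLite URL mirrors the default wired into create_app:
from web.jobs.scan_job import get_scan_status_from_db
status = get_scan_status_from_db(scan_id=42, db_url='sqlite:///./sneakyscanner.db')
if status:
    print(status['status'], status.get('error_message'))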
@@ -0,0 +1,59 @@
"""
Background webhook delivery job execution.
This module handles the execution of webhook deliveries in background threads,
updating delivery logs and handling errors.
"""
import logging
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from web.services.webhook_service import WebhookService
logger = logging.getLogger(__name__)
def execute_webhook_delivery(webhook_id: int, alert_id: int, db_url: str):
"""
Execute a webhook delivery in the background.
This function is designed to run in a background thread via APScheduler.
It creates its own database session to avoid conflicts with the main
application thread.
Args:
webhook_id: ID of the webhook to deliver
alert_id: ID of the alert to send
db_url: Database connection URL
Workflow:
1. Create new database session for this thread
2. Call WebhookService to deliver webhook
3. WebhookService handles retry logic and logging
4. Close session
"""
logger.info(f"Starting background webhook delivery: webhook_id={webhook_id}, alert_id={alert_id}")
# Create new database session for this thread
engine = create_engine(db_url, echo=False)
Session = sessionmaker(bind=engine)
session = Session()
try:
# Create webhook service and deliver
webhook_service = WebhookService(session)
success = webhook_service.deliver_webhook(webhook_id, alert_id)
if success:
logger.info(f"Webhook {webhook_id} delivered successfully for alert {alert_id}")
else:
logger.warning(f"Webhook {webhook_id} delivery failed for alert {alert_id}")
except Exception as e:
logger.error(f"Error during webhook delivery: {e}", exc_info=True)
finally:
session.close()
engine.dispose()
logger.info(f"Webhook delivery job completed: webhook_id={webhook_id}, alert_id={alert_id}")
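One plausible way for queue_webhook_delivery (used by AlertService below) to hand this function to APScheduler is a one-shot date job; the wrapper itself is not shown in this diff, so treat this as a sketch and the job-id convention as an assumption:
scheduler.add_job(                                # scheduler: assumed handle to the APScheduler instance
    execute_webhook_delivery,
    trigger='date',                               # run once, immediately
    args=[webhook.id, alert.id, db_url],
    id=f'webhook-{webhook.id}-alert-{alert.id}',  # assumed id convention
)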
@@ -45,8 +45,8 @@ class Scan(Base):
id = Column(Integer, primary_key=True, autoincrement=True)
timestamp = Column(DateTime, nullable=False, index=True, comment="Scan start time (UTC)")
duration = Column(Float, nullable=True, comment="Total scan duration in seconds")
status = Column(String(20), nullable=False, default='running', comment="running, completed, failed")
config_file = Column(Text, nullable=True, comment="Path to YAML config used")
status = Column(String(20), nullable=False, default='running', comment="running, finalizing, completed, failed, cancelled")
config_id = Column(Integer, ForeignKey('scan_configs.id'), nullable=True, index=True, comment="FK to scan_configs table")
title = Column(Text, nullable=True, comment="Scan title from config")
json_path = Column(Text, nullable=True, comment="Path to JSON report")
html_path = Column(Text, nullable=True, comment="Path to HTML report")
@@ -59,6 +59,11 @@ class Scan(Base):
completed_at = Column(DateTime, nullable=True, comment="Scan execution completion time")
error_message = Column(Text, nullable=True, comment="Error message if scan failed")
# Progress tracking fields
current_phase = Column(String(50), nullable=True, comment="Current scan phase: ping, tcp_scan, udp_scan, service_detection, http_analysis")
total_ips = Column(Integer, nullable=True, comment="Total number of IPs to scan")
completed_ips = Column(Integer, nullable=True, default=0, comment="Number of IPs completed in current phase")
# Relationships
sites = relationship('ScanSite', back_populates='scan', cascade='all, delete-orphan')
ips = relationship('ScanIP', back_populates='scan', cascade='all, delete-orphan')
@@ -68,6 +73,9 @@ class Scan(Base):
tls_versions = relationship('ScanTLSVersion', back_populates='scan', cascade='all, delete-orphan')
alerts = relationship('Alert', back_populates='scan', cascade='all, delete-orphan')
schedule = relationship('Schedule', back_populates='scans')
config = relationship('ScanConfig', back_populates='scans')
site_associations = relationship('ScanSiteAssociation', back_populates='scan', cascade='all, delete-orphan')
progress_entries = relationship('ScanProgress', back_populates='scan', cascade='all, delete-orphan')
def __repr__(self):
return f"<Scan(id={self.id}, title='{self.title}', status='{self.status}')>"
@@ -242,6 +250,185 @@ class ScanTLSVersion(Base):
return f"<ScanTLSVersion(id={self.id}, tls_version='{self.tls_version}', supported={self.supported})>"
class ScanProgress(Base):
"""
Real-time progress tracking for individual IPs during scan execution.
Stores intermediate results as they become available, allowing users to
see progress and results before the full scan completes.
"""
__tablename__ = 'scan_progress'
id = Column(Integer, primary_key=True, autoincrement=True)
scan_id = Column(Integer, ForeignKey('scans.id'), nullable=False, index=True)
ip_address = Column(String(45), nullable=False, comment="IP address being scanned")
site_name = Column(String(255), nullable=True, comment="Site name this IP belongs to")
phase = Column(String(50), nullable=False, comment="Phase: ping, tcp_scan, udp_scan, service_detection, http_analysis")
status = Column(String(20), nullable=False, default='pending', comment="pending, in_progress, completed, failed")
# Results data (stored as JSON)
ping_result = Column(Boolean, nullable=True, comment="Ping response result")
tcp_ports = Column(Text, nullable=True, comment="JSON array of discovered TCP ports")
udp_ports = Column(Text, nullable=True, comment="JSON array of discovered UDP ports")
services = Column(Text, nullable=True, comment="JSON array of detected services")
created_at = Column(DateTime, nullable=False, default=datetime.utcnow, comment="Entry creation time")
updated_at = Column(DateTime, nullable=False, default=datetime.utcnow, onupdate=datetime.utcnow, comment="Last update time")
# Relationships
scan = relationship('Scan', back_populates='progress_entries')
# Index for efficient lookups
__table_args__ = (
UniqueConstraint('scan_id', 'ip_address', name='uix_scan_progress_ip'),
)
def __repr__(self):
return f"<ScanProgress(id={self.id}, ip='{self.ip_address}', phase='{self.phase}', status='{self.status}')>"
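A UI poller can read per-IP progress with a plain query; a sketch assuming an open session and a known scan_id:
rows = session.query(ScanProgress).filter_by(scan_id=scan_id).order_by(ScanProgress.ip_address).all()
done = sum(1 for r in rows if r.status == 'completed')
print(f'{done}/{len(rows)} IPs completed')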
# ============================================================================
# Reusable Site Definition Tables
# ============================================================================
class Site(Base):
"""
Master site definition (reusable across scans).
Sites represent logical network segments (e.g., "Production DC", "DMZ",
"Branch Office") that can be reused across multiple scans. Each site
contains one or more CIDR ranges.
"""
__tablename__ = 'sites'
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(String(255), nullable=False, unique=True, index=True, comment="Unique site name")
description = Column(Text, nullable=True, comment="Site description")
created_at = Column(DateTime, nullable=False, default=datetime.utcnow, comment="Site creation time")
updated_at = Column(DateTime, nullable=False, default=datetime.utcnow, onupdate=datetime.utcnow, comment="Last modification time")
# Relationships
ips = relationship('SiteIP', back_populates='site', cascade='all, delete-orphan')
scan_associations = relationship('ScanSiteAssociation', back_populates='site')
config_associations = relationship('ScanConfigSite', back_populates='site')
def __repr__(self):
return f"<Site(id={self.id}, name='{self.name}')>"
class SiteIP(Base):
"""
Individual IP addresses with their own settings.
Each IP is directly associated with a site and has its own port and ping settings.
IPs are standalone entities - CIDRs are only used as a convenience for bulk creation.
"""
__tablename__ = 'site_ips'
id = Column(Integer, primary_key=True, autoincrement=True)
site_id = Column(Integer, ForeignKey('sites.id'), nullable=False, index=True, comment="FK to sites")
ip_address = Column(String(45), nullable=False, comment="IPv4 or IPv6 address")
expected_ping = Column(Boolean, nullable=True, comment="Expected ping response for this IP")
expected_tcp_ports = Column(Text, nullable=True, comment="JSON array of expected TCP ports")
expected_udp_ports = Column(Text, nullable=True, comment="JSON array of expected UDP ports")
created_at = Column(DateTime, nullable=False, default=datetime.utcnow, comment="IP creation time")
# Relationships
site = relationship('Site', back_populates='ips')
# Index for efficient IP lookups - prevent duplicate IPs within a site
__table_args__ = (
UniqueConstraint('site_id', 'ip_address', name='uix_site_ip_address'),
)
def __repr__(self):
return f"<SiteIP(id={self.id}, ip_address='{self.ip_address}')>"
class ScanSiteAssociation(Base):
"""
Many-to-many relationship between scans and sites.
Tracks which sites were included in which scans. This allows sites
to be reused across multiple scans.
"""
__tablename__ = 'scan_site_associations'
id = Column(Integer, primary_key=True, autoincrement=True)
scan_id = Column(Integer, ForeignKey('scans.id'), nullable=False, index=True)
site_id = Column(Integer, ForeignKey('sites.id'), nullable=False, index=True)
created_at = Column(DateTime, nullable=False, default=datetime.utcnow, comment="Association creation time")
# Relationships
scan = relationship('Scan', back_populates='site_associations')
site = relationship('Site', back_populates='scan_associations')
# Index to prevent duplicate associations
__table_args__ = (
UniqueConstraint('scan_id', 'site_id', name='uix_scan_site'),
)
def __repr__(self):
return f"<ScanSiteAssociation(scan_id={self.scan_id}, site_id={self.site_id})>"
# ============================================================================
# Scan Configuration Tables
# ============================================================================
class ScanConfig(Base):
"""
Scan configurations stored in database (replaces YAML files).
Stores reusable scan configurations that reference sites from the
sites table. Configs define what sites to scan together.
"""
__tablename__ = 'scan_configs'
id = Column(Integer, primary_key=True, autoincrement=True)
title = Column(String(255), nullable=False, comment="Configuration title")
description = Column(Text, nullable=True, comment="Configuration description")
created_at = Column(DateTime, nullable=False, default=datetime.utcnow, comment="Config creation time")
updated_at = Column(DateTime, nullable=False, default=datetime.utcnow, onupdate=datetime.utcnow, comment="Last modification time")
# Relationships
site_associations = relationship('ScanConfigSite', back_populates='config', cascade='all, delete-orphan')
scans = relationship('Scan', back_populates='config')
schedules = relationship('Schedule', back_populates='config')
def __repr__(self):
return f"<ScanConfig(id={self.id}, title='{self.title}')>"
class ScanConfigSite(Base):
"""
Many-to-many relationship between scan configs and sites.
Links scan configurations to the sites they should scan. A config
can reference multiple sites, and sites can be used in multiple configs.
"""
__tablename__ = 'scan_config_sites'
id = Column(Integer, primary_key=True, autoincrement=True)
config_id = Column(Integer, ForeignKey('scan_configs.id'), nullable=False, index=True)
site_id = Column(Integer, ForeignKey('sites.id'), nullable=False, index=True)
created_at = Column(DateTime, nullable=False, default=datetime.utcnow, comment="Association creation time")
# Relationships
config = relationship('ScanConfig', back_populates='site_associations')
site = relationship('Site', back_populates='config_associations')
# Index to prevent duplicate associations
__table_args__ = (
UniqueConstraint('config_id', 'site_id', name='uix_config_site'),
)
def __repr__(self):
return f"<ScanConfigSite(config_id={self.config_id}, site_id={self.site_id})>"
# ============================================================================
# Scheduling & Notifications Tables
# ============================================================================
@@ -258,7 +445,7 @@ class Schedule(Base):
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(String(255), nullable=False, comment="Schedule name (e.g., 'Daily prod scan')")
config_file = Column(Text, nullable=False, comment="Path to YAML config")
config_id = Column(Integer, ForeignKey('scan_configs.id'), nullable=True, index=True, comment="FK to scan_configs table")
cron_expression = Column(String(100), nullable=False, comment="Cron-like schedule (e.g., '0 2 * * *')")
enabled = Column(Boolean, nullable=False, default=True, comment="Is schedule active?")
last_run = Column(DateTime, nullable=True, comment="Last execution time")
@@ -268,6 +455,7 @@ class Schedule(Base):
# Relationships
scans = relationship('Scan', back_populates='schedule')
config = relationship('ScanConfig', back_populates='schedules')
def __repr__(self):
return f"<Schedule(id={self.id}, name='{self.name}', enabled={self.enabled})>"
@@ -284,17 +472,24 @@ class Alert(Base):
id = Column(Integer, primary_key=True, autoincrement=True)
scan_id = Column(Integer, ForeignKey('scans.id'), nullable=False, index=True)
alert_type = Column(String(50), nullable=False, comment="new_port, cert_expiry, service_change, ping_failed")
rule_id = Column(Integer, ForeignKey('alert_rules.id'), nullable=True, index=True, comment="Associated alert rule")
alert_type = Column(String(50), nullable=False, comment="unexpected_port, drift_detection, cert_expiry, service_change, ping_failed")
severity = Column(String(20), nullable=False, comment="info, warning, critical")
message = Column(Text, nullable=False, comment="Human-readable alert message")
ip_address = Column(String(45), nullable=True, comment="Related IP (optional)")
port = Column(Integer, nullable=True, comment="Related port (optional)")
email_sent = Column(Boolean, nullable=False, default=False, comment="Was email notification sent?")
email_sent_at = Column(DateTime, nullable=True, comment="Email send timestamp")
webhook_sent = Column(Boolean, nullable=False, default=False, comment="Was webhook sent?")
webhook_sent_at = Column(DateTime, nullable=True, comment="Webhook send timestamp")
acknowledged = Column(Boolean, nullable=False, default=False, index=True, comment="Was alert acknowledged?")
acknowledged_at = Column(DateTime, nullable=True, comment="Acknowledgment timestamp")
acknowledged_by = Column(String(255), nullable=True, comment="User who acknowledged")
created_at = Column(DateTime, nullable=False, default=datetime.utcnow, comment="Alert creation time")
# Relationships
scan = relationship('Scan', back_populates='alerts')
rule = relationship('AlertRule', back_populates='alerts')
# Index for alert queries by type and severity
__table_args__ = (
@@ -315,14 +510,83 @@ class AlertRule(Base):
__tablename__ = 'alert_rules'
id = Column(Integer, primary_key=True, autoincrement=True)
rule_type = Column(String(50), nullable=False, comment="unexpected_port, cert_expiry, service_down, etc.")
name = Column(String(255), nullable=True, comment="User-friendly rule name")
rule_type = Column(String(50), nullable=False, comment="unexpected_port, cert_expiry, service_down, drift_detection, etc.")
enabled = Column(Boolean, nullable=False, default=True, comment="Is rule active?")
threshold = Column(Integer, nullable=True, comment="Threshold value (e.g., days for cert expiry)")
email_enabled = Column(Boolean, nullable=False, default=False, comment="Send email for this rule?")
webhook_enabled = Column(Boolean, nullable=False, default=False, comment="Send webhook for this rule?")
severity = Column(String(20), nullable=True, comment="Alert severity: critical, warning, info")
filter_conditions = Column(Text, nullable=True, comment="JSON filter conditions for the rule")
config_id = Column(Integer, ForeignKey('scan_configs.id'), nullable=True, index=True, comment="Optional: specific config this rule applies to")
created_at = Column(DateTime, nullable=False, default=datetime.utcnow, comment="Rule creation time")
updated_at = Column(DateTime, nullable=True, comment="Last update time")
# Relationships
alerts = relationship("Alert", back_populates="rule", cascade="all, delete-orphan")
config = relationship("ScanConfig", backref="alert_rules")
def __repr__(self):
return f"<AlertRule(id={self.id}, rule_type='{self.rule_type}', enabled={self.enabled})>"
return f"<AlertRule(id={self.id}, name='{self.name}', rule_type='{self.rule_type}', enabled={self.enabled})>"
class Webhook(Base):
"""
Webhook configurations for alert notifications.
Stores webhook endpoints and authentication details for sending alert
notifications to external systems.
"""
__tablename__ = 'webhooks'
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(String(255), nullable=False, comment="Webhook name")
url = Column(Text, nullable=False, comment="Webhook URL")
enabled = Column(Boolean, nullable=False, default=True, comment="Is webhook enabled?")
auth_type = Column(String(20), nullable=True, comment="Authentication type: none, bearer, basic, custom")
auth_token = Column(Text, nullable=True, comment="Encrypted authentication token")
custom_headers = Column(Text, nullable=True, comment="JSON custom headers")
alert_types = Column(Text, nullable=True, comment="JSON array of alert types to trigger on")
severity_filter = Column(Text, nullable=True, comment="JSON array of severities to trigger on")
timeout = Column(Integer, nullable=True, default=10, comment="Request timeout in seconds")
retry_count = Column(Integer, nullable=True, default=3, comment="Number of retry attempts")
template = Column(Text, nullable=True, comment="Jinja2 template for webhook payload")
template_format = Column(String(20), nullable=True, default='json', comment="Template output format: json, text")
content_type_override = Column(String(100), nullable=True, comment="Optional custom Content-Type header")
created_at = Column(DateTime, nullable=False, default=datetime.utcnow, comment="Creation time")
updated_at = Column(DateTime, nullable=False, default=datetime.utcnow, comment="Last update time")
# Relationships
delivery_logs = relationship("WebhookDeliveryLog", back_populates="webhook", cascade="all, delete-orphan")
def __repr__(self):
return f"<Webhook(id={self.id}, name='{self.name}', enabled={self.enabled})>"
class WebhookDeliveryLog(Base):
"""
Webhook delivery tracking.
Logs all webhook delivery attempts for auditing and debugging purposes.
"""
__tablename__ = 'webhook_delivery_log'
id = Column(Integer, primary_key=True, autoincrement=True)
webhook_id = Column(Integer, ForeignKey('webhooks.id'), nullable=False, index=True, comment="Associated webhook")
alert_id = Column(Integer, ForeignKey('alerts.id'), nullable=False, index=True, comment="Associated alert")
status = Column(String(20), nullable=True, index=True, comment="Delivery status: success, failed, retrying")
response_code = Column(Integer, nullable=True, comment="HTTP response code")
response_body = Column(Text, nullable=True, comment="Response body from webhook")
error_message = Column(Text, nullable=True, comment="Error message if failed")
attempt_number = Column(Integer, nullable=True, comment="Which attempt this was")
delivered_at = Column(DateTime, nullable=False, default=datetime.utcnow, comment="Delivery timestamp")
# Relationships
webhook = relationship("Webhook", back_populates="delivery_logs")
alert = relationship("Alert")
def __repr__(self):
return f"<WebhookDeliveryLog(id={self.id}, webhook_id={self.webhook_id}, status='{self.status}')>"
# ============================================================================
app/web/routes/main.py Normal file
@@ -0,0 +1,288 @@
"""
Main web routes for SneakyScanner.
Provides dashboard and scan viewing pages.
"""
import logging
import os
from flask import Blueprint, current_app, redirect, render_template, request, send_from_directory, url_for
from web.auth.decorators import login_required
logger = logging.getLogger(__name__)
bp = Blueprint('main', __name__)
@bp.route('/')
def index():
"""
Root route - redirect to dashboard.
Returns:
Redirect to dashboard
"""
return redirect(url_for('main.dashboard'))
@bp.route('/dashboard')
@login_required
def dashboard():
"""
Dashboard page - shows recent scans and statistics.
Returns:
Rendered dashboard template
"""
return render_template('dashboard.html')
@bp.route('/scans')
@login_required
def scans():
"""
Scans list page - shows all scans with pagination.
Returns:
Rendered scans list template
"""
return render_template('scans.html')
@bp.route('/scans/<int:scan_id>')
@login_required
def scan_detail(scan_id):
"""
Scan detail page - shows full scan results.
Args:
scan_id: Scan ID to display
Returns:
Rendered scan detail template
"""
# TODO: Phase 5 - Implement scan detail page
return render_template('scan_detail.html', scan_id=scan_id)
@bp.route('/scans/<int:scan_id1>/compare/<int:scan_id2>')
@login_required
def compare_scans(scan_id1, scan_id2):
"""
Scan comparison page - shows differences between two scans.
Args:
scan_id1: First (older) scan ID
scan_id2: Second (newer) scan ID
Returns:
Rendered comparison template
"""
return render_template('scan_compare.html', scan_id1=scan_id1, scan_id2=scan_id2)
@bp.route('/search/ip')
@login_required
def search_ip():
"""
IP search results page - shows scans containing a specific IP address.
Returns:
Rendered search results template
"""
ip_address = request.args.get('ip', '').strip()
return render_template('ip_search_results.html', ip_address=ip_address)
@bp.route('/schedules')
@login_required
def schedules():
"""
Schedules list page - shows all scheduled scans.
Returns:
Rendered schedules list template
"""
return render_template('schedules.html')
@bp.route('/schedules/create')
@login_required
def create_schedule():
"""
Create new schedule form page.
Returns:
Rendered schedule create template with available configs
"""
from web.models import ScanConfig
# Get list of available configs from database
configs = []
try:
configs = current_app.db_session.query(ScanConfig).order_by(ScanConfig.title).all()
except Exception as e:
logger.error(f"Error listing configs: {e}")
return render_template('schedule_create.html', configs=configs)
@bp.route('/schedules/<int:schedule_id>/edit')
@login_required
def edit_schedule(schedule_id):
"""
Edit existing schedule form page.
Args:
schedule_id: Schedule ID to edit
Returns:
Rendered schedule edit template
"""
# Note: Schedule data is loaded via AJAX in the template
# This just renders the page with the schedule_id in the URL
return render_template('schedule_edit.html', schedule_id=schedule_id)
@bp.route('/sites')
@login_required
def sites():
"""
Sites management page - manage reusable site definitions.
Returns:
Rendered sites template
"""
return render_template('sites.html')
@bp.route('/configs')
@login_required
def configs():
"""
Scan configurations list page - shows all scan configs stored in the database.
Returns:
Rendered configs list template
"""
return render_template('configs.html')
@bp.route('/alerts')
@login_required
def alerts():
"""
Alerts history page - shows all alerts.
Returns:
Rendered alerts template
"""
from flask import request, current_app
from web.models import Alert, AlertRule, Scan
from web.utils.pagination import paginate
# Get query parameters for filtering
page = request.args.get('page', 1, type=int)
per_page = 20
severity = request.args.get('severity')
alert_type = request.args.get('alert_type')
acknowledged = request.args.get('acknowledged')
# Build query
query = current_app.db_session.query(Alert).join(Scan, isouter=True)
# Apply filters
if severity:
query = query.filter(Alert.severity == severity)
if alert_type:
query = query.filter(Alert.alert_type == alert_type)
if acknowledged is not None:
ack_bool = acknowledged == 'true'
query = query.filter(Alert.acknowledged == ack_bool)
# Order by severity and date
query = query.order_by(Alert.severity.desc(), Alert.created_at.desc())
# Paginate using utility function
pagination = paginate(query, page=page, per_page=per_page)
alerts = pagination.items
# Get unique alert types for filter dropdown
try:
alert_types = current_app.db_session.query(Alert.alert_type).distinct().all()
alert_types = [at[0] for at in alert_types] if alert_types else []
except Exception:
alert_types = []
return render_template(
'alerts.html',
alerts=alerts,
pagination=pagination,
current_severity=severity,
current_alert_type=alert_type,
current_acknowledged=acknowledged,
alert_types=alert_types
)
@bp.route('/alerts/rules')
@login_required
def alert_rules():
"""
Alert rules management page.
Returns:
Rendered alert rules template
"""
from flask import current_app
from web.models import AlertRule
# Get all alert rules with error handling
try:
rules = current_app.db_session.query(AlertRule).order_by(
AlertRule.name.nullslast(),
AlertRule.rule_type
).all()
except Exception as e:
logger.error(f"Error fetching alert rules: {e}")
rules = []
# Ensure rules is always a list
if rules is None:
rules = []
return render_template(
'alert_rules.html',
rules=rules
)
@bp.route('/help')
@login_required
def help():
"""
Help page - explains how to use the application.
Returns:
Rendered help template
"""
return render_template('help.html')
@bp.route('/output/<path:filename>')
@login_required
def serve_output_file(filename):
"""
Serve output files (JSON, HTML, ZIP) from the output directory.
Args:
filename: Name of the file to serve
Returns:
The requested file
"""
output_dir = os.environ.get('OUTPUT_DIR', '/app/output')
return send_from_directory(output_dir, filename)
@@ -0,0 +1,83 @@
"""
Webhook web routes for SneakyScanner.
Provides UI pages for managing webhooks and viewing delivery logs.
"""
import logging
from flask import Blueprint, render_template, request, redirect, url_for, flash, current_app
from web.auth.decorators import login_required
from web.models import Webhook
from web.services.webhook_service import WebhookService
logger = logging.getLogger(__name__)
bp = Blueprint('webhooks', __name__)
@bp.route('')
@login_required
def list_webhooks():
"""
Webhooks list page - shows all configured webhooks.
Returns:
Rendered webhooks list template
"""
return render_template('webhooks/list.html')
@bp.route('/new')
@login_required
def new_webhook():
"""
New webhook form page.
Returns:
Rendered webhook form template
"""
return render_template('webhooks/form.html', webhook=None, mode='create')
@bp.route('/<int:webhook_id>/edit')
@login_required
def edit_webhook(webhook_id):
"""
Edit webhook form page.
Args:
webhook_id: Webhook ID to edit
Returns:
Rendered webhook form template or 404
"""
webhook = current_app.db_session.query(Webhook).filter(Webhook.id == webhook_id).first()
if not webhook:
flash('Webhook not found', 'error')
return redirect(url_for('webhooks.list_webhooks'))
return render_template('webhooks/form.html', webhook=webhook, mode='edit')
@bp.route('/<int:webhook_id>/logs')
@login_required
def webhook_logs(webhook_id):
"""
Webhook delivery logs page.
Args:
webhook_id: Webhook ID
Returns:
Rendered webhook logs template or 404
"""
webhook = current_app.db_session.query(Webhook).filter(Webhook.id == webhook_id).first()
if not webhook:
flash('Webhook not found', 'error')
return redirect(url_for('webhooks.list_webhooks'))
return render_template('webhooks/logs.html', webhook=webhook)
@@ -0,0 +1,521 @@
"""
Alert Service Module
Handles alert evaluation, rule processing, and notification triggering
for SneakyScan Phase 5.
"""
import logging
from datetime import datetime, timezone
from typing import List, Dict, Optional, Any
from sqlalchemy.orm import Session
from ..models import (
Alert, AlertRule, Scan, ScanPort, ScanIP, ScanService as ScanServiceModel,
ScanCertificate, ScanTLSVersion
)
from .scan_service import ScanService
logger = logging.getLogger(__name__)
class AlertService:
"""
Service for evaluating alert rules and generating alerts based on scan results.
Supports two main alert types:
1. Unexpected Port Detection - Alerts when ports marked as unexpected are found open
2. Drift Detection - Alerts when scan results differ from previous scan
"""
def __init__(self, db_session: Session):
self.db = db_session
self.scan_service = ScanService(db_session)
def evaluate_alert_rules(self, scan_id: int) -> List[Alert]:
"""
Main entry point for alert evaluation after scan completion.
Args:
scan_id: ID of the completed scan to evaluate
Returns:
List of Alert objects that were created
"""
logger.info(f"Starting alert evaluation for scan {scan_id}")
# Get the scan
scan = self.db.query(Scan).filter(Scan.id == scan_id).first()
if not scan:
logger.error(f"Scan {scan_id} not found")
return []
# Get all enabled alert rules
rules = self.db.query(AlertRule).filter(AlertRule.enabled == True).all()
logger.info(f"Found {len(rules)} enabled alert rules to evaluate")
alerts_created = []
for rule in rules:
try:
# Check if rule applies to this scan's config
if rule.config_id and scan.config_id != rule.config_id:
logger.debug(f"Skipping rule {rule.id} - config mismatch")
continue
# Evaluate based on rule type
alert_data = []
if rule.rule_type == 'unexpected_port':
alert_data = self.check_unexpected_ports(scan, rule)
elif rule.rule_type == 'drift_detection':
alert_data = self.check_drift_from_previous(scan, rule)
elif rule.rule_type == 'cert_expiry':
alert_data = self.check_certificate_expiry(scan, rule)
elif rule.rule_type == 'weak_tls':
alert_data = self.check_weak_tls(scan, rule)
elif rule.rule_type == 'ping_failed':
alert_data = self.check_ping_failures(scan, rule)
else:
logger.warning(f"Unknown rule type: {rule.rule_type}")
continue
# Create alerts for any findings
for alert_info in alert_data:
alert = self.create_alert(scan_id, rule, alert_info)
if alert:
alerts_created.append(alert)
# Trigger notifications if configured
if rule.email_enabled or rule.webhook_enabled:
self.trigger_notifications(alert, rule)
logger.info(f"Rule {rule.name or rule.id} generated {len(alert_data)} alerts")
except Exception as e:
logger.error(f"Error evaluating rule {rule.id}: {str(e)}")
continue
logger.info(f"Alert evaluation complete. Created {len(alerts_created)} alerts")
return alerts_created
def check_unexpected_ports(self, scan: Scan, rule: AlertRule) -> List[Dict[str, Any]]:
"""
Detect ports that are open but not in the expected_ports list.
Args:
scan: The scan to check
rule: The alert rule configuration
Returns:
List of alert data dictionaries
"""
alerts_to_create = []
# Get all ports where expected=False
unexpected_ports = (
self.db.query(ScanPort, ScanIP)
.join(ScanIP, ScanPort.ip_id == ScanIP.id)
.filter(ScanPort.scan_id == scan.id)
.filter(ScanPort.expected == False) # Not in config's expected_ports
.filter(ScanPort.state == 'open')
.all()
)
# High-risk ports that should trigger critical alerts
high_risk_ports = {
22, # SSH
23, # Telnet
135, # Windows RPC
139, # NetBIOS
445, # SMB
1433, # SQL Server
3306, # MySQL
3389, # RDP
5432, # PostgreSQL
5900, # VNC
6379, # Redis
9200, # Elasticsearch
27017, # MongoDB
}
for port, ip in unexpected_ports:
# Determine severity based on port number
severity = rule.severity or ('critical' if port.port in high_risk_ports else 'warning')
# Get service info if available
service = (
self.db.query(ScanServiceModel)
.filter(ScanServiceModel.port_id == port.id)
.first()
)
service_info = ""
if service:
product = service.product or "Unknown"
version = service.version or ""
service_info = f" (Service: {service.service_name}: {product} {version}".rstrip() + ")"  # rstrip keeps the leading space
alerts_to_create.append({
'alert_type': 'unexpected_port',
'severity': severity,
'message': f"Unexpected port open on {ip.ip_address}:{port.port}/{port.protocol}{service_info}",
'ip_address': ip.ip_address,
'port': port.port
})
return alerts_to_create
def check_drift_from_previous(self, scan: Scan, rule: AlertRule) -> List[Dict[str, Any]]:
"""
Compare current scan to the last scan with the same config.
Args:
scan: The current scan
rule: The alert rule configuration
Returns:
List of alert data dictionaries
"""
alerts_to_create = []
# Find previous scan with same config_id
previous_scan = (
self.db.query(Scan)
.filter(Scan.config_id == scan.config_id)
.filter(Scan.id < scan.id)
.filter(Scan.status == 'completed')
.order_by(Scan.started_at.desc().nullslast())  # newest first; nullslast() handles scans without started_at
.first()
)
if not previous_scan:
logger.info(f"No previous scan found for config_id {scan.config_id}")
return []
try:
# Use existing comparison logic from scan_service
comparison = self.scan_service.compare_scans(previous_scan.id, scan.id)
# Alert on new ports
for port_data in comparison.get('ports', {}).get('added', []):
severity = rule.severity or 'warning'
alerts_to_create.append({
'alert_type': 'drift_new_port',
'severity': severity,
'message': f"New port detected: {port_data['ip']}:{port_data['port']}/{port_data['protocol']}",
'ip_address': port_data['ip'],
'port': port_data['port']
})
# Alert on removed ports
for port_data in comparison.get('ports', {}).get('removed', []):
severity = rule.severity or 'info'
alerts_to_create.append({
'alert_type': 'drift_missing_port',
'severity': severity,
'message': f"Port no longer open: {port_data['ip']}:{port_data['port']}/{port_data['protocol']}",
'ip_address': port_data['ip'],
'port': port_data['port']
})
# Alert on service changes
for svc_data in comparison.get('services', {}).get('changed', []):
old_svc = svc_data.get('old', {})
new_svc = svc_data.get('new', {})
old_desc = f"{old_svc.get('product', 'Unknown')} {old_svc.get('version', '')}".strip()
new_desc = f"{new_svc.get('product', 'Unknown')} {new_svc.get('version', '')}".strip()
severity = rule.severity or 'info'
alerts_to_create.append({
'alert_type': 'drift_service_change',
'severity': severity,
'message': f"Service changed on {svc_data['ip']}:{svc_data['port']}: {old_desc} → {new_desc}",
'ip_address': svc_data['ip'],
'port': svc_data['port']
})
# Alert on certificate changes
for cert_data in comparison.get('certificates', {}).get('changed', []):
old_cert = cert_data.get('old', {})
new_cert = cert_data.get('new', {})
severity = rule.severity or 'warning'
alerts_to_create.append({
'alert_type': 'drift_cert_change',
'severity': severity,
'message': f"Certificate changed on {cert_data['ip']}:{cert_data['port']} - "
f"Subject: {old_cert.get('subject', 'Unknown')} → {new_cert.get('subject', 'Unknown')}",
'ip_address': cert_data['ip'],
'port': cert_data['port']
})
# Check drift score threshold if configured
if rule.threshold and comparison.get('drift_score', 0) * 100 >= rule.threshold:
alerts_to_create.append({
'alert_type': 'drift_threshold_exceeded',
'severity': rule.severity or 'warning',
'message': f"Drift score {comparison['drift_score']*100:.1f}% exceeds threshold {rule.threshold}%",
'ip_address': None,
'port': None
})
except Exception as e:
logger.error(f"Error comparing scans: {str(e)}")
return alerts_to_create
def check_certificate_expiry(self, scan: Scan, rule: AlertRule) -> List[Dict[str, Any]]:
"""
Check for certificates expiring within the threshold days.
Args:
scan: The scan to check
rule: The alert rule configuration
Returns:
List of alert data dictionaries
"""
alerts_to_create = []
threshold_days = rule.threshold or 30 # Default 30 days
# Get all certificates from the scan
certificates = (
self.db.query(ScanCertificate, ScanPort, ScanIP)
.join(ScanServiceModel, ScanCertificate.service_id == ScanServiceModel.id)
.join(ScanPort, ScanServiceModel.port_id == ScanPort.id)
.join(ScanIP, ScanPort.ip_id == ScanIP.id)
.filter(ScanPort.scan_id == scan.id)
.all()
)
for cert, port, ip in certificates:
if cert.days_until_expiry is not None and cert.days_until_expiry <= threshold_days:
if cert.days_until_expiry <= 0:
severity = 'critical'
message = f"Certificate EXPIRED on {ip.ip_address}:{port.port}"
elif cert.days_until_expiry <= 7:
severity = 'critical'
message = f"Certificate expires in {cert.days_until_expiry} days on {ip.ip_address}:{port.port}"
elif cert.days_until_expiry <= 14:
severity = 'warning'
message = f"Certificate expires in {cert.days_until_expiry} days on {ip.ip_address}:{port.port}"
else:
severity = 'info'
message = f"Certificate expires in {cert.days_until_expiry} days on {ip.ip_address}:{port.port}"
alerts_to_create.append({
'alert_type': 'cert_expiry',
'severity': severity,
'message': message,
'ip_address': ip.ip_address,
'port': port.port
})
return alerts_to_create
def check_weak_tls(self, scan: Scan, rule: AlertRule) -> List[Dict[str, Any]]:
"""
Check for weak TLS versions (1.0, 1.1).
Args:
scan: The scan to check
rule: The alert rule configuration
Returns:
List of alert data dictionaries
"""
alerts_to_create = []
# Get all TLS version data from the scan
tls_versions = (
self.db.query(ScanTLSVersion, ScanPort, ScanIP)
.join(ScanCertificate, ScanTLSVersion.certificate_id == ScanCertificate.id)
.join(ScanServiceModel, ScanCertificate.service_id == ScanServiceModel.id)
.join(ScanPort, ScanServiceModel.port_id == ScanPort.id)
.join(ScanIP, ScanPort.ip_id == ScanIP.id)
.filter(ScanPort.scan_id == scan.id)
.all()
)
# Group TLS versions by port/IP to create one alert per host
tls_by_host = {}
for tls, port, ip in tls_versions:
# Only alert on weak TLS versions that are supported
if tls.supported and tls.tls_version in ['TLS 1.0', 'TLS 1.1']:
key = (ip.ip_address, port.port)
if key not in tls_by_host:
tls_by_host[key] = {'ip': ip.ip_address, 'port': port.port, 'versions': []}
tls_by_host[key]['versions'].append(tls.tls_version)
# Create alerts for hosts with weak TLS
for host_key, host_data in tls_by_host.items():
severity = rule.severity or 'warning'
alerts_to_create.append({
'alert_type': 'weak_tls',
'severity': severity,
'message': f"Weak TLS versions supported on {host_data['ip']}:{host_data['port']}: {', '.join(host_data['versions'])}",
'ip_address': host_data['ip'],
'port': host_data['port']
})
return alerts_to_create
def check_ping_failures(self, scan: Scan, rule: AlertRule) -> List[Dict[str, Any]]:
"""
Check for hosts that were expected to respond to ping but didn't.
Args:
scan: The scan to check
rule: The alert rule configuration
Returns:
List of alert data dictionaries
"""
alerts_to_create = []
# Get all IPs where ping was expected but failed
failed_pings = (
self.db.query(ScanIP)
.filter(ScanIP.scan_id == scan.id)
.filter(ScanIP.ping_expected == True)
.filter(ScanIP.ping_actual == False)
.all()
)
for ip in failed_pings:
severity = rule.severity or 'warning'
alerts_to_create.append({
'alert_type': 'ping_failed',
'severity': severity,
'message': f"Host {ip.ip_address} did not respond to ping (expected to be up)",
'ip_address': ip.ip_address,
'port': None
})
return alerts_to_create
def create_alert(self, scan_id: int, rule: AlertRule, alert_data: Dict[str, Any]) -> Optional[Alert]:
"""
Create an alert record in the database.
Args:
scan_id: ID of the scan that triggered the alert
rule: The alert rule that was triggered
alert_data: Dictionary with alert details
Returns:
Created Alert object or None if creation failed
"""
try:
alert = Alert(
scan_id=scan_id,
rule_id=rule.id,
alert_type=alert_data['alert_type'],
severity=alert_data['severity'],
message=alert_data['message'],
ip_address=alert_data.get('ip_address'),
port=alert_data.get('port'),
created_at=datetime.now(timezone.utc)
)
self.db.add(alert)
self.db.commit()
logger.info(f"Created alert: {alert.message}")
return alert
except Exception as e:
logger.error(f"Failed to create alert: {str(e)}")
self.db.rollback()
return None
def trigger_notifications(self, alert: Alert, rule: AlertRule):
"""
Send notifications for an alert based on rule configuration.
Args:
alert: The alert to send notifications for
rule: The rule that specifies notification settings
"""
# Email notification will be implemented in email_service.py
if rule.email_enabled:
logger.info(f"Email notification would be sent for alert {alert.id}")
# TODO: Call email service
# Webhook notification - queue for delivery
if rule.webhook_enabled:
try:
from flask import current_app
from .webhook_service import WebhookService
webhook_service = WebhookService(self.db)
# Get matching webhooks for this alert
matching_webhooks = webhook_service.get_matching_webhooks(alert)
if matching_webhooks:
# Get scheduler from app context
scheduler = getattr(current_app, 'scheduler', None)
# Queue delivery for each matching webhook
for webhook in matching_webhooks:
webhook_service.queue_webhook_delivery(
webhook.id,
alert.id,
scheduler_service=scheduler
)
logger.info(f"Queued webhook {webhook.id} ({webhook.name}) for alert {alert.id}")
else:
logger.debug(f"No matching webhooks found for alert {alert.id}")
except Exception as e:
logger.error(f"Failed to queue webhook notifications for alert {alert.id}: {e}", exc_info=True)
# Don't fail alert creation if webhook queueing fails
def acknowledge_alert(self, alert_id: int, acknowledged_by: str = "system") -> bool:
"""
Acknowledge an alert.
Args:
alert_id: ID of the alert to acknowledge
acknowledged_by: Username or system identifier
Returns:
True if successful, False otherwise
"""
try:
alert = self.db.query(Alert).filter(Alert.id == alert_id).first()
if not alert:
logger.error(f"Alert {alert_id} not found")
return False
alert.acknowledged = True
alert.acknowledged_at = datetime.now(timezone.utc)
alert.acknowledged_by = acknowledged_by
self.db.commit()
logger.info(f"Alert {alert_id} acknowledged by {acknowledged_by}")
return True
except Exception as e:
logger.error(f"Failed to acknowledge alert {alert_id}: {str(e)}")
self.db.rollback()
return False
def get_alerts_for_scan(self, scan_id: int) -> List[Alert]:
"""
Get all alerts for a specific scan.
Args:
scan_id: ID of the scan
Returns:
List of Alert objects
"""
return (
self.db.query(Alert)
.filter(Alert.scan_id == scan_id)
.order_by(Alert.severity.desc(), Alert.created_at.desc())
.all()
)
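Note that severity is stored as a plain string, so Alert.severity.desc() sorts alphabetically ('warning' > 'info' > 'critical'), which places critical alerts last. If critical-first ordering is intended, a sketch of how the query body could read with an explicit CASE ranking (assuming SQLAlchemy 1.4+ positional whens):

from sqlalchemy import case

# Hypothetical explicit ranking; lower rank sorts first
severity_rank = case(
    (Alert.severity == 'critical', 0),
    (Alert.severity == 'warning', 1),
    (Alert.severity == 'info', 2),
    else_=3,
)
alerts = (
    self.db.query(Alert)
    .filter(Alert.scan_id == scan_id)
    .order_by(severity_rank, Alert.created_at.desc())
    .all()
)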


@@ -0,0 +1,339 @@
"""
Config Service - Business logic for config management
This service handles all operations related to scan configurations,
both database-stored (primary) and file-based (deprecated).
"""
import os
from typing import Dict, List, Any, Optional
from datetime import datetime
from sqlalchemy.orm import Session
class ConfigService:
"""Business logic for config management"""
def __init__(self, db_session: Optional[Session] = None, configs_dir: str = '/app/configs'):
"""
Initialize the config service.
Args:
db_session: SQLAlchemy database session (for database operations)
configs_dir: Directory where legacy config files are stored
"""
self.db = db_session
self.configs_dir = configs_dir
# Ensure configs directory exists (for legacy YAML configs)
os.makedirs(self.configs_dir, exist_ok=True)
# ============================================================================
# Database-based Config Operations (Primary)
# ============================================================================
def create_config(self, title: str, description: Optional[str], site_ids: List[int]) -> Dict[str, Any]:
"""
Create a new scan configuration in the database.
Args:
title: Configuration title
description: Optional configuration description
site_ids: List of site IDs to include in this config
Returns:
Created config as dictionary:
{
"id": 1,
"title": "Production Scan",
"description": "...",
"site_count": 3,
"sites": [...],
"created_at": "2025-11-19T10:30:00Z",
"updated_at": "2025-11-19T10:30:00Z"
}
Raises:
ValueError: If validation fails or sites don't exist
"""
if not title or not title.strip():
raise ValueError("Title is required")
if not site_ids or len(site_ids) == 0:
raise ValueError("At least one site must be selected")
# Import models here to avoid circular imports
from web.models import ScanConfig, ScanConfigSite, Site
# Verify all sites exist
existing_sites = self.db.query(Site).filter(Site.id.in_(site_ids)).all()
if len(existing_sites) != len(site_ids):
found_ids = {s.id for s in existing_sites}
missing_ids = set(site_ids) - found_ids
raise ValueError(f"Sites not found: {missing_ids}")
# Create config
config = ScanConfig(
title=title.strip(),
description=description.strip() if description else None,
created_at=datetime.utcnow(),
updated_at=datetime.utcnow()
)
self.db.add(config)
self.db.flush() # Get the config ID
# Create associations
for site_id in site_ids:
assoc = ScanConfigSite(
config_id=config.id,
site_id=site_id,
created_at=datetime.utcnow()
)
self.db.add(assoc)
self.db.commit()
return self.get_config_by_id(config.id)
def get_config_by_id(self, config_id: int) -> Dict[str, Any]:
"""
Get a scan configuration by ID.
Args:
config_id: Configuration ID
Returns:
Config as dictionary with sites
Raises:
ValueError: If config not found
"""
from web.models import ScanConfig
config = self.db.query(ScanConfig).filter_by(id=config_id).first()
if not config:
raise ValueError(f"Config with ID {config_id} not found")
# Get associated sites
sites = []
for assoc in config.site_associations:
site = assoc.site
sites.append({
'id': site.id,
'name': site.name,
'description': site.description,
'ip_count': len(site.ips)
})
return {
'id': config.id,
'title': config.title,
'description': config.description,
'site_count': len(sites),
'sites': sites,
'created_at': config.created_at.isoformat() + 'Z' if config.created_at else None,
'updated_at': config.updated_at.isoformat() + 'Z' if config.updated_at else None
}
def list_configs_db(self) -> List[Dict[str, Any]]:
"""
List all scan configurations from database.
Returns:
List of config dictionaries with metadata
"""
from web.models import ScanConfig
configs = self.db.query(ScanConfig).order_by(ScanConfig.updated_at.desc()).all()
result = []
for config in configs:
sites = []
for assoc in config.site_associations:
site = assoc.site
sites.append({
'id': site.id,
'name': site.name
})
result.append({
'id': config.id,
'title': config.title,
'description': config.description,
'site_count': len(sites),
'sites': sites,
'created_at': config.created_at.isoformat() + 'Z' if config.created_at else None,
'updated_at': config.updated_at.isoformat() + 'Z' if config.updated_at else None
})
return result
def update_config(self, config_id: int, title: Optional[str], description: Optional[str], site_ids: Optional[List[int]]) -> Dict[str, Any]:
"""
Update a scan configuration.
Args:
config_id: Configuration ID to update
title: New title (optional)
description: New description (optional)
site_ids: New list of site IDs (optional, replaces existing)
Returns:
Updated config dictionary
Raises:
ValueError: If config not found or validation fails
"""
from web.models import ScanConfig, ScanConfigSite, Site
config = self.db.query(ScanConfig).filter_by(id=config_id).first()
if not config:
raise ValueError(f"Config with ID {config_id} not found")
# Update fields if provided
if title is not None:
if not title.strip():
raise ValueError("Title cannot be empty")
config.title = title.strip()
if description is not None:
config.description = description.strip() if description.strip() else None
# Update sites if provided
if site_ids is not None:
if len(site_ids) == 0:
raise ValueError("At least one site must be selected")
# Verify all sites exist
existing_sites = self.db.query(Site).filter(Site.id.in_(site_ids)).all()
if len(existing_sites) != len(site_ids):
found_ids = {s.id for s in existing_sites}
missing_ids = set(site_ids) - found_ids
raise ValueError(f"Sites not found: {missing_ids}")
# Remove existing associations
self.db.query(ScanConfigSite).filter_by(config_id=config_id).delete()
# Create new associations
for site_id in site_ids:
assoc = ScanConfigSite(
config_id=config_id,
site_id=site_id,
created_at=datetime.utcnow()
)
self.db.add(assoc)
config.updated_at = datetime.utcnow()
self.db.commit()
return self.get_config_by_id(config_id)
def delete_config(self, config_id: int) -> None:
"""
Delete a scan configuration from database.
This will cascade delete associated ScanConfigSite records.
Schedules and scans referencing this config will have their
config_id set to NULL.
Args:
config_id: Configuration ID to delete
Raises:
ValueError: If config not found
"""
from web.models import ScanConfig
config = self.db.query(ScanConfig).filter_by(id=config_id).first()
if not config:
raise ValueError(f"Config with ID {config_id} not found")
self.db.delete(config)
self.db.commit()
def add_site_to_config(self, config_id: int, site_id: int) -> Dict[str, Any]:
"""
Add a site to an existing config.
Args:
config_id: Configuration ID
site_id: Site ID to add
Returns:
Updated config dictionary
Raises:
ValueError: If config or site not found, or association already exists
"""
from web.models import ScanConfig, Site, ScanConfigSite
config = self.db.query(ScanConfig).filter_by(id=config_id).first()
if not config:
raise ValueError(f"Config with ID {config_id} not found")
site = self.db.query(Site).filter_by(id=site_id).first()
if not site:
raise ValueError(f"Site with ID {site_id} not found")
# Check if association already exists
existing = self.db.query(ScanConfigSite).filter_by(
config_id=config_id, site_id=site_id
).first()
if existing:
raise ValueError(f"Site '{site.name}' is already in this config")
# Create association
assoc = ScanConfigSite(
config_id=config_id,
site_id=site_id,
created_at=datetime.utcnow()
)
self.db.add(assoc)
config.updated_at = datetime.utcnow()
self.db.commit()
return self.get_config_by_id(config_id)
def remove_site_from_config(self, config_id: int, site_id: int) -> Dict[str, Any]:
"""
Remove a site from a config.
Args:
config_id: Configuration ID
site_id: Site ID to remove
Returns:
Updated config dictionary
Raises:
ValueError: If config not found, or removing would leave config empty
"""
from web.models import ScanConfig, ScanConfigSite
config = self.db.query(ScanConfig).filter_by(id=config_id).first()
if not config:
raise ValueError(f"Config with ID {config_id} not found")
# Check if this would leave the config empty
current_site_count = len(config.site_associations)
if current_site_count <= 1:
raise ValueError("Cannot remove last site from config. Delete the config instead.")
# Remove association
deleted = self.db.query(ScanConfigSite).filter_by(
config_id=config_id, site_id=site_id
).delete()
if deleted == 0:
raise ValueError(f"Site with ID {site_id} is not in this config")
config.updated_at = datetime.utcnow()
self.db.commit()
return self.get_config_by_id(config_id)
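A usage sketch for the database-backed config API above; the session fixture and the IDs are assumptions for illustration:

svc = ConfigService(db_session=session)
cfg = svc.create_config(
    title='Production Scan',
    description='Weekly perimeter sweep',
    site_ids=[1, 2, 3],
)

# site_ids replaces the whole site list; pass None to leave a field unchanged
svc.update_config(cfg['id'], title=None, description=None, site_ids=[2, 3])
svc.remove_site_from_config(cfg['id'], site_id=2)   # must not empty the config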


@@ -16,10 +16,10 @@ from sqlalchemy.orm import Session, joinedload
from web.models import (
Scan, ScanSite, ScanIP, ScanPort, ScanService as ScanServiceModel,
ScanCertificate, ScanTLSVersion
ScanCertificate, ScanTLSVersion, Site, ScanSiteAssociation, SiteIP
)
from web.utils.pagination import paginate, PaginatedResult
from web.utils.validators import validate_config_file, validate_scan_status
from web.utils.validators import validate_scan_status
logger = logging.getLogger(__name__)
@@ -41,8 +41,9 @@ class ScanService:
"""
self.db = db_session
def trigger_scan(self, config_file: str, triggered_by: str = 'manual',
schedule_id: Optional[int] = None, scheduler=None) -> int:
def trigger_scan(self, config_id: int,
triggered_by: str = 'manual', schedule_id: Optional[int] = None,
scheduler=None) -> int:
"""
Trigger a new scan.
@@ -50,7 +51,7 @@ class ScanService:
queues the scan for background execution.
Args:
config_file: Path to YAML configuration file
config_id: Database config ID
triggered_by: Source that triggered scan (manual, scheduled, api)
schedule_id: Optional schedule ID if triggered by schedule
scheduler: Optional SchedulerService instance for queuing background jobs
@@ -59,30 +60,21 @@ class ScanService:
Scan ID of the created scan
Raises:
ValueError: If config file is invalid
ValueError: If config is invalid
"""
# Validate config file
is_valid, error_msg = validate_config_file(config_file)
if not is_valid:
raise ValueError(f"Invalid config file: {error_msg}")
from web.models import ScanConfig
# Convert config_file to full path if it's just a filename
if not config_file.startswith('/'):
config_path = f'/app/configs/{config_file}'
else:
config_path = config_file
# Validate config exists
db_config = self.db.query(ScanConfig).filter_by(id=config_id).first()
if not db_config:
raise ValueError(f"Config with ID {config_id} not found")
# Load config to get title
import yaml
with open(config_path, 'r') as f:
config = yaml.safe_load(f)
# Create scan record
# Create scan record with config_id
scan = Scan(
timestamp=datetime.utcnow(),
status='running',
config_file=config_file,
title=config.get('title', 'Untitled Scan'),
config_id=config_id,
title=db_config.title,
triggered_by=triggered_by,
schedule_id=schedule_id,
created_at=datetime.utcnow()
@@ -92,12 +84,12 @@ class ScanService:
self.db.commit()
self.db.refresh(scan)
logger.info(f"Scan {scan.id} triggered via {triggered_by}")
logger.info(f"Scan {scan.id} triggered via {triggered_by} with config_id={config_id}")
# Queue background job if scheduler provided
if scheduler:
try:
job_id = scheduler.queue_scan(scan.id, config_file)
job_id = scheduler.queue_scan(scan.id, config_id=config_id)
logger.info(f"Scan {scan.id} queued for background execution (job_id={job_id})")
except Exception as e:
logger.error(f"Failed to queue scan {scan.id}: {str(e)}")
@@ -265,58 +257,128 @@ class ScanService:
elif scan.status == 'failed':
status_info['progress'] = 'Failed'
status_info['error_message'] = scan.error_message
elif scan.status == 'cancelled':
status_info['progress'] = 'Cancelled'
status_info['error_message'] = scan.error_message
return status_info
def cleanup_orphaned_scans(self) -> int:
def get_scans_by_ip(self, ip_address: str, limit: int = 10) -> List[Dict[str, Any]]:
"""
Clean up orphaned scans that are stuck in 'running' status.
Get the last N scans containing a specific IP address.
Args:
ip_address: IP address to search for
limit: Maximum number of scans to return (default: 10)
Returns:
List of scan summary dictionaries, most recent first
"""
scans = (
self.db.query(Scan)
.join(ScanIP, Scan.id == ScanIP.scan_id)
.filter(ScanIP.ip_address == ip_address)
.filter(Scan.status == 'completed')
.order_by(Scan.timestamp.desc())
.limit(limit)
.all()
)
return [self._scan_to_summary_dict(scan) for scan in scans]
def cleanup_orphaned_scans(self) -> dict:
"""
Clean up orphaned scans with smart recovery.
For scans stuck in 'running' or 'finalizing' status:
- If output files exist: mark as 'completed' (smart recovery)
- If no output files: mark as 'failed'
This should be called on application startup to handle scans that
were running when the system crashed or was restarted.
Scans in 'running' status are marked as 'failed' with an appropriate
error message indicating they were orphaned.
Returns:
Number of orphaned scans cleaned up
Dictionary with cleanup results: {'recovered': N, 'failed': N, 'total': N}
"""
# Find all scans with status='running'
orphaned_scans = self.db.query(Scan).filter(Scan.status == 'running').all()
# Find all scans with status='running' or 'finalizing'
orphaned_scans = self.db.query(Scan).filter(
Scan.status.in_(['running', 'finalizing'])
).all()
if not orphaned_scans:
logger.info("No orphaned scans found")
return 0
return {'recovered': 0, 'failed': 0, 'total': 0}
count = len(orphaned_scans)
logger.warning(f"Found {count} orphaned scan(s) in 'running' status, marking as failed")
logger.warning(f"Found {count} orphaned scan(s), attempting smart recovery")
recovered_count = 0
failed_count = 0
output_dir = Path('/app/output')
# Mark each orphaned scan as failed
for scan in orphaned_scans:
# Check for existing output files
output_exists = False
output_files_found = []
# Check paths stored in database
if scan.json_path and Path(scan.json_path).exists():
output_exists = True
output_files_found.append('json')
if scan.html_path and Path(scan.html_path).exists():
output_files_found.append('html')
if scan.zip_path and Path(scan.zip_path).exists():
output_files_found.append('zip')
# Also check by timestamp pattern if paths not stored yet
if not output_exists and scan.started_at and output_dir.exists():
timestamp_pattern = scan.started_at.strftime('%Y%m%d')
for json_file in output_dir.glob(f'scan_report_{timestamp_pattern}*.json'):
output_exists = True
output_files_found.append('json')
# Update scan record with found paths
scan.json_path = str(json_file)
html_file = json_file.with_suffix('.html')
if html_file.exists():
scan.html_path = str(html_file)
output_files_found.append('html')
zip_file = json_file.with_suffix('.zip')
if zip_file.exists():
scan.zip_path = str(zip_file)
output_files_found.append('zip')
break
if output_exists:
# Smart recovery: outputs exist, mark as completed
scan.status = 'completed'
scan.error_message = f'Recovered from orphaned state (output files found: {", ".join(output_files_found)})'
recovered_count += 1
logger.info(f"Recovered orphaned scan {scan.id} as completed (files: {output_files_found})")
else:
# No outputs: mark as failed
scan.status = 'failed'
scan.completed_at = datetime.utcnow()
scan.error_message = (
"Scan was interrupted by system shutdown or crash. "
"The scan was running but did not complete normally."
"No output files were generated."
)
failed_count += 1
logger.info(f"Marked orphaned scan {scan.id} as failed (no output files)")
# Calculate duration if we have a started_at time
scan.completed_at = datetime.utcnow()
if scan.started_at:
duration = (datetime.utcnow() - scan.started_at).total_seconds()
scan.duration = duration
logger.info(
f"Marked orphaned scan {scan.id} as failed "
f"(started: {scan.started_at.isoformat() if scan.started_at else 'unknown'})"
)
scan.duration = (datetime.utcnow() - scan.started_at).total_seconds()
self.db.commit()
logger.info(f"Cleaned up {count} orphaned scan(s)")
logger.info(f"Cleaned up {count} orphaned scan(s): {recovered_count} recovered, {failed_count} failed")
return count
return {
'recovered': recovered_count,
'failed': failed_count,
'total': count
}
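A startup-caller sketch handling the new dict return type (logger name assumed):

stats = scan_service.cleanup_orphaned_scans()
if stats['total']:
    logger.info(
        "Orphan cleanup: %d recovered, %d marked failed",
        stats['recovered'], stats['failed'],
    )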
def _save_scan_to_db(self, report: Dict[str, Any], scan_id: int,
status: str = 'completed') -> None:
status: str = 'completed', output_paths: Dict = None) -> None:
"""
Save scan results to database.
@@ -327,6 +389,7 @@ class ScanService:
report: Scan report dictionary from scanner
scan_id: Scan ID to update
status: Final scan status (completed or failed)
output_paths: Dictionary with paths to generated files {'json': Path, 'html': Path, 'zip': Path}
"""
scan = self.db.query(Scan).filter(Scan.id == scan_id).first()
if not scan:
@@ -337,6 +400,17 @@ class ScanService:
scan.duration = report.get('scan_duration')
scan.completed_at = datetime.utcnow()
# Save output file paths
if output_paths:
if 'json' in output_paths:
scan.json_path = str(output_paths['json'])
if 'html' in output_paths:
scan.html_path = str(output_paths['html'])
if 'zip' in output_paths:
scan.zip_path = str(output_paths['zip'])
if 'screenshots' in output_paths:
scan.screenshot_dir = str(output_paths['screenshots'])
# Map report data to database models
self._map_report_to_models(report, scan)
@@ -366,6 +440,34 @@ class ScanService:
self.db.add(site)
self.db.flush() # Get site.id for foreign key
# Create ScanSiteAssociation if this site exists in the database
# This links the scan to reusable site definitions
master_site = (
self.db.query(Site)
.filter(Site.name == site_data['name'])
.first()
)
if master_site:
# Check if association already exists (avoid duplicates)
existing_assoc = (
self.db.query(ScanSiteAssociation)
.filter(
ScanSiteAssociation.scan_id == scan_obj.id,
ScanSiteAssociation.site_id == master_site.id
)
.first()
)
if not existing_assoc:
assoc = ScanSiteAssociation(
scan_id=scan_obj.id,
site_id=master_site.id,
created_at=datetime.utcnow()
)
self.db.add(assoc)
logger.debug(f"Created association between scan {scan_obj.id} and site '{master_site.name}' (id={master_site.id})")
# Process each IP in this site
for ip_data in site_data.get('ips', []):
# Create ScanIP record
@@ -419,9 +521,10 @@ class ScanService:
# Process certificate and TLS info if present
http_info = service_data.get('http_info', {})
if http_info.get('certificate'):
ssl_tls = http_info.get('ssl_tls', {})
if ssl_tls.get('certificate'):
self._process_certificate(
http_info['certificate'],
ssl_tls,
scan_obj.id,
service.id
)
@@ -459,16 +562,19 @@ class ScanService:
return service
return None
def _process_certificate(self, cert_data: Dict[str, Any], scan_id: int,
def _process_certificate(self, ssl_tls_data: Dict[str, Any], scan_id: int,
service_id: int) -> None:
"""
Process certificate and TLS version data.
Args:
cert_data: Certificate data dictionary
ssl_tls_data: SSL/TLS data dictionary containing 'certificate' and 'tls_versions'
scan_id: Scan ID
service_id: Service ID
"""
# Extract certificate data from ssl_tls structure
cert_data = ssl_tls_data.get('certificate', {})
# Create ScanCertificate record
cert = ScanCertificate(
scan_id=scan_id,
@@ -486,7 +592,7 @@ class ScanService:
self.db.flush()
# Process TLS versions
tls_versions = cert_data.get('tls_versions', {})
tls_versions = ssl_tls_data.get('tls_versions', {})
for version, version_data in tls_versions.items():
tls = ScanTLSVersion(
scan_id=scan_id,
@@ -535,7 +641,7 @@ class ScanService:
'duration': scan.duration,
'status': scan.status,
'title': scan.title,
'config_file': scan.config_file,
'config_id': scan.config_id,
'json_path': scan.json_path,
'html_path': scan.html_path,
'zip_path': scan.zip_path,
@@ -561,23 +667,54 @@ class ScanService:
'duration': scan.duration,
'status': scan.status,
'title': scan.title,
'config_id': scan.config_id,
'triggered_by': scan.triggered_by,
'created_at': scan.created_at.isoformat() if scan.created_at else None
}
def _site_to_dict(self, site: ScanSite) -> Dict[str, Any]:
"""Convert ScanSite to dictionary."""
# Look up the master Site ID from ScanSiteAssociation
master_site_id = None
assoc = (
self.db.query(ScanSiteAssociation)
.filter(
ScanSiteAssociation.scan_id == site.scan_id,
)
.join(Site)
.filter(Site.name == site.site_name)
.first()
)
if assoc:
master_site_id = assoc.site_id
return {
'id': site.id,
'name': site.site_name,
'ips': [self._ip_to_dict(ip) for ip in site.ips]
'site_id': master_site_id, # The actual Site ID for config updates
'ips': [self._ip_to_dict(ip, master_site_id) for ip in site.ips]
}
def _ip_to_dict(self, ip: ScanIP) -> Dict[str, Any]:
def _ip_to_dict(self, ip: ScanIP, site_id: Optional[int] = None) -> Dict[str, Any]:
"""Convert ScanIP to dictionary."""
# Look up the SiteIP ID for this IP address in the master Site
site_ip_id = None
if site_id:
site_ip = (
self.db.query(SiteIP)
.filter(
SiteIP.site_id == site_id,
SiteIP.ip_address == ip.ip_address
)
.first()
)
if site_ip:
site_ip_id = site_ip.id
return {
'id': ip.id,
'address': ip.ip_address,
'site_ip_id': site_ip_id, # The actual SiteIP ID for config updates
'ping_expected': ip.ping_expected,
'ping_actual': ip.ping_actual,
'ports': [self._port_to_dict(port) for port in ip.ports]
@@ -675,6 +812,8 @@ class ScanService:
{
'scan1': {...}, # Scan 1 summary
'scan2': {...}, # Scan 2 summary
'same_config': bool, # Whether both scans used the same config
'config_warning': str | None, # Warning message if configs differ
'ports': {
'added': [...],
'removed': [...],
@@ -700,6 +839,22 @@ class ScanService:
if not scan1 or not scan2:
return None
# Check if scans use the same configuration
config1 = scan1.get('config_id')
config2 = scan2.get('config_id')
same_config = (config1 == config2) and (config1 is not None)
# Generate warning message if configs differ
config_warning = None
if not same_config:
config_warning = (
f"These scans use different configurations. "
f"Scan #{scan1_id} used config_id={config1 or 'unknown'} and "
f"Scan #{scan2_id} used config_id={config2 or 'unknown'}. "
f"The comparison may show all changes as additions/removals if the scans "
f"cover different IP ranges or infrastructure."
)
# Extract port data
ports1 = self._extract_ports_from_scan(scan1)
ports2 = self._extract_ports_from_scan(scan2)
@@ -733,14 +888,18 @@ class ScanService:
'id': scan1['id'],
'timestamp': scan1['timestamp'],
'title': scan1['title'],
'status': scan1['status']
'status': scan1['status'],
'config_id': config1
},
'scan2': {
'id': scan2['id'],
'timestamp': scan2['timestamp'],
'title': scan2['title'],
'status': scan2['status']
'status': scan2['status'],
'config_id': config2
},
'same_config': same_config,
'config_warning': config_warning,
'ports': ports_comparison,
'services': services_comparison,
'certificates': certificates_comparison,
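A consumer sketch for the comparison payload above; the surrounding method name is not shown in this hunk, so compare_scans is assumed:

result = scan_service.compare_scans(scan1_id=41, scan2_id=57)
if result and not result['same_config']:
    print(result['config_warning'])   # surface the cross-config caveat in the UI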


@@ -6,14 +6,13 @@ scheduled scans with cron expressions.
"""
import logging
import os
from datetime import datetime
from datetime import datetime, timezone
from typing import Any, Dict, List, Optional, Tuple
from croniter import croniter
from sqlalchemy.orm import Session
from web.models import Schedule, Scan
from web.models import Schedule, Scan, ScanConfig
from web.utils.pagination import paginate, PaginatedResult
logger = logging.getLogger(__name__)
@@ -39,7 +38,7 @@ class ScheduleService:
def create_schedule(
self,
name: str,
config_file: str,
config_id: int,
cron_expression: str,
enabled: bool = True
) -> int:
@@ -48,7 +47,7 @@ class ScheduleService:
Args:
name: Human-readable schedule name
config_file: Path to YAML configuration file
config_id: Database config ID
cron_expression: Cron expression (e.g., '0 2 * * *')
enabled: Whether schedule is active
@@ -56,36 +55,32 @@ class ScheduleService:
Schedule ID of the created schedule
Raises:
ValueError: If cron expression is invalid or config file doesn't exist
ValueError: If cron expression is invalid or config doesn't exist
"""
# Validate cron expression
is_valid, error_msg = self.validate_cron_expression(cron_expression)
if not is_valid:
raise ValueError(f"Invalid cron expression: {error_msg}")
# Validate config file exists
# If config_file is just a filename, prepend the configs directory
if not config_file.startswith('/'):
config_file_path = os.path.join('/app/configs', config_file)
else:
config_file_path = config_file
if not os.path.isfile(config_file_path):
raise ValueError(f"Config file not found: {config_file}")
# Validate config exists
db_config = self.db.query(ScanConfig).filter_by(id=config_id).first()
if not db_config:
raise ValueError(f"Config with ID {config_id} not found")
# Calculate next run time
next_run = self.calculate_next_run(cron_expression) if enabled else None
# Create schedule record
now_utc = datetime.now(timezone.utc)
schedule = Schedule(
name=name,
config_file=config_file,
config_id=config_id,
cron_expression=cron_expression,
enabled=enabled,
last_run=None,
next_run=next_run,
created_at=datetime.utcnow(),
updated_at=datetime.utcnow()
created_at=now_utc,
updated_at=now_utc
)
self.db.add(schedule)
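A creation sketch using the new config_id parameter (the ID and service variable are assumptions):

schedule_id = schedule_service.create_schedule(
    name='Nightly perimeter scan',
    config_id=1,
    cron_expression='0 2 * * *',   # 02:00 daily, standard crontab format
    enabled=True,
)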
@@ -109,7 +104,14 @@ class ScheduleService:
Raises:
ValueError: If schedule not found
"""
schedule = self.db.query(Schedule).filter(Schedule.id == schedule_id).first()
from sqlalchemy.orm import joinedload
schedule = (
self.db.query(Schedule)
.options(joinedload(Schedule.config))
.filter(Schedule.id == schedule_id)
.first()
)
if not schedule:
raise ValueError(f"Schedule {schedule_id} not found")
@@ -144,8 +146,10 @@ class ScheduleService:
'pages': int
}
"""
# Build query
query = self.db.query(Schedule)
from sqlalchemy.orm import joinedload
# Build query and eagerly load config relationship
query = self.db.query(Schedule).options(joinedload(Schedule.config))
# Apply filter
if enabled_filter is not None:
@@ -200,17 +204,11 @@ class ScheduleService:
if schedule.enabled or updates.get('enabled', False):
updates['next_run'] = self.calculate_next_run(updates['cron_expression'])
# Validate config file if being updated
if 'config_file' in updates:
config_file = updates['config_file']
# If config_file is just a filename, prepend the configs directory
if not config_file.startswith('/'):
config_file_path = os.path.join('/app/configs', config_file)
else:
config_file_path = config_file
if not os.path.isfile(config_file_path):
raise ValueError(f"Config file not found: {updates['config_file']}")
# Validate config_id if being updated
if 'config_id' in updates:
db_config = self.db.query(ScanConfig).filter_by(id=updates['config_id']).first()
if not db_config:
raise ValueError(f"Config with ID {updates['config_id']} not found")
# Handle enabled toggle
if 'enabled' in updates:
@@ -227,7 +225,7 @@ class ScheduleService:
if hasattr(schedule, key):
setattr(schedule, key, value)
schedule.updated_at = datetime.utcnow()
schedule.updated_at = datetime.now(timezone.utc)
self.db.commit()
self.db.refresh(schedule)
@@ -310,7 +308,7 @@ class ScheduleService:
schedule.last_run = last_run
schedule.next_run = next_run
schedule.updated_at = datetime.utcnow()
schedule.updated_at = datetime.now(timezone.utc)
self.db.commit()
@@ -323,23 +321,43 @@ class ScheduleService:
Validate a cron expression.
Args:
cron_expr: Cron expression to validate
cron_expr: Cron expression to validate in standard crontab format
Format: minute hour day month day_of_week
Day of week: 0=Sunday, 1=Monday, ..., 6=Saturday
(APScheduler will convert this to its internal format automatically)
Returns:
Tuple of (is_valid, error_message)
- (True, None) if valid
- (False, error_message) if invalid
Note:
This validates using croniter which uses standard crontab format.
APScheduler's from_crontab() will handle the conversion when the
schedule is registered with the scheduler.
"""
try:
# Try to create a croniter instance
base_time = datetime.utcnow()
# croniter uses standard crontab format (Sunday=0)
from datetime import timezone
base_time = datetime.now(timezone.utc)
cron = croniter(cron_expr, base_time)
# Try to get the next run time (validates the expression)
cron.get_next(datetime)
# Validate basic format (5 fields)
fields = cron_expr.split()
if len(fields) != 5:
return (False, f"Cron expression must have 5 fields (minute hour day month day_of_week), got {len(fields)}")
return (True, None)
except (ValueError, KeyError) as e:
error_msg = str(e)
# Add helpful hint for day_of_week errors
if "day" in error_msg.lower() and len(cron_expr.split()) >= 5:
hint = "\nNote: Use standard crontab format where 0=Sunday, 1=Monday, ..., 6=Saturday"
return (False, f"{error_msg}{hint}")
return (False, str(e))
except Exception as e:
return (False, f"Unexpected error: {str(e)}")
@@ -357,17 +375,24 @@ class ScheduleService:
from_time: Base time (defaults to now UTC)
Returns:
Next run datetime (UTC)
Next run datetime (UTC, timezone-aware)
Raises:
ValueError: If cron expression is invalid
"""
if from_time is None:
from_time = datetime.utcnow()
from_time = datetime.now(timezone.utc)
try:
cron = croniter(cron_expr, from_time)
return cron.get_next(datetime)
next_run = cron.get_next(datetime)
# croniter returns naive datetime, so we need to add timezone info
# Since we're using UTC for all calculations, add UTC timezone
if next_run.tzinfo is None:
next_run = next_run.replace(tzinfo=timezone.utc)
return next_run
except Exception as e:
raise ValueError(f"Invalid cron expression '{cron_expr}': {str(e)}")
@@ -400,7 +425,7 @@ class ScheduleService:
'timestamp': scan.timestamp.isoformat() if scan.timestamp else None,
'status': scan.status,
'title': scan.title,
'config_file': scan.config_file
'config_id': scan.config_id
}
for scan in scans
]
@@ -415,10 +440,16 @@ class ScheduleService:
Returns:
Dictionary representation
"""
# Get config title if relationship is loaded
config_name = None
if schedule.config:
config_name = schedule.config.title
return {
'id': schedule.id,
'name': schedule.name,
'config_file': schedule.config_file,
'config_id': schedule.config_id,
'config_name': config_name,
'cron_expression': schedule.cron_expression,
'enabled': schedule.enabled,
'last_run': schedule.last_run.isoformat() if schedule.last_run else None,
@@ -433,7 +464,7 @@ class ScheduleService:
Format datetime as relative time.
Args:
dt: Datetime to format (UTC)
dt: Datetime to format (UTC, can be naive or aware)
Returns:
Human-readable relative time (e.g., "in 2 hours", "yesterday")
@@ -441,7 +472,13 @@ class ScheduleService:
if dt is None:
return None
now = datetime.utcnow()
# Ensure both datetimes are timezone-aware for comparison
now = datetime.now(timezone.utc)
# If dt is naive, assume it's UTC and add timezone info
if dt.tzinfo is None:
dt = dt.replace(tzinfo=timezone.utc)
diff = dt - now
# Future times
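The same defense in isolation, as a hypothetical helper:

from datetime import datetime, timezone

def ensure_aware_utc(dt):
    """Treat naive datetimes as UTC; leave aware ones untouched."""
    return dt.replace(tzinfo=timezone.utc) if dt.tzinfo is None else dt

# Subtraction no longer raises "can't subtract offset-naive and offset-aware datetimes"
diff = ensure_aware_utc(datetime(2025, 11, 25, 12, 0)) - datetime.now(timezone.utc)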


@@ -131,7 +131,7 @@ class SchedulerService:
try:
self.add_scheduled_scan(
schedule_id=schedule.id,
config_file=schedule.config_file,
config_id=schedule.config_id,
cron_expression=schedule.cron_expression
)
logger.info(f"Loaded schedule {schedule.id}: '{schedule.name}'")
@@ -149,13 +149,58 @@ class SchedulerService:
except Exception as e:
logger.error(f"Error loading schedules on startup: {str(e)}", exc_info=True)
def queue_scan(self, scan_id: int, config_file: str) -> str:
@staticmethod
def validate_cron_expression(cron_expression: str) -> tuple[bool, str]:
"""
Validate a cron expression and provide helpful feedback.
Args:
cron_expression: Cron expression to validate
Returns:
Tuple of (is_valid: bool, message: str)
- If valid: (True, "Valid cron expression")
- If invalid: (False, "Error message with details")
Note:
Standard crontab format: minute hour day month day_of_week
Day of week: 0=Sunday, 1=Monday, ..., 6=Saturday (or 7=Sunday)
"""
from apscheduler.triggers.cron import CronTrigger
try:
# Parse the expression; the trigger object is discarded, parsing alone validates it
CronTrigger.from_crontab(cron_expression)
# Validate basic format (5 fields)
fields = cron_expression.split()
if len(fields) != 5:
return False, f"Cron expression must have 5 fields (minute hour day month day_of_week), got {len(fields)}"
return True, "Valid cron expression"
except (ValueError, KeyError) as e:
error_msg = str(e)
# Provide helpful hints for common errors
if "day_of_week" in error_msg.lower() or (len(cron_expression.split()) >= 5):
# Check if day_of_week field might be using APScheduler format by mistake
fields = cron_expression.split()
if len(fields) == 5:
dow_field = fields[4]
if dow_field.isdigit():
hint = "\nNote: Use standard crontab format where 0=Sunday, 1=Monday, ..., 6=Saturday"
return False, f"Invalid cron expression: {error_msg}{hint}"
return False, f"Invalid cron expression: {error_msg}"
def queue_scan(self, scan_id: int, config_id: int) -> str:
"""
Queue a scan for immediate background execution.
Args:
scan_id: Database ID of the scan
config_file: Path to YAML configuration file
config_id: Database config ID
Returns:
Job ID from APScheduler
@@ -169,7 +214,7 @@ class SchedulerService:
# Add job to run immediately
job = self.scheduler.add_job(
func=execute_scan,
args=[scan_id, config_file, self.db_url],
kwargs={'scan_id': scan_id, 'config_id': config_id, 'db_url': self.db_url},
id=f'scan_{scan_id}',
name=f'Scan {scan_id}',
replace_existing=True,
@@ -179,15 +224,19 @@ class SchedulerService:
logger.info(f"Queued scan {scan_id} for background execution (job_id={job.id})")
return job.id
def add_scheduled_scan(self, schedule_id: int, config_file: str,
def add_scheduled_scan(self, schedule_id: int, config_id: int,
cron_expression: str) -> str:
"""
Add a recurring scheduled scan.
Args:
schedule_id: Database ID of the schedule
config_file: Path to YAML configuration file
config_id: Database config ID
cron_expression: Cron expression (e.g., "0 2 * * *" for 2am daily)
IMPORTANT: Use standard crontab format where:
- Day of week: 0 = Sunday, 1 = Monday, ..., 6 = Saturday
- APScheduler automatically converts to its internal format
- from_crontab() handles the conversion properly
Returns:
Job ID from APScheduler
@@ -195,18 +244,29 @@ class SchedulerService:
Raises:
RuntimeError: If scheduler not initialized
ValueError: If cron expression is invalid
Note:
APScheduler internally uses Monday=0, but from_crontab() accepts
standard crontab format (Sunday=0) and converts it automatically.
"""
if not self.scheduler:
raise RuntimeError("Scheduler not initialized. Call init_scheduler() first.")
from apscheduler.triggers.cron import CronTrigger
# Validate cron expression first to provide helpful error messages
is_valid, message = self.validate_cron_expression(cron_expression)
if not is_valid:
raise ValueError(message)
# Create cron trigger from expression using local timezone
# This allows users to specify times in their local timezone
# from_crontab() parses standard crontab format (Sunday=0)
# and converts to APScheduler's internal format (Monday=0) automatically
try:
trigger = CronTrigger.from_crontab(cron_expression)
# timezone defaults to local system timezone
except (ValueError, KeyError) as e:
# This should not happen due to validation above, but catch anyway
raise ValueError(f"Invalid cron expression '{cron_expression}': {str(e)}")
# Add cron job
@@ -283,22 +343,27 @@ class SchedulerService:
# Create and trigger scan
scan_service = ScanService(session)
scan_id = scan_service.trigger_scan(
config_file=schedule['config_file'],
config_id=schedule['config_id'],
triggered_by='scheduled',
schedule_id=schedule_id,
scheduler=None # Don't pass scheduler to avoid recursion
)
# Queue the scan for execution
self.queue_scan(scan_id, schedule['config_file'])
self.queue_scan(scan_id, schedule['config_id'])
# Update schedule's last_run and next_run
from croniter import croniter
next_run = croniter(schedule['cron_expression'], datetime.utcnow()).get_next(datetime)
now_utc = datetime.now(timezone.utc)
next_run = croniter(schedule['cron_expression'], now_utc).get_next(datetime)
# croniter returns naive datetime, add UTC timezone
if next_run.tzinfo is None:
next_run = next_run.replace(tzinfo=timezone.utc)
schedule_service.update_run_times(
schedule_id=schedule_id,
last_run=datetime.utcnow(),
last_run=now_utc,
next_run=next_run
)


@@ -0,0 +1,683 @@
"""
Site service for managing reusable site definitions.
This service handles the business logic for creating, updating, and managing
sites with their associated CIDR ranges and IP-level overrides.
"""
import ipaddress
import json
import logging
from datetime import datetime
from typing import Any, Dict, List, Optional
from sqlalchemy import func
from sqlalchemy.orm import Session, joinedload
from web.models import (
Site, SiteIP, ScanSiteAssociation
)
from web.utils.pagination import paginate, PaginatedResult
logger = logging.getLogger(__name__)
class SiteService:
"""
Service for managing reusable site definitions.
Handles site lifecycle: creation, updates, deletion (with safety checks),
CIDR management, and IP-level overrides.
"""
def __init__(self, db_session: Session):
"""
Initialize site service.
Args:
db_session: SQLAlchemy database session
"""
self.db = db_session
def create_site(self, name: str, description: Optional[str] = None) -> Dict[str, Any]:
"""
Create a new site.
Args:
name: Unique site name
description: Optional site description
Returns:
Dictionary with created site data
Raises:
ValueError: If site name already exists
"""
# Validate site name is unique
existing = self.db.query(Site).filter(Site.name == name).first()
if existing:
raise ValueError(f"Site with name '{name}' already exists")
# Create site (can be empty, IPs added separately)
site = Site(
name=name,
description=description,
created_at=datetime.utcnow(),
updated_at=datetime.utcnow()
)
self.db.add(site)
self.db.commit()
self.db.refresh(site)
logger.info(f"Created site '{name}' (id={site.id})")
return self._site_to_dict(site)
def update_site(self, site_id: int, name: Optional[str] = None,
description: Optional[str] = None) -> Dict[str, Any]:
"""
Update site metadata (name and/or description).
Args:
site_id: Site ID to update
name: New site name (must be unique)
description: New description
Returns:
Dictionary with updated site data
Raises:
ValueError: If site not found or name already exists
"""
site = self.db.query(Site).filter(Site.id == site_id).first()
if not site:
raise ValueError(f"Site with id {site_id} not found")
# Update name if provided
if name is not None and name != site.name:
# Check uniqueness
existing = self.db.query(Site).filter(
Site.name == name,
Site.id != site_id
).first()
if existing:
raise ValueError(f"Site with name '{name}' already exists")
site.name = name
# Update description if provided
if description is not None:
site.description = description
site.updated_at = datetime.utcnow()
self.db.commit()
self.db.refresh(site)
logger.info(f"Updated site {site_id} ('{site.name}')")
return self._site_to_dict(site)
def delete_site(self, site_id: int) -> None:
"""
Delete a site.
Prevents deletion if the site is used in any scan (per user requirement).
Args:
site_id: Site ID to delete
Raises:
ValueError: If site not found or is used in scans
"""
site = self.db.query(Site).filter(Site.id == site_id).first()
if not site:
raise ValueError(f"Site with id {site_id} not found")
# Check if site is used in any scans
usage_count = (
self.db.query(func.count(ScanSiteAssociation.id))
.filter(ScanSiteAssociation.site_id == site_id)
.scalar()
)
if usage_count > 0:
raise ValueError(
f"Cannot delete site '{site.name}': it is used in {usage_count} scan(s). "
f"Sites that have been used in scans cannot be deleted."
)
# Safe to delete
self.db.delete(site)
self.db.commit()
logger.info(f"Deleted site {site_id} ('{site.name}')")
def get_site(self, site_id: int) -> Optional[Dict[str, Any]]:
"""
Get site details.
Args:
site_id: Site ID to retrieve
Returns:
Dictionary with site data, or None if not found
"""
site = (
self.db.query(Site)
.filter(Site.id == site_id)
.first()
)
if not site:
return None
return self._site_to_dict(site)
def get_site_by_name(self, name: str) -> Optional[Dict[str, Any]]:
"""
Get site details by name.
Args:
name: Site name to retrieve
Returns:
Dictionary with site data, or None if not found
"""
site = (
self.db.query(Site)
.filter(Site.name == name)
.first()
)
if not site:
return None
return self._site_to_dict(site)
def list_sites(self, page: int = 1, per_page: int = 20) -> PaginatedResult:
"""
List all sites with pagination.
Args:
page: Page number (1-indexed)
per_page: Number of items per page
Returns:
PaginatedResult with site data
"""
query = (
self.db.query(Site)
.order_by(Site.name)
)
return paginate(query, page, per_page, self._site_to_dict)
def list_all_sites(self) -> List[Dict[str, Any]]:
"""
List all sites without pagination (for dropdowns, etc.).
Returns:
List of site dictionaries
"""
sites = (
self.db.query(Site)
.order_by(Site.name)
.all()
)
return [self._site_to_dict(site) for site in sites]
def get_global_ip_stats(self) -> Dict[str, int]:
"""
Get global IP statistics across all sites.
Returns:
Dictionary with:
- total_ips: Total count of IP entries (including duplicates)
- unique_ips: Count of distinct IP addresses
- duplicate_ips: Number of duplicate entries (total - unique)
"""
# Total IP entries
total_ips = (
self.db.query(func.count(SiteIP.id))
.scalar() or 0
)
# Unique IP addresses
unique_ips = (
self.db.query(func.count(func.distinct(SiteIP.ip_address)))
.scalar() or 0
)
return {
'total_ips': total_ips,
'unique_ips': unique_ips,
'duplicate_ips': total_ips - unique_ips
}
def bulk_add_ips_from_cidr(self, site_id: int, cidr: str,
expected_ping: Optional[bool] = None,
expected_tcp_ports: Optional[List[int]] = None,
expected_udp_ports: Optional[List[int]] = None) -> Dict[str, Any]:
"""
Expand a CIDR range and add all IPs to a site.
CIDRs are NOT stored - they are just used to generate IP records.
Args:
site_id: Site ID
cidr: CIDR notation (e.g., "10.0.0.0/24")
expected_ping: Expected ping response for all IPs
expected_tcp_ports: List of expected TCP ports for all IPs
expected_udp_ports: List of expected UDP ports for all IPs
Returns:
Dictionary with:
- cidr: The CIDR that was expanded
- ip_count: Number of IPs created
- ips_added: List of IP addresses created
- ips_skipped: List of IPs that already existed
Raises:
ValueError: If site not found or CIDR is invalid/too large
"""
site = self.db.query(Site).filter(Site.id == site_id).first()
if not site:
raise ValueError(f"Site with id {site_id} not found")
# Validate CIDR format and size
try:
network = ipaddress.ip_network(cidr, strict=False)
except ValueError as e:
raise ValueError(f"Invalid CIDR notation '{cidr}': {str(e)}")
# Enforce CIDR size limits (max /24 for IPv4, /64 for IPv6)
if isinstance(network, ipaddress.IPv4Network) and network.prefixlen < 24:
raise ValueError(
f"CIDR '{cidr}' is too large ({network.num_addresses} IPs). "
f"Maximum allowed is /24 (256 IPs) for IPv4."
)
elif isinstance(network, ipaddress.IPv6Network) and network.prefixlen < 64:
raise ValueError(
f"CIDR '{cidr}' is too large. "
f"Maximum allowed is /64 for IPv6."
)
# Expand CIDR to individual IPs (no cidr_id since we're not storing CIDR)
ip_count, ips_added, ips_skipped = self._expand_cidr_to_ips(
site_id=site_id,
network=network,
expected_ping=expected_ping,
expected_tcp_ports=expected_tcp_ports or [],
expected_udp_ports=expected_udp_ports or []
)
site.updated_at = datetime.utcnow()
self.db.commit()
logger.info(
f"Expanded CIDR '{cidr}' for site {site_id} ('{site.name}'): "
f"added {ip_count} IPs, skipped {len(ips_skipped)} duplicates"
)
return {
'cidr': cidr,
'ip_count': ip_count,
'ips_added': ips_added,
'ips_skipped': ips_skipped
}
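A usage sketch (the session and site ID are assumptions):

svc = SiteService(session)
result = svc.bulk_add_ips_from_cidr(site_id=1, cidr='10.0.0.0/30',
                                    expected_ping=True,
                                    expected_tcp_ports=[22, 443])
print(result['ip_count'])        # 2: hosts() drops network/broadcast for a /30

svc.bulk_add_ips_from_cidr(site_id=1, cidr='10.0.0.0/16')
# -> ValueError: anything larger than /24 is rejected for IPv4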
def bulk_add_ips_from_list(self, site_id: int, ip_list: List[str],
expected_ping: Optional[bool] = None,
expected_tcp_ports: Optional[List[int]] = None,
expected_udp_ports: Optional[List[int]] = None) -> Dict[str, Any]:
"""
Add multiple IPs from a list (e.g., from CSV/text import).
Args:
site_id: Site ID
ip_list: List of IP addresses as strings
expected_ping: Expected ping response for all IPs
expected_tcp_ports: List of expected TCP ports for all IPs
expected_udp_ports: List of expected UDP ports for all IPs
Returns:
Dictionary with:
- ip_count: Number of IPs successfully created
- ips_added: List of IP addresses created
- ips_skipped: List of IPs that already existed
- errors: List of validation errors {ip: error_message}
Raises:
ValueError: If site not found
"""
site = self.db.query(Site).filter(Site.id == site_id).first()
if not site:
raise ValueError(f"Site with id {site_id} not found")
ips_added = []
ips_skipped = []
errors = []
for ip_str in ip_list:
ip_str = ip_str.strip()
if not ip_str:
continue # Skip empty lines
# Validate IP format
try:
ipaddress.ip_address(ip_str)
except ValueError as e:
errors.append({'ip': ip_str, 'error': f"Invalid IP address: {str(e)}"})
continue
# Check for duplicate (across all IPs in the site)
existing = (
self.db.query(SiteIP)
.filter(SiteIP.site_id == site_id, SiteIP.ip_address == ip_str)
.first()
)
if existing:
ips_skipped.append(ip_str)
continue
# Create IP record
try:
ip_obj = SiteIP(
site_id=site_id,
ip_address=ip_str,
expected_ping=expected_ping,
expected_tcp_ports=json.dumps(expected_tcp_ports or []),
expected_udp_ports=json.dumps(expected_udp_ports or []),
created_at=datetime.utcnow()
)
self.db.add(ip_obj)
ips_added.append(ip_str)
except Exception as e:
errors.append({'ip': ip_str, 'error': f"Database error: {str(e)}"})
site.updated_at = datetime.utcnow()
self.db.commit()
logger.info(
f"Bulk added {len(ips_added)} IPs to site {site_id} ('{site.name}'), "
f"skipped {len(ips_skipped)} duplicates, {len(errors)} errors"
)
return {
'ip_count': len(ips_added),
'ips_added': ips_added,
'ips_skipped': ips_skipped,
'errors': errors
}
def add_standalone_ip(self, site_id: int, ip_address: str,
expected_ping: Optional[bool] = None,
expected_tcp_ports: Optional[List[int]] = None,
expected_udp_ports: Optional[List[int]] = None) -> Dict[str, Any]:
"""
Add a standalone IP (without a CIDR parent) to a site.
Args:
site_id: Site ID
ip_address: IP address to add
expected_ping: Expected ping response
expected_tcp_ports: List of expected TCP ports
expected_udp_ports: List of expected UDP ports
Returns:
Dictionary with IP data
Raises:
ValueError: If site not found, IP is invalid, or already exists
"""
site = self.db.query(Site).filter(Site.id == site_id).first()
if not site:
raise ValueError(f"Site with id {site_id} not found")
# Validate IP format
try:
ipaddress.ip_address(ip_address)
except ValueError as e:
raise ValueError(f"Invalid IP address '{ip_address}': {str(e)}")
# Check for duplicate (across all IPs in the site)
existing = (
self.db.query(SiteIP)
.filter(SiteIP.site_id == site_id, SiteIP.ip_address == ip_address)
.first()
)
if existing:
raise ValueError(f"IP '{ip_address}' already exists in this site")
# Create IP
ip_obj = SiteIP(
site_id=site_id,
ip_address=ip_address,
expected_ping=expected_ping,
expected_tcp_ports=json.dumps(expected_tcp_ports or []),
expected_udp_ports=json.dumps(expected_udp_ports or []),
created_at=datetime.utcnow()
)
self.db.add(ip_obj)
site.updated_at = datetime.utcnow()
self.db.commit()
self.db.refresh(ip_obj)
logger.info(f"Added IP '{ip_address}' to site {site_id} ('{site.name}')")
return self._ip_to_dict(ip_obj)
def update_ip_settings(self, site_id: int, ip_id: int,
expected_ping: Optional[bool] = None,
expected_tcp_ports: Optional[List[int]] = None,
expected_udp_ports: Optional[List[int]] = None) -> Dict[str, Any]:
"""
Update settings for an individual IP.
Args:
site_id: Site ID
ip_id: IP ID to update
expected_ping: New ping expectation (if provided)
expected_tcp_ports: New TCP ports expectation (if provided)
expected_udp_ports: New UDP ports expectation (if provided)
Returns:
Dictionary with updated IP data
Raises:
ValueError: If IP not found
"""
ip_obj = (
self.db.query(SiteIP)
.filter(SiteIP.id == ip_id, SiteIP.site_id == site_id)
.first()
)
if not ip_obj:
raise ValueError(f"IP with id {ip_id} not found for site {site_id}")
# Update settings if provided
if expected_ping is not None:
ip_obj.expected_ping = expected_ping
if expected_tcp_ports is not None:
ip_obj.expected_tcp_ports = json.dumps(expected_tcp_ports)
if expected_udp_ports is not None:
ip_obj.expected_udp_ports = json.dumps(expected_udp_ports)
self.db.commit()
self.db.refresh(ip_obj)
logger.info(f"Updated settings for IP '{ip_obj.ip_address}' in site {site_id}")
return self._ip_to_dict(ip_obj)
def remove_ip(self, site_id: int, ip_id: int) -> None:
"""
Remove an IP from a site.
Args:
site_id: Site ID
ip_id: IP ID to remove
Raises:
ValueError: If IP not found
"""
ip_obj = (
self.db.query(SiteIP)
.filter(SiteIP.id == ip_id, SiteIP.site_id == site_id)
.first()
)
if not ip_obj:
raise ValueError(f"IP with id {ip_id} not found for site {site_id}")
ip_address = ip_obj.ip_address
self.db.delete(ip_obj)
self.db.commit()
logger.info(f"Removed IP '{ip_address}' from site {site_id}")
def list_ips(self, site_id: int, page: int = 1, per_page: int = 50) -> PaginatedResult:
"""
List IPs in a site with pagination.
Args:
site_id: Site ID
page: Page number (1-indexed)
per_page: Number of items per page
Returns:
PaginatedResult with IP data
"""
query = (
self.db.query(SiteIP)
.filter(SiteIP.site_id == site_id)
.order_by(SiteIP.ip_address)
)
return paginate(query, page, per_page, self._ip_to_dict)
def get_scan_usage(self, site_id: int) -> List[Dict[str, Any]]:
"""
Get list of scans that use this site.
Args:
site_id: Site ID
Returns:
List of scan dictionaries
"""
from web.models import Scan # Import here to avoid circular dependency
associations = (
self.db.query(ScanSiteAssociation)
.options(joinedload(ScanSiteAssociation.scan))
.filter(ScanSiteAssociation.site_id == site_id)
.all()
)
return [
{
'id': assoc.scan.id,
'title': assoc.scan.title,
'timestamp': assoc.scan.timestamp.isoformat() if assoc.scan.timestamp else None,
'status': assoc.scan.status
}
for assoc in associations
]
# Private helper methods
def _expand_cidr_to_ips(self, site_id: int,
network: ipaddress.IPv4Network | ipaddress.IPv6Network,
expected_ping: Optional[bool],
expected_tcp_ports: List[int],
expected_udp_ports: List[int]) -> tuple[int, List[str], List[str]]:
"""
Expand a CIDR to individual IP addresses.
Args:
site_id: Site ID
network: ipaddress network object
expected_ping: Default ping setting for all IPs
expected_tcp_ports: Default TCP ports for all IPs
expected_udp_ports: Default UDP ports for all IPs
Returns:
Tuple of (count of IPs created, list of IPs added, list of IPs skipped)
"""
ip_count = 0
ips_added = []
ips_skipped = []
# For /32 or /128 (single host), use the network address
# For larger ranges, use hosts() to exclude network/broadcast addresses
if network.num_addresses == 1:
ip_list = [network.network_address]
elif network.num_addresses == 2:
# For /31 networks (point-to-point), both addresses are usable
ip_list = [network.network_address, network.broadcast_address]
else:
# Use hosts() to get usable IPs (excludes network and broadcast)
ip_list = list(network.hosts())
for ip in ip_list:
ip_str = str(ip)
# Check for duplicate
existing = (
self.db.query(SiteIP)
.filter(SiteIP.site_id == site_id, SiteIP.ip_address == ip_str)
.first()
)
if existing:
ips_skipped.append(ip_str)
continue
# Create SiteIP entry
ip_obj = SiteIP(
site_id=site_id,
ip_address=ip_str,
expected_ping=expected_ping,
expected_tcp_ports=json.dumps(expected_tcp_ports),
expected_udp_ports=json.dumps(expected_udp_ports),
created_at=datetime.utcnow()
)
self.db.add(ip_obj)
ips_added.append(ip_str)
ip_count += 1
return ip_count, ips_added, ips_skipped
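The expansion rules in isolation, using only the standard library:

import ipaddress

for cidr in ('10.0.0.5/32', '10.0.0.0/31', '10.0.0.0/29'):
    net = ipaddress.ip_network(cidr)
    if net.num_addresses == 1:          # single host
        ips = [net.network_address]
    elif net.num_addresses == 2:        # /31 point-to-point: both usable
        ips = [net.network_address, net.broadcast_address]
    else:                               # drop network/broadcast addresses
        ips = list(net.hosts())
    print(cidr, '->', len(ips))         # 1, 2, 6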
def _site_to_dict(self, site: Site) -> Dict[str, Any]:
"""Convert Site model to dictionary."""
# Count IPs for this site
ip_count = (
self.db.query(func.count(SiteIP.id))
.filter(SiteIP.site_id == site.id)
.scalar() or 0
)
return {
'id': site.id,
'name': site.name,
'description': site.description,
'created_at': site.created_at.isoformat() if site.created_at else None,
'updated_at': site.updated_at.isoformat() if site.updated_at else None,
'ip_count': ip_count
}
def _ip_to_dict(self, ip: SiteIP) -> Dict[str, Any]:
"""Convert SiteIP model to dictionary."""
return {
'id': ip.id,
'site_id': ip.site_id,
'ip_address': ip.ip_address,
'expected_ping': ip.expected_ping,
'expected_tcp_ports': json.loads(ip.expected_tcp_ports) if ip.expected_tcp_ports else [],
'expected_udp_ports': json.loads(ip.expected_udp_ports) if ip.expected_udp_ports else [],
'created_at': ip.created_at.isoformat() if ip.created_at else None
}


@@ -0,0 +1,294 @@
"""
Webhook Template Service
Provides Jinja2 template rendering for webhook payloads with a sandboxed
environment and comprehensive context building from scan/alert/rule data.
"""
from jinja2 import Environment, BaseLoader, TemplateError, meta
from jinja2.sandbox import SandboxedEnvironment
import json
from typing import Dict, Any, Optional, Tuple
from datetime import datetime
class TemplateService:
"""
Service for rendering webhook templates safely using Jinja2.
Features:
- Sandboxed Jinja2 environment to prevent code execution
- Rich context with alert, scan, rule, service, cert data
- Support for both JSON and text output formats
- Template validation and error handling
"""
def __init__(self):
"""Initialize the sandboxed Jinja2 environment."""
self.env = SandboxedEnvironment(
loader=BaseLoader(),
autoescape=False, # We control the output format
trim_blocks=True,
lstrip_blocks=True
)
# Add custom filters
self.env.filters['isoformat'] = self._isoformat_filter
def _isoformat_filter(self, value):
"""Custom filter to convert datetime to ISO format."""
if isinstance(value, datetime):
return value.isoformat()
return str(value)
def build_context(
self,
alert,
scan,
rule,
app_name: str = "SneakyScanner",
app_version: str = "1.0.0",
app_url: str = "https://github.com/sneakygeek/SneakyScan"
) -> Dict[str, Any]:
"""
Build the template context from alert, scan, and rule objects.
Args:
alert: Alert model instance
scan: Scan model instance
rule: AlertRule model instance
app_name: Application name
app_version: Application version
app_url: Application repository URL
Returns:
Dictionary with all available template variables
"""
context = {
"alert": {
"id": alert.id,
"type": alert.alert_type,
"severity": alert.severity,
"message": alert.message,
"ip_address": alert.ip_address,
"port": alert.port,
"acknowledged": alert.acknowledged,
"acknowledged_at": alert.acknowledged_at,
"acknowledged_by": alert.acknowledged_by,
"created_at": alert.created_at,
"email_sent": alert.email_sent,
"email_sent_at": alert.email_sent_at,
"webhook_sent": alert.webhook_sent,
"webhook_sent_at": alert.webhook_sent_at,
},
"scan": {
"id": scan.id,
"title": scan.title,
"timestamp": scan.timestamp,
"duration": scan.duration,
"status": scan.status,
"config_id": scan.config_id,
"triggered_by": scan.triggered_by,
"started_at": scan.started_at,
"completed_at": scan.completed_at,
"error_message": scan.error_message,
},
"rule": {
"id": rule.id,
"name": rule.name,
"type": rule.rule_type,
"threshold": rule.threshold,
"severity": rule.severity,
"enabled": rule.enabled,
},
"app": {
"name": app_name,
"version": app_version,
"url": app_url,
},
"timestamp": datetime.now(timezone.utc),
}
# Add service information if available (for service-related alerts)
# This would require additional context from the caller
# For now, we'll add placeholder support
context["service"] = None
context["cert"] = None
context["tls"] = None
return context
def render(
self,
template_string: str,
context: Dict[str, Any],
template_format: str = 'json'
) -> Tuple[Optional[str], Optional[str]]:
"""
Render a template with the given context.
Args:
template_string: The Jinja2 template string
context: Template context dictionary
template_format: Output format ('json' or 'text')
Returns:
Tuple of (rendered_output, error_message)
- If successful: (rendered_string, None)
- If failed: (None, error_message)
"""
try:
template = self.env.from_string(template_string)
rendered = template.render(context)
# For JSON format, validate that the output is valid JSON
if template_format == 'json':
try:
# Parse to validate JSON structure
json.loads(rendered)
except json.JSONDecodeError as e:
return None, f"Template rendered invalid JSON: {str(e)}"
return rendered, None
except TemplateError as e:
return None, f"Template rendering error: {str(e)}"
except Exception as e:
return None, f"Unexpected error rendering template: {str(e)}"
def validate_template(
self,
template_string: str,
template_format: str = 'json'
) -> Tuple[bool, Optional[str]]:
"""
Validate a template without rendering it.
Args:
template_string: The Jinja2 template string to validate
template_format: Expected output format ('json' or 'text')
Returns:
Tuple of (is_valid, error_message)
- If valid: (True, None)
- If invalid: (False, error_message)
"""
try:
# Parse the template to check syntax
self.env.parse(template_string)
# For JSON templates, check if it looks like valid JSON structure
# (this is a basic check - full validation happens during render)
if template_format == 'json':
# Just check for basic JSON structure markers
stripped = template_string.strip()
if not (stripped.startswith('{') or stripped.startswith('[')):
return False, "JSON template must start with { or ["
return True, None
except TemplateError as e:
return False, f"Template syntax error: {str(e)}"
except Exception as e:
return False, f"Template validation error: {str(e)}"
def get_template_variables(self, template_string: str) -> set:
"""
Extract all variables used in a template.
Args:
template_string: The Jinja2 template string
Returns:
Set of variable names used in the template
"""
try:
ast = self.env.parse(template_string)
return meta.find_undeclared_variables(ast)
except Exception:
return set()
def render_test_payload(
self,
template_string: str,
template_format: str = 'json'
) -> Tuple[Optional[str], Optional[str]]:
"""
Render a template with sample/test data for preview purposes.
Args:
template_string: The Jinja2 template string
template_format: Output format ('json' or 'text')
Returns:
Tuple of (rendered_output, error_message)
"""
# Create sample context data
sample_context = {
"alert": {
"id": 123,
"type": "unexpected_port",
"severity": "warning",
"message": "Unexpected port 8080 found open on 192.168.1.100",
"ip_address": "192.168.1.100",
"port": 8080,
"acknowledged": False,
"acknowledged_at": None,
"acknowledged_by": None,
"created_at": datetime.now(timezone.utc),
"email_sent": False,
"email_sent_at": None,
"webhook_sent": False,
"webhook_sent_at": None,
},
"scan": {
"id": 456,
"title": "Production Infrastructure Scan",
"timestamp": datetime.now(timezone.utc),
"duration": 125.5,
"status": "completed",
"config_id": 1,
"triggered_by": "schedule",
"started_at": datetime.now(timezone.utc),
"completed_at": datetime.now(timezone.utc),
"error_message": None,
},
"rule": {
"id": 789,
"name": "Unexpected Port Detection",
"type": "unexpected_port",
"threshold": None,
"severity": "warning",
"enabled": True,
},
"service": {
"name": "http",
"product": "nginx",
"version": "1.20.0",
},
"cert": {
"subject": "CN=example.com",
"issuer": "CN=Let's Encrypt Authority X3",
"days_until_expiry": 15,
},
"app": {
"name": "SneakyScanner",
"version": "1.0.0-phase5",
"url": "https://github.com/sneakygeek/SneakyScan",
},
"timestamp": datetime.now(timezone.utc),
}
return self.render(template_string, sample_context, template_format)
# Singleton instance
_template_service = None
def get_template_service() -> TemplateService:
"""Get the singleton TemplateService instance."""
global _template_service
if _template_service is None:
_template_service = TemplateService()
return _template_service
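A short usage sketch for the service above, assuming this module is importable as template_service; the template string is an example, not a shipped default:

from template_service import get_template_service  # assumed import path

svc = get_template_service()
tmpl = '{"alert_id": {{ alert.id }}, "severity": "{{ alert.severity }}"}'

# Validate syntax and basic JSON structure without rendering
ok, err = svc.validate_template(tmpl, template_format='json')
assert ok, err

# Render against the built-in sample data for a preview
rendered, err = svc.render_test_payload(tmpl, template_format='json')
print(rendered)  # {"alert_id": 123, "severity": "warning"}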


@@ -0,0 +1,566 @@
"""
Webhook Service Module
Handles webhook delivery for alert notifications with retry logic,
authentication support, and comprehensive logging.
"""
import json
import logging
import time
from datetime import datetime, timezone
from typing import List, Dict, Optional, Any, Tuple
from sqlalchemy.orm import Session
import requests
from requests.auth import HTTPBasicAuth
from cryptography.fernet import Fernet
import os
from ..models import Webhook, WebhookDeliveryLog, Alert, AlertRule, Scan
from .template_service import get_template_service
from ..config import APP_NAME, APP_VERSION, REPO_URL
logger = logging.getLogger(__name__)
class WebhookService:
"""
Service for webhook delivery and management.
Handles queuing webhook deliveries, executing HTTP requests with
authentication, retry logic, and logging delivery attempts.
"""
def __init__(self, db_session: Session, encryption_key: Optional[bytes] = None):
"""
Initialize webhook service.
Args:
db_session: SQLAlchemy database session
encryption_key: Fernet encryption key for auth_token encryption
"""
self.db = db_session
self._encryption_key = encryption_key or self._get_encryption_key()
self._cipher = Fernet(self._encryption_key) if self._encryption_key else None
def _get_encryption_key(self) -> Optional[bytes]:
"""
Get encryption key from environment or database.
Returns:
Fernet encryption key or None if not available
"""
# Try environment variable first
key_str = os.environ.get('SNEAKYSCANNER_ENCRYPTION_KEY')
if key_str:
return key_str.encode()
# Try to get from settings (would need to query Setting table)
# For now, generate a temporary key if none exists
try:
return Fernet.generate_key()
except Exception as e:
logger.warning(f"Could not generate encryption key: {e}")
return None
def _encrypt_value(self, value: str) -> str:
"""Encrypt a string value."""
if not self._cipher:
return value # Return plain text if encryption not available
return self._cipher.encrypt(value.encode()).decode()
def _decrypt_value(self, encrypted_value: str) -> str:
"""Decrypt an encrypted string value."""
if not self._cipher:
return encrypted_value # Return as-is if encryption not available
try:
return self._cipher.decrypt(encrypted_value.encode()).decode()
except Exception as e:
logger.error(f"Failed to decrypt value: {e}")
return encrypted_value
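Note the trade-off in the fallback above: a key from Fernet.generate_key() lives only as long as the process, so anything encrypted under it is unrecoverable after a restart. A quick round-trip sketch with the standard cryptography API:

from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()        # per-process fallback key
cipher = Fernet(key)

token = cipher.encrypt(b"user:password")
assert cipher.decrypt(token) == b"user:password"

# A different (e.g. post-restart) generated key cannot decrypt the token,
# which is why a persisted SNEAKYSCANNER_ENCRYPTION_KEY takes priority.
try:
    Fernet(Fernet.generate_key()).decrypt(token)
except InvalidToken:
    print("undecryptable without the original key")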
def get_matching_webhooks(self, alert: Alert) -> List[Webhook]:
"""
Get all enabled webhooks that match an alert's type and severity.
Args:
alert: Alert object to match against
Returns:
List of matching Webhook objects
"""
# Get all enabled webhooks
webhooks = self.db.query(Webhook).filter(Webhook.enabled == True).all()
matching_webhooks = []
for webhook in webhooks:
# Check if webhook matches alert type filter
if webhook.alert_types:
try:
alert_types = json.loads(webhook.alert_types)
if alert.alert_type not in alert_types:
continue # Skip if alert type doesn't match
except json.JSONDecodeError:
logger.warning(f"Invalid alert_types JSON for webhook {webhook.id}")
continue
# Check if webhook matches severity filter
if webhook.severity_filter:
try:
severity_filter = json.loads(webhook.severity_filter)
if alert.severity not in severity_filter:
continue # Skip if severity doesn't match
except json.JSONDecodeError:
logger.warning(f"Invalid severity_filter JSON for webhook {webhook.id}")
continue
matching_webhooks.append(webhook)
logger.info(f"Found {len(matching_webhooks)} matching webhooks for alert {alert.id}")
return matching_webhooks
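Since alert_types and severity_filter are stored as JSON-encoded lists, the matching above reduces to two membership tests after json.loads. A standalone sketch with illustrative column values:

import json

alert_type, severity = "unexpected_port", "warning"
stored_alert_types = '["unexpected_port", "cert_expiry"]'  # example column value
stored_severity_filter = '["critical", "warning"]'

matches = (
    alert_type in json.loads(stored_alert_types)
    and severity in json.loads(stored_severity_filter)
)
print(matches)  # True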
def queue_webhook_delivery(self, webhook_id: int, alert_id: int, scheduler_service=None) -> bool:
"""
Queue a webhook delivery for async execution via APScheduler.
Args:
webhook_id: ID of webhook to deliver
alert_id: ID of alert to send
scheduler_service: SchedulerService instance (if None, deliver synchronously)
Returns:
True if queued successfully, False otherwise
"""
if scheduler_service and scheduler_service.scheduler:
try:
# Import here to avoid circular dependency
from web.jobs.webhook_job import execute_webhook_delivery
# Schedule immediate execution
scheduler_service.scheduler.add_job(
execute_webhook_delivery,
args=[webhook_id, alert_id, scheduler_service.db_url],
id=f"webhook_{webhook_id}_{alert_id}_{int(time.time())}",
replace_existing=False
)
logger.info(f"Queued webhook {webhook_id} for alert {alert_id}")
return True
except Exception as e:
logger.error(f"Failed to queue webhook delivery: {e}")
# Fall back to synchronous delivery
return self.deliver_webhook(webhook_id, alert_id)
else:
# No scheduler available, deliver synchronously
logger.info(f"No scheduler available, delivering webhook {webhook_id} synchronously")
return self.deliver_webhook(webhook_id, alert_id)
def deliver_webhook(self, webhook_id: int, alert_id: int, attempt_number: int = 1) -> bool:
"""
Deliver a webhook with retry logic.
Args:
webhook_id: ID of webhook to deliver
alert_id: ID of alert to send
attempt_number: Current attempt number (for retries)
Returns:
True if delivered successfully, False otherwise
"""
# Get webhook and alert
webhook = self.db.query(Webhook).filter(Webhook.id == webhook_id).first()
if not webhook:
logger.error(f"Webhook {webhook_id} not found")
return False
alert = self.db.query(Alert).filter(Alert.id == alert_id).first()
if not alert:
logger.error(f"Alert {alert_id} not found")
return False
logger.info(f"Delivering webhook {webhook_id} for alert {alert_id} (attempt {attempt_number}/{webhook.retry_count})")
# Build payload with template support
payload, content_type = self._build_payload(webhook, alert)
# Prepare headers
headers = {'Content-Type': content_type}
# Add custom headers if provided
if webhook.custom_headers:
try:
custom_headers = json.loads(webhook.custom_headers)
headers.update(custom_headers)
except json.JSONDecodeError:
logger.warning(f"Invalid custom_headers JSON for webhook {webhook_id}")
# Prepare authentication
auth = None
if webhook.auth_type == 'bearer' and webhook.auth_token:
decrypted_token = self._decrypt_value(webhook.auth_token)
headers['Authorization'] = f'Bearer {decrypted_token}'
elif webhook.auth_type == 'basic' and webhook.auth_token:
# Expecting format: "username:password"
decrypted_token = self._decrypt_value(webhook.auth_token)
if ':' in decrypted_token:
username, password = decrypted_token.split(':', 1)
auth = HTTPBasicAuth(username, password)
# Execute HTTP request
try:
timeout = webhook.timeout or 10
# Use appropriate parameter based on payload type
if isinstance(payload, dict):
# JSON payload
response = requests.post(
webhook.url,
json=payload,
headers=headers,
auth=auth,
timeout=timeout
)
else:
# Text payload
response = requests.post(
webhook.url,
data=payload,
headers=headers,
auth=auth,
timeout=timeout
)
# Log delivery attempt
log_entry = WebhookDeliveryLog(
webhook_id=webhook_id,
alert_id=alert_id,
status='success' if response.status_code < 400 else 'failed',
response_code=response.status_code,
response_body=response.text[:1000], # Limit to 1000 chars
error_message=None if response.status_code < 400 else f"HTTP {response.status_code}",
attempt_number=attempt_number,
delivered_at=datetime.now(timezone.utc)
)
self.db.add(log_entry)
# Update alert webhook status if successful
if response.status_code < 400:
alert.webhook_sent = True
alert.webhook_sent_at = datetime.now(timezone.utc)
logger.info(f"Webhook {webhook_id} delivered successfully (HTTP {response.status_code})")
self.db.commit()
return True
else:
# Failed but got a response
logger.warning(f"Webhook {webhook_id} failed with HTTP {response.status_code}")
self.db.commit()
# Retry if attempts remaining
if attempt_number < webhook.retry_count:
delay = self._calculate_retry_delay(attempt_number)
logger.info(f"Retrying webhook {webhook_id} in {delay} seconds")
time.sleep(delay)
return self.deliver_webhook(webhook_id, alert_id, attempt_number + 1)
return False
except requests.exceptions.Timeout:
error_msg = f"Request timeout after {timeout} seconds"
logger.error(f"Webhook {webhook_id} timeout: {error_msg}")
self._log_delivery_failure(webhook_id, alert_id, error_msg, attempt_number)
except requests.exceptions.ConnectionError as e:
error_msg = f"Connection error: {str(e)}"
logger.error(f"Webhook {webhook_id} connection error: {error_msg}")
self._log_delivery_failure(webhook_id, alert_id, error_msg, attempt_number)
except requests.exceptions.RequestException as e:
error_msg = f"Request error: {str(e)}"
logger.error(f"Webhook {webhook_id} request error: {error_msg}")
self._log_delivery_failure(webhook_id, alert_id, error_msg, attempt_number)
except Exception as e:
error_msg = f"Unexpected error: {str(e)}"
logger.error(f"Webhook {webhook_id} unexpected error: {error_msg}")
self._log_delivery_failure(webhook_id, alert_id, error_msg, attempt_number)
# Retry if attempts remaining
if attempt_number < webhook.retry_count:
delay = self._calculate_retry_delay(attempt_number)
logger.info(f"Retrying webhook {webhook_id} in {delay} seconds")
time.sleep(delay)
return self.deliver_webhook(webhook_id, alert_id, attempt_number + 1)
return False
def _log_delivery_failure(self, webhook_id: int, alert_id: int, error_message: str, attempt_number: int):
"""Log a failed delivery attempt."""
log_entry = WebhookDeliveryLog(
webhook_id=webhook_id,
alert_id=alert_id,
status='failed',
response_code=None,
response_body=None,
error_message=error_message[:500], # Limit error message length
attempt_number=attempt_number,
delivered_at=datetime.now(timezone.utc)
)
self.db.add(log_entry)
self.db.commit()
def _calculate_retry_delay(self, attempt_number: int) -> int:
"""
Calculate exponential backoff delay for retries.
Args:
attempt_number: Current attempt number
Returns:
Delay in seconds
"""
# Exponential backoff: 2^attempt seconds (2, 4, 8, 16...)
return min(2 ** attempt_number, 60) # Cap at 60 seconds
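With attempt numbering starting at 1 (as in deliver_webhook above), the resulting schedule is easy to tabulate (illustrative check):

# Delays for attempts 1..6: exponential, capped at 60 seconds
print([min(2 ** n, 60) for n in range(1, 7)])  # [2, 4, 8, 16, 32, 60]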
def _build_payload(self, webhook: Webhook, alert: Alert) -> Tuple[Any, str]:
"""
Build payload for webhook delivery using template if configured.
Args:
webhook: Webhook object with optional template configuration
alert: Alert object
Returns:
Tuple of (payload, content_type):
- payload: Rendered payload (dict for JSON, string for text)
- content_type: Content-Type header value
"""
# Get related scan
scan = self.db.query(Scan).filter(Scan.id == alert.scan_id).first()
# Get related rule
rule = self.db.query(AlertRule).filter(AlertRule.id == alert.rule_id).first()
# If webhook has a custom template, use it
if webhook.template:
template_service = get_template_service()
context = template_service.build_context(
alert=alert,
scan=scan,
rule=rule,
app_name=APP_NAME,
app_version=APP_VERSION,
app_url=REPO_URL
)
rendered, error = template_service.render(
webhook.template,
context,
webhook.template_format or 'json'
)
if error:
logger.error(f"Template rendering error for webhook {webhook.id}: {error}")
# Fall back to default payload
return self._build_default_payload(alert, scan, rule), 'application/json'
# Determine content type
if webhook.content_type_override:
content_type = webhook.content_type_override
elif webhook.template_format == 'text':
content_type = 'text/plain'
else:
content_type = 'application/json'
# For JSON format, parse the rendered string back to a dict
# For text format, return as string
if webhook.template_format == 'json':
try:
payload = json.loads(rendered)
except json.JSONDecodeError:
logger.error(f"Failed to parse rendered JSON template for webhook {webhook.id}")
return self._build_default_payload(alert, scan, rule), 'application/json'
else:
payload = rendered
return payload, content_type
# No template - use default payload
return self._build_default_payload(alert, scan, rule), 'application/json'
def _build_default_payload(self, alert: Alert, scan: Optional[Scan], rule: Optional[AlertRule]) -> Dict[str, Any]:
"""
Build default JSON payload for webhook delivery.
Args:
alert: Alert object
scan: Scan object (optional)
rule: AlertRule object (optional)
Returns:
Dict containing alert details in generic JSON format
"""
payload = {
"event": "alert.created",
"alert": {
"id": alert.id,
"type": alert.alert_type,
"severity": alert.severity,
"message": alert.message,
"ip_address": alert.ip_address,
"port": alert.port,
"acknowledged": alert.acknowledged,
"created_at": alert.created_at.isoformat() if alert.created_at else None
},
"scan": {
"id": scan.id if scan else None,
"title": scan.title if scan else None,
"timestamp": scan.timestamp.isoformat() if scan and scan.timestamp else None,
"status": scan.status if scan else None
},
"rule": {
"id": rule.id if rule else None,
"name": rule.name if rule else None,
"type": rule.rule_type if rule else None,
"threshold": rule.threshold if rule else None
}
}
return payload
def test_webhook(self, webhook_id: int) -> Dict[str, Any]:
"""
Send a test payload to a webhook.
Args:
webhook_id: ID of webhook to test
Returns:
Dict with test result details
"""
webhook = self.db.query(Webhook).filter(Webhook.id == webhook_id).first()
if not webhook:
return {
'success': False,
'message': 'Webhook not found',
'status_code': None
}
# Build test payload - use template if configured
if webhook.template:
template_service = get_template_service()
rendered, error = template_service.render_test_payload(
webhook.template,
webhook.template_format or 'json'
)
if error:
return {
'success': False,
'message': f'Template error: {error}',
'status_code': None
}
# Determine content type
if webhook.content_type_override:
content_type = webhook.content_type_override
elif webhook.template_format == 'text':
content_type = 'text/plain'
else:
content_type = 'application/json'
# Parse JSON template
if webhook.template_format == 'json':
try:
payload = json.loads(rendered)
except json.JSONDecodeError:
return {
'success': False,
'message': 'Template rendered invalid JSON',
'status_code': None
}
else:
payload = rendered
else:
# Default test payload
payload = {
"event": "webhook.test",
"message": "This is a test webhook from SneakyScanner",
"timestamp": datetime.now(timezone.utc).isoformat(),
"webhook": {
"id": webhook.id,
"name": webhook.name
}
}
content_type = 'application/json'
# Prepare headers
headers = {'Content-Type': content_type}
if webhook.custom_headers:
try:
custom_headers = json.loads(webhook.custom_headers)
headers.update(custom_headers)
except json.JSONDecodeError:
pass
# Prepare authentication
auth = None
if webhook.auth_type == 'bearer' and webhook.auth_token:
decrypted_token = self._decrypt_value(webhook.auth_token)
headers['Authorization'] = f'Bearer {decrypted_token}'
elif webhook.auth_type == 'basic' and webhook.auth_token:
decrypted_token = self._decrypt_value(webhook.auth_token)
if ':' in decrypted_token:
username, password = decrypted_token.split(':', 1)
auth = HTTPBasicAuth(username, password)
# Execute test request
try:
timeout = webhook.timeout or 10
# Use appropriate parameter based on payload type
if isinstance(payload, dict):
# JSON payload
response = requests.post(
webhook.url,
json=payload,
headers=headers,
auth=auth,
timeout=timeout
)
else:
# Text payload
response = requests.post(
webhook.url,
data=payload,
headers=headers,
auth=auth,
timeout=timeout
)
return {
'success': response.status_code < 400,
'message': f'HTTP {response.status_code}',
'status_code': response.status_code,
'response_body': response.text[:500]
}
except requests.exceptions.Timeout:
return {
'success': False,
'message': f'Request timeout after {timeout} seconds',
'status_code': None
}
except requests.exceptions.ConnectionError as e:
return {
'success': False,
'message': f'Connection error: {str(e)}',
'status_code': None
}
except Exception as e:
return {
'success': False,
'message': f'Error: {str(e)}',
'status_code': None
}


@@ -0,0 +1,507 @@
/**
* Config Manager Styles
* Phase 4: Config Creator - CSS styling for config management UI
*/
/* ============================================
Dropzone Styling
============================================ */
.dropzone {
border: 2px dashed #6c757d;
border-radius: 8px;
padding: 40px 20px;
text-align: center;
cursor: pointer;
transition: all 0.3s ease;
background-color: #1e293b;
min-height: 200px;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
}
.dropzone:hover {
border-color: #0d6efd;
background-color: #2d3748;
}
.dropzone.dragover {
border-color: #0d6efd;
background-color: #1a365d;
border-width: 3px;
}
.dropzone i {
font-size: 48px;
color: #94a3b8;
margin-bottom: 16px;
display: block;
}
.dropzone p {
color: #cbd5e0;
margin: 0;
font-size: 1rem;
}
.dropzone:hover i {
color: #0d6efd;
}
/* ============================================
Preview Pane Styling
============================================ */
#yaml-preview {
background-color: #1e293b;
border-radius: 8px;
padding: 16px;
}
#yaml-preview pre {
background-color: #0f172a;
border: 1px solid #334155;
border-radius: 6px;
padding: 16px;
max-height: 500px;
overflow-y: auto;
margin: 0;
}
#yaml-preview pre code {
color: #e2e8f0;
font-family: 'Monaco', 'Menlo', 'Ubuntu Mono', monospace;
font-size: 0.9rem;
line-height: 1.6;
white-space: pre;
}
#preview-placeholder {
background-color: #1e293b;
border: 2px dashed #475569;
border-radius: 8px;
padding: 60px 20px;
text-align: center;
color: #94a3b8;
}
#preview-placeholder i {
font-size: 3rem;
margin-bottom: 1rem;
display: block;
opacity: 0.5;
}
/* ============================================
Config Table Styling
============================================ */
#configs-table {
background-color: #1e293b;
border-radius: 8px;
overflow: hidden;
}
#configs-table thead {
background-color: #0f172a;
border-bottom: 2px solid #334155;
}
#configs-table thead th {
color: #cbd5e0;
font-weight: 600;
padding: 12px 16px;
border: none;
}
#configs-table tbody tr {
border-bottom: 1px solid #334155;
transition: background-color 0.2s ease;
}
#configs-table tbody tr:hover {
background-color: #2d3748;
}
#configs-table tbody td {
padding: 12px 16px;
color: #e2e8f0;
vertical-align: middle;
border: none;
}
#configs-table tbody td code {
background-color: #0f172a;
padding: 2px 6px;
border-radius: 4px;
color: #60a5fa;
font-size: 0.9rem;
}
/* ============================================
Action Buttons
============================================ */
.config-actions {
white-space: nowrap;
}
.config-actions .btn {
margin-right: 4px;
padding: 4px 8px;
font-size: 0.875rem;
}
.config-actions .btn:last-child {
margin-right: 0;
}
.config-actions .btn i {
font-size: 1rem;
}
/* Disabled button styling */
.config-actions .btn:disabled {
opacity: 0.4;
cursor: not-allowed;
}
/* ============================================
Schedule Badge
============================================ */
.schedule-badge {
display: inline-block;
background-color: #3b82f6;
color: white;
padding: 4px 10px;
border-radius: 12px;
font-size: 0.8rem;
font-weight: 600;
min-width: 24px;
text-align: center;
cursor: help;
}
.schedule-badge:hover {
background-color: #2563eb;
}
/* ============================================
Search Box
============================================ */
#search {
background-color: #1e293b;
border: 1px solid #475569;
color: #e2e8f0;
padding: 8px 12px;
border-radius: 6px;
transition: border-color 0.2s ease;
}
#search:focus {
background-color: #0f172a;
border-color: #3b82f6;
color: #e2e8f0;
outline: none;
box-shadow: 0 0 0 3px rgba(59, 130, 246, 0.1);
}
#search::placeholder {
color: #64748b;
}
/* ============================================
Alert Messages
============================================ */
.alert {
border-radius: 8px;
padding: 12px 16px;
margin-bottom: 16px;
}
.alert-danger {
background-color: #7f1d1d;
border: 1px solid #991b1b;
color: #fecaca;
}
.alert-success {
background-color: #14532d;
border: 1px solid #166534;
color: #86efac;
}
.alert i {
margin-right: 8px;
}
/* ============================================
Card Styling
============================================ */
.card {
background-color: #1e293b;
border: 1px solid #334155;
border-radius: 8px;
margin-bottom: 20px;
}
.card-body {
padding: 20px;
}
.card h5 {
color: #cbd5e0;
margin-bottom: 16px;
}
.card .text-muted {
color: #94a3b8 !important;
}
/* ============================================
Tab Navigation
============================================ */
.nav-tabs {
border-bottom: 2px solid #334155;
}
.nav-tabs .nav-link {
color: #94a3b8;
border: none;
border-bottom: 2px solid transparent;
padding: 12px 20px;
transition: all 0.2s ease;
}
.nav-tabs .nav-link:hover {
color: #cbd5e0;
background-color: #2d3748;
border-color: transparent;
}
.nav-tabs .nav-link.active {
color: #60a5fa;
background-color: transparent;
border-color: transparent transparent #60a5fa transparent;
}
/* ============================================
Buttons
============================================ */
.btn {
border-radius: 6px;
padding: 8px 16px;
font-weight: 500;
transition: all 0.2s ease;
}
.btn-primary {
background-color: #3b82f6;
border-color: #3b82f6;
}
.btn-primary:hover {
background-color: #2563eb;
border-color: #2563eb;
}
.btn-success {
background-color: #22c55e;
border-color: #22c55e;
}
.btn-success:hover {
background-color: #16a34a;
border-color: #16a34a;
}
.btn-outline-secondary {
color: #94a3b8;
border-color: #475569;
}
.btn-outline-secondary:hover {
background-color: #475569;
border-color: #475569;
color: #e2e8f0;
}
.btn-outline-primary {
color: #60a5fa;
border-color: #3b82f6;
}
.btn-outline-primary:hover {
background-color: #3b82f6;
border-color: #3b82f6;
color: white;
}
.btn-outline-danger {
color: #f87171;
border-color: #dc2626;
}
.btn-outline-danger:hover {
background-color: #dc2626;
border-color: #dc2626;
color: white;
}
/* ============================================
Modal Styling
============================================ */
.modal-content {
background-color: #1e293b;
border: 1px solid #334155;
color: #e2e8f0;
}
.modal-header {
border-bottom: 1px solid #334155;
}
.modal-footer {
border-top: 1px solid #334155;
}
.modal-title {
color: #cbd5e0;
}
.btn-close {
filter: invert(1);
}
/* ============================================
Spinner/Loading
============================================ */
.spinner-border {
color: #3b82f6;
}
/* ============================================
Responsive Adjustments
============================================ */
@media (max-width: 768px) {
#configs-table {
font-size: 0.875rem;
}
#configs-table thead th,
#configs-table tbody td {
padding: 8px 12px;
}
.config-actions .btn {
padding: 2px 6px;
margin-right: 2px;
}
.config-actions .btn i {
font-size: 0.9rem;
}
.dropzone {
padding: 30px 15px;
min-height: 150px;
}
.dropzone i {
font-size: 36px;
}
#yaml-preview pre {
max-height: 300px;
font-size: 0.8rem;
}
}
@media (max-width: 576px) {
/* Stack table cells on very small screens */
#configs-table thead {
display: none;
}
#configs-table tbody tr {
display: block;
margin-bottom: 16px;
border: 1px solid #334155;
border-radius: 8px;
padding: 12px;
}
#configs-table tbody td {
display: block;
text-align: left;
padding: 6px 0;
border: none;
}
#configs-table tbody td:before {
content: attr(data-label);
font-weight: 600;
color: #94a3b8;
display: inline-block;
width: 100px;
}
.config-actions {
margin-top: 8px;
}
}
/* ============================================
Utility Classes
============================================ */
.text-center {
text-align: center;
}
.py-4 {
padding-top: 1.5rem;
padding-bottom: 1.5rem;
}
.py-5 {
padding-top: 3rem;
padding-bottom: 3rem;
}
.mt-2 {
margin-top: 0.5rem;
}
.mt-3 {
margin-top: 1rem;
}
.mb-3 {
margin-bottom: 1rem;
}
.mb-4 {
margin-bottom: 1.5rem;
}
/* ============================================
Result Count Display
============================================ */
#result-count {
color: #94a3b8;
font-size: 0.9rem;
font-weight: 500;
}

File diff suppressed because it is too large


@@ -0,0 +1,633 @@
/**
* Config Manager - Handles configuration file upload, management, and display
* Phase 4: Config Creator
*/
class ConfigManager {
constructor() {
this.apiBase = '/api/configs';
this.currentPreview = null;
this.currentFilename = null;
}
/**
* Load all configurations and populate the table
*/
async loadConfigs() {
try {
const response = await fetch(this.apiBase);
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
const data = await response.json();
this.renderConfigsTable(data.configs || []);
return data.configs;
} catch (error) {
console.error('Error loading configs:', error);
this.showError('Failed to load configurations: ' + error.message);
return [];
}
}
/**
* Get a specific configuration file
*/
async getConfig(filename) {
try {
const response = await fetch(`${this.apiBase}/${encodeURIComponent(filename)}`);
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
return await response.json();
} catch (error) {
console.error('Error getting config:', error);
this.showError('Failed to load configuration: ' + error.message);
throw error;
}
}
/**
* Upload CSV file and convert to YAML
*/
async uploadCSV(file) {
const formData = new FormData();
formData.append('file', file);
try {
const response = await fetch(`${this.apiBase}/upload-csv`, {
method: 'POST',
body: formData
});
const data = await response.json();
if (!response.ok) {
throw new Error(data.message || `HTTP ${response.status}: ${response.statusText}`);
}
return data;
} catch (error) {
console.error('Error uploading CSV:', error);
throw error;
}
}
/**
* Upload YAML file directly
*/
async uploadYAML(file, filename = null) {
const formData = new FormData();
formData.append('file', file);
if (filename) {
formData.append('filename', filename);
}
try {
const response = await fetch(`${this.apiBase}/upload-yaml`, {
method: 'POST',
body: formData
});
const data = await response.json();
if (!response.ok) {
throw new Error(data.message || `HTTP ${response.status}: ${response.statusText}`);
}
return data;
} catch (error) {
console.error('Error uploading YAML:', error);
throw error;
}
}
/**
* Delete a configuration file
*/
async deleteConfig(filename) {
try {
const response = await fetch(`${this.apiBase}/${encodeURIComponent(filename)}`, {
method: 'DELETE'
});
const data = await response.json();
if (!response.ok) {
throw new Error(data.message || `HTTP ${response.status}: ${response.statusText}`);
}
return data;
} catch (error) {
console.error('Error deleting config:', error);
throw error;
}
}
/**
* Download CSV template
*/
downloadTemplate() {
window.location.href = `${this.apiBase}/template`;
}
/**
* Download a specific config file
*/
downloadConfig(filename) {
window.location.href = `${this.apiBase}/${encodeURIComponent(filename)}/download`;
}
/**
* Show YAML preview in the preview pane
*/
showPreview(yamlContent, filename = null) {
this.currentPreview = yamlContent;
this.currentFilename = filename;
const previewElement = document.getElementById('yaml-preview');
const contentElement = document.getElementById('yaml-content');
const placeholderElement = document.getElementById('preview-placeholder');
if (contentElement) {
contentElement.textContent = yamlContent;
}
if (previewElement) {
previewElement.style.display = 'block';
}
if (placeholderElement) {
placeholderElement.style.display = 'none';
}
// Enable save button
const saveBtn = document.getElementById('save-config-btn');
if (saveBtn) {
saveBtn.disabled = false;
}
}
/**
* Hide YAML preview
*/
hidePreview() {
this.currentPreview = null;
this.currentFilename = null;
const previewElement = document.getElementById('yaml-preview');
const placeholderElement = document.getElementById('preview-placeholder');
if (previewElement) {
previewElement.style.display = 'none';
}
if (placeholderElement) {
placeholderElement.style.display = 'block';
}
// Disable save button
const saveBtn = document.getElementById('save-config-btn');
if (saveBtn) {
saveBtn.disabled = true;
}
}
/**
* Render configurations table
*/
renderConfigsTable(configs) {
const tbody = document.querySelector('#configs-table tbody');
if (!tbody) {
console.warn('Configs table body not found');
return;
}
// Clear existing rows
tbody.innerHTML = '';
if (configs.length === 0) {
tbody.innerHTML = `
<tr>
<td colspan="6" class="text-center text-muted py-4">
<i class="bi bi-inbox" style="font-size: 2rem;"></i>
<p class="mt-2">No configuration files found. Create your first config!</p>
</td>
</tr>
`;
return;
}
// Populate table
configs.forEach(config => {
const row = document.createElement('tr');
row.dataset.filename = config.filename;
// Format date
const createdDate = config.created_at ?
new Date(config.created_at).toLocaleDateString('en-US', {
year: 'numeric',
month: 'short',
day: 'numeric'
}) : 'Unknown';
// Format file size
const fileSize = config.size_bytes ?
this.formatFileSize(config.size_bytes) : 'Unknown';
// Schedule usage badge
const scheduleCount = config.used_by_schedules ? config.used_by_schedules.length : 0;
const scheduleBadge = scheduleCount > 0 ?
`<span class="schedule-badge" title="${this.escapeHtml(config.used_by_schedules.join(', '))}">${scheduleCount}</span>` :
'<span class="text-muted">None</span>';
row.innerHTML = `
<td><code>${this.escapeHtml(config.filename)}</code></td>
<td>${this.escapeHtml(config.title || 'Untitled')}</td>
<td>${createdDate}</td>
<td>${fileSize}</td>
<td>${scheduleBadge}</td>
<td class="config-actions">
<button class="btn btn-sm btn-outline-secondary"
onclick="configManager.viewConfig('${this.escapeHtml(config.filename)}')"
title="View config">
<i class="bi bi-eye"></i>
</button>
<button class="btn btn-sm btn-outline-primary"
onclick="configManager.downloadConfig('${this.escapeHtml(config.filename)}')"
title="Download config">
<i class="bi bi-download"></i>
</button>
<button class="btn btn-sm btn-outline-danger"
onclick="configManager.confirmDelete('${this.escapeHtml(config.filename)}', ${scheduleCount})"
title="Delete config"
${scheduleCount > 0 ? 'disabled' : ''}>
<i class="bi bi-trash"></i>
</button>
</td>
`;
tbody.appendChild(row);
});
// Update result count
this.updateResultCount(configs.length);
}
/**
* View/preview a configuration file
*/
async viewConfig(filename) {
try {
const config = await this.getConfig(filename);
// Show modal with config content
const modalHtml = `
<div class="modal fade" id="viewConfigModal" tabindex="-1">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title">${this.escapeHtml(filename)}</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal"></button>
</div>
<div class="modal-body">
<pre><code class="language-yaml">${this.escapeHtml(config.content)}</code></pre>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Close</button>
<button type="button" class="btn btn-primary"
onclick="configManager.downloadConfig('${this.escapeHtml(filename)}')">
<i class="bi bi-download"></i> Download
</button>
</div>
</div>
</div>
</div>
`;
// Remove existing modal if any
const existingModal = document.getElementById('viewConfigModal');
if (existingModal) {
existingModal.remove();
}
// Add modal to page
document.body.insertAdjacentHTML('beforeend', modalHtml);
// Show modal
const modal = new bootstrap.Modal(document.getElementById('viewConfigModal'));
modal.show();
// Clean up on close
document.getElementById('viewConfigModal').addEventListener('hidden.bs.modal', function() {
this.remove();
});
} catch (error) {
this.showError('Failed to view configuration: ' + error.message);
}
}
/**
* Confirm deletion of a configuration
*/
confirmDelete(filename, scheduleCount) {
if (scheduleCount > 0) {
this.showError(`Cannot delete "${filename}" - it is used by ${scheduleCount} schedule(s)`);
return;
}
if (confirm(`Are you sure you want to delete "${filename}"?\n\nThis action cannot be undone.`)) {
this.performDelete(filename);
}
}
/**
* Perform the actual deletion
*/
async performDelete(filename) {
try {
await this.deleteConfig(filename);
this.showSuccess(`Configuration "${filename}" deleted successfully`);
// Reload configs table
await this.loadConfigs();
} catch (error) {
this.showError('Failed to delete configuration: ' + error.message);
}
}
/**
* Filter configs table by search term
*/
filterConfigs(searchTerm) {
const term = searchTerm.toLowerCase().trim();
const rows = document.querySelectorAll('#configs-table tbody tr');
let visibleCount = 0;
rows.forEach(row => {
// Skip empty state row
if (row.querySelector('td[colspan]')) {
return;
}
const filename = row.cells[0]?.textContent.toLowerCase() || '';
const title = row.cells[1]?.textContent.toLowerCase() || '';
const matches = filename.includes(term) || title.includes(term);
row.style.display = matches ? '' : 'none';
if (matches) visibleCount++;
});
this.updateResultCount(visibleCount);
}
/**
* Update result count display
*/
updateResultCount(count) {
const countElement = document.getElementById('result-count');
if (countElement) {
countElement.textContent = `${count} config${count !== 1 ? 's' : ''}`;
}
}
/**
* Show error message
*/
showError(message, elementId = 'error-display') {
const errorElement = document.getElementById(elementId);
if (errorElement) {
errorElement.innerHTML = `
<div class="alert alert-danger alert-dismissible fade show" role="alert">
<i class="bi bi-exclamation-triangle"></i> ${this.escapeHtml(message)}
<button type="button" class="btn-close" data-bs-dismiss="alert"></button>
</div>
`;
errorElement.scrollIntoView({ behavior: 'smooth', block: 'nearest' });
} else {
console.error('Error:', message);
alert('Error: ' + message);
}
}
/**
* Show success message
*/
showSuccess(message, elementId = 'success-display') {
const successElement = document.getElementById(elementId);
if (successElement) {
successElement.innerHTML = `
<div class="alert alert-success alert-dismissible fade show" role="alert">
<i class="bi bi-check-circle"></i> ${this.escapeHtml(message)}
<button type="button" class="btn-close" data-bs-dismiss="alert"></button>
</div>
`;
successElement.scrollIntoView({ behavior: 'smooth', block: 'nearest' });
} else {
console.log('Success:', message);
}
}
/**
* Clear all messages
*/
clearMessages() {
const elements = ['error-display', 'success-display', 'csv-errors', 'yaml-errors'];
elements.forEach(id => {
const element = document.getElementById(id);
if (element) {
element.innerHTML = '';
}
});
}
/**
* Format file size for display
*/
formatFileSize(bytes) {
if (bytes === 0) return '0 Bytes';
const k = 1024;
const sizes = ['Bytes', 'KB', 'MB'];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return Math.round(bytes / Math.pow(k, i) * 100) / 100 + ' ' + sizes[i];
}
/**
* Escape HTML to prevent XSS
*/
escapeHtml(text) {
const div = document.createElement('div');
div.textContent = text;
return div.innerHTML;
}
}
// Initialize global config manager instance
const configManager = new ConfigManager();
/**
* Setup drag-and-drop zone for file uploads
*/
function setupDropzone(dropzoneId, fileInputId, fileType, onUploadCallback) {
const dropzone = document.getElementById(dropzoneId);
const fileInput = document.getElementById(fileInputId);
if (!dropzone || !fileInput) {
console.warn(`Dropzone setup failed: missing elements (${dropzoneId}, ${fileInputId})`);
return;
}
// Click to browse
dropzone.addEventListener('click', () => {
fileInput.click();
});
// Drag over
dropzone.addEventListener('dragover', (e) => {
e.preventDefault();
e.stopPropagation();
dropzone.classList.add('dragover');
});
// Drag leave
dropzone.addEventListener('dragleave', (e) => {
e.preventDefault();
e.stopPropagation();
dropzone.classList.remove('dragover');
});
// Drop
dropzone.addEventListener('drop', (e) => {
e.preventDefault();
e.stopPropagation();
dropzone.classList.remove('dragover');
const files = e.dataTransfer.files;
if (files.length > 0) {
handleFileUpload(files[0], fileType, onUploadCallback);
}
});
// File input change
fileInput.addEventListener('change', (e) => {
const files = e.target.files;
if (files.length > 0) {
handleFileUpload(files[0], fileType, onUploadCallback);
}
});
}
/**
* Handle file upload (CSV or YAML)
*/
async function handleFileUpload(file, fileType, callback) {
configManager.clearMessages();
// Validate file type
const extension = file.name.split('.').pop().toLowerCase();
if (fileType === 'csv' && extension !== 'csv') {
configManager.showError('Please upload a CSV file (.csv)', 'csv-errors');
return;
}
if (fileType === 'yaml' && !['yaml', 'yml'].includes(extension)) {
configManager.showError('Please upload a YAML file (.yaml or .yml)', 'yaml-errors');
return;
}
// Validate file size (2MB limit for configs)
const maxSize = 2 * 1024 * 1024; // 2MB
if (file.size > maxSize) {
const errorId = fileType === 'csv' ? 'csv-errors' : 'yaml-errors';
configManager.showError(`File too large (${configManager.formatFileSize(file.size)}). Maximum size is 2MB.`, errorId);
return;
}
// Call the provided callback
if (callback) {
try {
await callback(file);
} catch (error) {
const errorId = fileType === 'csv' ? 'csv-errors' : 'yaml-errors';
configManager.showError(error.message, errorId);
}
}
}
/**
* Handle CSV upload and preview
*/
async function handleCSVUpload(file) {
try {
// Show loading state
const previewPlaceholder = document.getElementById('preview-placeholder');
if (previewPlaceholder) {
previewPlaceholder.innerHTML = '<div class="spinner-border" role="status"><span class="visually-hidden">Loading...</span></div>';
}
// Upload CSV
const result = await configManager.uploadCSV(file);
// Show preview
configManager.showPreview(result.preview, result.filename);
// Show success message
configManager.showSuccess(`CSV uploaded successfully! Preview the generated YAML below.`, 'csv-errors');
} catch (error) {
configManager.hidePreview();
throw error;
}
}
/**
* Handle YAML upload
*/
async function handleYAMLUpload(file) {
// Upload YAML; any error propagates to handleFileUpload's error display
const result = await configManager.uploadYAML(file);
// Show success and redirect
configManager.showSuccess(`Configuration "${result.filename}" uploaded successfully!`, 'yaml-errors');
// Redirect to configs list after 2 seconds
setTimeout(() => {
window.location.href = '/configs';
}, 2000);
}
/**
* Save the previewed configuration (after CSV upload)
*/
async function savePreviewedConfig() {
if (!configManager.currentPreview || !configManager.currentFilename) {
configManager.showError('No configuration to save', 'csv-errors');
return;
}
try {
// The config is already saved during CSV upload, just redirect
configManager.showSuccess(`Configuration "${configManager.currentFilename}" saved successfully!`, 'csv-errors');
// Redirect to configs list after 2 seconds
setTimeout(() => {
window.location.href = '/configs';
}, 2000);
} catch (error) {
configManager.showError('Failed to save configuration: ' + error.message, 'csv-errors');
}
}
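The same upload path that uploadCSV() exercises from the browser can be driven from a script. A hedged sketch with Python's requests; the base URL is an assumption, and the filename/preview response keys follow handleCSVUpload() above:

import requests

BASE = "http://localhost:5000"  # assumed deployment URL

with open("sites.csv", "rb") as fh:
    resp = requests.post(
        f"{BASE}/api/configs/upload-csv",
        files={"file": ("sites.csv", fh, "text/csv")},
    )
resp.raise_for_status()
data = resp.json()
print(data["filename"])       # server-assigned config filename
print(data["preview"][:200])  # start of the generated YAML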


@@ -0,0 +1,505 @@
{% extends "base.html" %}
{% block title %}Alert Rules - SneakyScanner{% endblock %}
{% block content %}
<div class="row mt-4">
<div class="col-12 d-flex justify-content-between align-items-center mb-4">
<h1>Alert Rules</h1>
<div>
<a href="{{ url_for('main.alerts') }}" class="btn btn-outline-primary me-2">
<i class="bi bi-bell"></i> View Alerts
</a>
<button class="btn btn-primary" onclick="showCreateRuleModal()">
<i class="bi bi-plus-circle"></i> Create Rule
</button>
</div>
</div>
</div>
<!-- Rule Statistics -->
<div class="row mb-4">
<div class="col-md-6 mb-3">
<div class="card">
<div class="card-body">
<h6 class="text-muted mb-2">Total Rules</h6>
<h3 class="mb-0">{{ rules | length }}</h3>
</div>
</div>
</div>
<div class="col-md-6 mb-3">
<div class="card">
<div class="card-body">
<h6 class="text-muted mb-2">Active Rules</h6>
<h3 class="mb-0 text-success">{{ rules | selectattr('enabled') | list | length }}</h3>
</div>
</div>
</div>
</div>
<!-- Rules Table -->
<div class="row">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">Alert Rules Configuration</h5>
</div>
<div class="card-body">
{% if rules %}
<div class="table-responsive">
<table class="table table-hover">
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Severity</th>
<th>Threshold</th>
<th>Config</th>
<th>Notifications</th>
<th>Status</th>
<th>Actions</th>
</tr>
</thead>
<tbody>
{% for rule in rules %}
<tr>
<td>
<strong>{{ rule.name or 'Unnamed Rule' }}</strong>
<br>
<small class="text-muted">ID: {{ rule.id }}</small>
</td>
<td>
<span class="badge bg-secondary">
{{ rule.rule_type.replace('_', ' ').title() }}
</span>
</td>
<td>
{% if rule.severity == 'critical' %}
<span class="badge bg-danger">Critical</span>
{% elif rule.severity == 'warning' %}
<span class="badge bg-warning">Warning</span>
{% else %}
<span class="badge bg-info">{{ rule.severity or 'Info' }}</span>
{% endif %}
</td>
<td>
{% if rule.threshold %}
{% if rule.rule_type == 'cert_expiry' %}
{{ rule.threshold }} days
{% elif rule.rule_type == 'drift_detection' %}
{{ rule.threshold }}%
{% else %}
{{ rule.threshold }}
{% endif %}
{% else %}
<span class="text-muted">-</span>
{% endif %}
</td>
<td>
{% if rule.config %}
<small class="text-muted">{{ rule.config.title }}</small>
{% else %}
<span class="badge bg-primary">All Configs</span>
{% endif %}
</td>
<td>
{% if rule.email_enabled %}
<i class="bi bi-envelope-fill text-primary" title="Email enabled"></i>
{% endif %}
{% if rule.webhook_enabled %}
<i class="bi bi-send-fill text-primary" title="Webhook enabled"></i>
{% endif %}
{% if not rule.email_enabled and not rule.webhook_enabled %}
<span class="text-muted">None</span>
{% endif %}
</td>
<td>
<div class="form-check form-switch">
<input class="form-check-input" type="checkbox"
id="rule-enabled-{{ rule.id }}"
{% if rule.enabled %}checked{% endif %}
onchange="toggleRule({{ rule.id }}, this.checked)">
<label class="form-check-label" for="rule-enabled-{{ rule.id }}">
{% if rule.enabled %}
<span class="text-success ms-2">Active</span>
{% else %}
<span class="text-muted ms-2">Inactive</span>
{% endif %}
</label>
</div>
</td>
<td>
<button class="btn btn-sm btn-outline-primary" onclick="editRule({{ rule.id }})">
<i class="bi bi-pencil"></i>
</button>
<button class="btn btn-sm btn-outline-danger" onclick='deleteRule({{ rule.id }}, {{ rule.name|tojson }})'>
<i class="bi bi-trash"></i>
</button>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
{% else %}
<div class="text-center py-5 text-muted">
<i class="bi bi-bell-slash" style="font-size: 3rem;"></i>
<h5 class="mt-3">No alert rules configured</h5>
<p>Create alert rules to be notified of important scan findings.</p>
<button class="btn btn-primary mt-3" onclick="showCreateRuleModal()">
<i class="bi bi-plus-circle"></i> Create Your First Rule
</button>
</div>
{% endif %}
</div>
</div>
</div>
</div>
<!-- Create/Edit Rule Modal -->
<div class="modal fade" id="ruleModal" tabindex="-1">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title" id="ruleModalTitle">Create Alert Rule</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal"></button>
</div>
<div class="modal-body">
<form id="ruleForm">
<input type="hidden" id="rule-id">
<div class="row">
<div class="col-md-6 mb-3">
<label for="rule-name" class="form-label">Rule Name</label>
<input type="text" class="form-control" id="rule-name" required>
</div>
<div class="col-md-6 mb-3">
<label for="rule-type" class="form-label">Rule Type</label>
<select class="form-select" id="rule-type" required onchange="updateThresholdLabel()">
<option value="">Select a type...</option>
<option value="unexpected_port">Unexpected Port Detection</option>
<option value="drift_detection">Drift Detection</option>
<option value="cert_expiry">Certificate Expiry</option>
<option value="weak_tls">Weak TLS Version</option>
<option value="ping_failed">Ping Failed</option>
</select>
</div>
</div>
<div class="row">
<div class="col-md-6 mb-3">
<label for="rule-severity" class="form-label">Severity</label>
<select class="form-select" id="rule-severity" required>
<option value="info">Info</option>
<option value="warning" selected>Warning</option>
<option value="critical">Critical</option>
</select>
</div>
<div class="col-md-6 mb-3">
<label for="rule-threshold" class="form-label" id="threshold-label">Threshold</label>
<input type="number" class="form-control" id="rule-threshold">
<small class="form-text text-muted" id="threshold-help">
Numeric value that triggers the alert (varies by rule type)
</small>
</div>
</div>
<div class="row">
<div class="col-md-12 mb-3">
<label for="rule-config" class="form-label">Apply to Config (optional)</label>
<select class="form-select" id="rule-config">
<option value="">All Configs (Apply to all scans)</option>
</select>
<small class="form-text text-muted" id="config-help-text">
Select a specific config to limit this rule, or leave as "All Configs" to apply to all scans
</small>
</div>
</div>
<div class="row">
<div class="col-md-6">
<div class="form-check">
<input class="form-check-input" type="checkbox" id="rule-email">
<label class="form-check-label" for="rule-email">
Send Email Notifications
</label>
</div>
</div>
<div class="col-md-6">
<div class="form-check">
<input class="form-check-input" type="checkbox" id="rule-webhook">
<label class="form-check-label" for="rule-webhook">
Send Webhook Notifications
</label>
</div>
</div>
</div>
<div class="row mt-3">
<div class="col-12">
<div class="form-check">
<input class="form-check-input" type="checkbox" id="rule-enabled" checked>
<label class="form-check-label" for="rule-enabled">
Enable this rule immediately
</label>
</div>
</div>
</div>
</form>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Cancel</button>
<button type="button" class="btn btn-primary" onclick="saveRule()">
<span id="save-rule-text">Create Rule</span>
<span id="save-rule-spinner" class="spinner-border spinner-border-sm ms-2" style="display: none;"></span>
</button>
</div>
</div>
</div>
</div>
<script>
let editingRuleId = null;
// Load available configs for the dropdown
async function loadConfigsForRule() {
const selectEl = document.getElementById('rule-config');
try {
const response = await fetch('/api/configs');
if (!response.ok) {
throw new Error('Failed to load configurations');
}
const data = await response.json();
const configs = data.configs || [];
// Preserve the "All Configs" option and current selection
const currentValue = selectEl.value;
selectEl.innerHTML = '<option value="">All Configs (Apply to all scans)</option>';
configs.forEach(config => {
const option = document.createElement('option');
// Store the config ID as the value
option.value = config.id;
const siteText = config.site_count === 1 ? 'site' : 'sites';
option.textContent = `${config.title} (${config.site_count} ${siteText})`;
selectEl.appendChild(option);
});
// Restore selection if it was set
if (currentValue) {
selectEl.value = currentValue;
}
} catch (error) {
console.error('Error loading configs:', error);
}
}
function showCreateRuleModal() {
editingRuleId = null;
document.getElementById('ruleModalTitle').textContent = 'Create Alert Rule';
document.getElementById('save-rule-text').textContent = 'Create Rule';
document.getElementById('ruleForm').reset();
document.getElementById('rule-enabled').checked = true;
// Load configs when modal is shown
loadConfigsForRule();
new bootstrap.Modal(document.getElementById('ruleModal')).show();
}
function editRule(ruleId) {
editingRuleId = ruleId;
document.getElementById('ruleModalTitle').textContent = 'Edit Alert Rule';
document.getElementById('save-rule-text').textContent = 'Update Rule';
// Load configs first, then fetch rule details
loadConfigsForRule().then(() => {
// Fetch rule details
fetch(`/api/alerts/rules`, {
headers: {
'X-API-Key': localStorage.getItem('api_key') || ''
}
})
.then(response => response.json())
.then(data => {
const rule = data.rules.find(r => r.id === ruleId);
if (rule) {
document.getElementById('rule-id').value = rule.id;
document.getElementById('rule-name').value = rule.name || '';
document.getElementById('rule-type').value = rule.rule_type;
document.getElementById('rule-severity').value = rule.severity || 'warning';
document.getElementById('rule-threshold').value = rule.threshold || '';
document.getElementById('rule-config').value = rule.config_id || '';
document.getElementById('rule-email').checked = rule.email_enabled;
document.getElementById('rule-webhook').checked = rule.webhook_enabled;
document.getElementById('rule-enabled').checked = rule.enabled;
updateThresholdLabel();
new bootstrap.Modal(document.getElementById('ruleModal')).show();
}
})
.catch(error => {
console.error('Error fetching rule:', error);
alert('Failed to load rule details');
});
});
}
function updateThresholdLabel() {
const ruleType = document.getElementById('rule-type').value;
const label = document.getElementById('threshold-label');
const help = document.getElementById('threshold-help');
switch(ruleType) {
case 'cert_expiry':
label.textContent = 'Days Before Expiry';
help.textContent = 'Alert when certificate expires within this many days (default: 30)';
break;
case 'drift_detection':
label.textContent = 'Drift Percentage';
help.textContent = 'Alert when drift exceeds this percentage (0-100, default: 5)';
break;
case 'unexpected_port':
label.textContent = 'Threshold (optional)';
help.textContent = 'Leave blank - this rule alerts on any port not in your config file';
break;
case 'weak_tls':
label.textContent = 'Threshold (optional)';
help.textContent = 'Leave blank - this rule alerts on TLS versions below 1.2';
break;
case 'ping_failed':
label.textContent = 'Threshold (optional)';
help.textContent = 'Leave blank - this rule alerts when a host fails to respond to ping';
break;
default:
label.textContent = 'Threshold';
help.textContent = 'Numeric value that triggers the alert (select a rule type for specific guidance)';
}
}
function saveRule() {
const name = document.getElementById('rule-name').value;
const ruleType = document.getElementById('rule-type').value;
const severity = document.getElementById('rule-severity').value;
const threshold = document.getElementById('rule-threshold').value;
const configId = document.getElementById('rule-config').value;
const emailEnabled = document.getElementById('rule-email').checked;
const webhookEnabled = document.getElementById('rule-webhook').checked;
const enabled = document.getElementById('rule-enabled').checked;
if (!name || !ruleType) {
alert('Please fill in required fields');
return;
}
const data = {
name: name,
rule_type: ruleType,
severity: severity,
threshold: threshold ? parseInt(threshold) : null,
config_id: configId ? parseInt(configId) : null,
email_enabled: emailEnabled,
webhook_enabled: webhookEnabled,
enabled: enabled
};
// Show spinner
document.getElementById('save-rule-text').style.display = 'none';
document.getElementById('save-rule-spinner').style.display = 'inline-block';
const url = editingRuleId
? `/api/alerts/rules/${editingRuleId}`
: '/api/alerts/rules';
const method = editingRuleId ? 'PUT' : 'POST';
fetch(url, {
method: method,
headers: {
'Content-Type': 'application/json',
'X-API-Key': localStorage.getItem('api_key') || ''
},
body: JSON.stringify(data)
})
.then(response => response.json())
.then(data => {
if (data.status === 'success') {
location.reload();
} else {
alert('Failed to save rule: ' + (data.message || 'Unknown error'));
// Hide spinner
document.getElementById('save-rule-text').style.display = 'inline';
document.getElementById('save-rule-spinner').style.display = 'none';
}
})
.catch(error => {
console.error('Error:', error);
alert('Failed to save rule');
// Hide spinner
document.getElementById('save-rule-text').style.display = 'inline';
document.getElementById('save-rule-spinner').style.display = 'none';
});
}
function toggleRule(ruleId, enabled) {
fetch(`/api/alerts/rules/${ruleId}`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
'X-API-Key': localStorage.getItem('api_key') || ''
},
body: JSON.stringify({ enabled: enabled })
})
.then(response => response.json())
.then(data => {
if (data.status !== 'success') {
alert('Failed to update rule status');
// Revert checkbox
document.getElementById(`rule-enabled-${ruleId}`).checked = !enabled;
} else {
// Update label
const label = document.querySelector(`label[for="rule-enabled-${ruleId}"] span`);
if (enabled) {
label.className = 'text-success';
label.textContent = 'Active';
} else {
label.className = 'text-muted';
label.textContent = 'Inactive';
}
}
})
.catch(error => {
console.error('Error:', error);
alert('Failed to update rule status');
// Revert checkbox
document.getElementById(`rule-enabled-${ruleId}`).checked = !enabled;
});
}
function deleteRule(ruleId, ruleName) {
if (!confirm(`Delete alert rule "${ruleName}"? This cannot be undone.`)) {
return;
}
fetch(`/api/alerts/rules/${ruleId}`, {
method: 'DELETE',
headers: {
'X-API-Key': localStorage.getItem('api_key') || ''
}
})
.then(response => response.json())
.then(data => {
if (data.status === 'success') {
location.reload();
} else {
alert('Failed to delete rule: ' + (data.message || 'Unknown error'));
}
})
.catch(error => {
console.error('Error:', error);
alert('Failed to delete rule');
});
}
</script>
{% endblock %}
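The template above drives POST /api/alerts/rules (create) and PUT /api/alerts/rules/<id> (update) with an X-API-Key header. A minimal browser-console sketch of the same create call, assuming only the payload shape and the {status, message} response that saveRule() relies on; the field values are illustrative:

// Sketch: create a cert-expiry rule via the same endpoint saveRule() uses.
(async () => {
  const payload = {
    name: 'Prod cert expiry',        // illustrative
    rule_type: 'cert_expiry',        // one of the types handled by updateThresholdLabel()
    severity: 'warning',
    threshold: 30,                   // days before expiry for cert_expiry rules
    config_id: null,                 // or a specific scan config id
    email_enabled: true,
    webhook_enabled: false,
    enabled: true
  };
  const res = await fetch('/api/alerts/rules', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': localStorage.getItem('api_key') || ''
    },
    body: JSON.stringify(payload)
  });
  const data = await res.json();
  if (data.status !== 'success') throw new Error(data.message || 'Unknown error');
})();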

View File

@@ -0,0 +1,303 @@
{% extends "base.html" %}
{% block title %}Alerts - SneakyScanner{% endblock %}
{% block content %}
<div class="row mt-4">
<div class="col-12 d-flex justify-content-between align-items-center mb-4">
<h1>Alert History</h1>
<div>
<button class="btn btn-success me-2" onclick="acknowledgeAllAlerts()">
<i class="bi bi-check-all"></i> Ack All
</button>
<a href="{{ url_for('main.alert_rules') }}" class="btn btn-primary">
<i class="bi bi-gear"></i> Manage Alert Rules
</a>
</div>
</div>
</div>
<!-- Alert Statistics -->
<div class="row mb-4">
<div class="col-md-3 mb-3">
<div class="card">
<div class="card-body">
<h6 class="text-muted mb-2">Total Alerts</h6>
<h3 class="mb-0">{{ pagination.total }}</h3>
</div>
</div>
</div>
<div class="col-md-3 mb-3">
<div class="card">
<div class="card-body">
<h6 class="text-muted mb-2">Critical</h6>
<h3 class="mb-0 text-danger">
{{ alerts | selectattr('severity', 'equalto', 'critical') | list | length }}
</h3>
</div>
</div>
</div>
<div class="col-md-3 mb-3">
<div class="card">
<div class="card-body">
<h6 class="text-muted mb-2">Warnings</h6>
<h3 class="mb-0 text-warning">
{{ alerts | selectattr('severity', 'equalto', 'warning') | list | length }}
</h3>
</div>
</div>
</div>
<div class="col-md-3 mb-3">
<div class="card">
<div class="card-body">
<h6 class="text-muted mb-2">Unacknowledged</h6>
<h3 class="mb-0 text-warning">
{{ alerts | rejectattr('acknowledged') | list | length }}
</h3>
</div>
</div>
</div>
</div>
<!-- Filters -->
<div class="row mb-4">
<div class="col-12">
<div class="card">
<div class="card-body">
<form method="get" action="{{ url_for('main.alerts') }}" class="row g-3">
<div class="col-md-3">
<label for="severity-filter" class="form-label">Severity</label>
<select class="form-select" id="severity-filter" name="severity">
<option value="">All Severities</option>
<option value="critical" {% if current_severity == 'critical' %}selected{% endif %}>Critical</option>
<option value="warning" {% if current_severity == 'warning' %}selected{% endif %}>Warning</option>
<option value="info" {% if current_severity == 'info' %}selected{% endif %}>Info</option>
</select>
</div>
<div class="col-md-3">
<label for="type-filter" class="form-label">Alert Type</label>
<select class="form-select" id="type-filter" name="alert_type">
<option value="">All Types</option>
{% for at in alert_types %}
<option value="{{ at }}" {% if current_alert_type == at %}selected{% endif %}>
{{ at.replace('_', ' ').title() }}
</option>
{% endfor %}
</select>
</div>
<div class="col-md-3">
<label for="ack-filter" class="form-label">Acknowledgment</label>
<select class="form-select" id="ack-filter" name="acknowledged">
<option value="">All</option>
<option value="false" {% if current_acknowledged == 'false' %}selected{% endif %}>Unacknowledged</option>
<option value="true" {% if current_acknowledged == 'true' %}selected{% endif %}>Acknowledged</option>
</select>
</div>
<div class="col-md-3 d-flex align-items-end">
<button type="submit" class="btn btn-primary w-100">
<i class="bi bi-funnel"></i> Apply Filters
</button>
</div>
</form>
</div>
</div>
</div>
</div>
<!-- Alerts Table -->
<div class="row">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">Alerts</h5>
</div>
<div class="card-body">
{% if alerts %}
<div class="table-responsive">
<table class="table table-hover">
<thead>
<tr>
<th style="width: 100px;">Severity</th>
<th>Type</th>
<th>Message</th>
<th style="width: 120px;">Target</th>
<th style="width: 150px;">Scan</th>
<th style="width: 150px;">Created</th>
<th style="width: 100px;">Status</th>
<th style="width: 100px;">Actions</th>
</tr>
</thead>
<tbody>
{% for alert in alerts %}
<tr>
<td>
{% if alert.severity == 'critical' %}
<span class="badge bg-danger">Critical</span>
{% elif alert.severity == 'warning' %}
<span class="badge bg-warning">Warning</span>
{% else %}
<span class="badge bg-info">Info</span>
{% endif %}
</td>
<td>
<span class="text-muted">{{ alert.alert_type.replace('_', ' ').title() }}</span>
</td>
<td>
{{ alert.message }}
</td>
<td>
{% if alert.ip_address %}
<small class="text-muted">
{{ alert.ip_address }}{% if alert.port %}:{{ alert.port }}{% endif %}
</small>
{% else %}
<small class="text-muted">-</small>
{% endif %}
</td>
<td>
<a href="{{ url_for('main.scan_detail', scan_id=alert.scan_id) }}" class="text-decoration-none">
Scan #{{ alert.scan_id }}
</a>
</td>
<td>
<small class="text-muted">{{ alert.created_at.strftime('%Y-%m-%d %H:%M') }}</small>
</td>
<td>
{% if alert.acknowledged %}
<span class="badge bg-success">
<i class="bi bi-check-circle"></i> Ack'd
</span>
{% else %}
<span class="badge bg-secondary">New</span>
{% endif %}
{% if alert.email_sent %}
<i class="bi bi-envelope-fill text-muted" title="Email sent"></i>
{% endif %}
{% if alert.webhook_sent %}
<i class="bi bi-send-fill text-muted" title="Webhook sent"></i>
{% endif %}
</td>
<td>
{% if not alert.acknowledged %}
<button class="btn btn-sm btn-outline-success" onclick="acknowledgeAlert({{ alert.id }})">
<i class="bi bi-check"></i> Ack
</button>
{% else %}
<small class="text-muted" title="Acknowledged by {{ alert.acknowledged_by }} at {{ alert.acknowledged_at.strftime('%Y-%m-%d %H:%M') }}">
By: {{ alert.acknowledged_by }}
</small>
{% endif %}
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
<!-- Pagination -->
{% if pagination.pages > 1 %}
<nav aria-label="Alert pagination" class="mt-4">
<ul class="pagination justify-content-center">
<li class="page-item {% if not pagination.has_prev %}disabled{% endif %}">
<a class="page-link" href="{{ url_for('main.alerts', page=pagination.prev_num, severity=current_severity, alert_type=current_alert_type, acknowledged=current_acknowledged) }}">
Previous
</a>
</li>
{% for page_num in pagination.iter_pages(left_edge=1, left_current=1, right_current=2, right_edge=1) %}
{% if page_num %}
<li class="page-item {% if page_num == pagination.page %}active{% endif %}">
<a class="page-link" href="{{ url_for('main.alerts', page=page_num, severity=current_severity, alert_type=current_alert_type, acknowledged=current_acknowledged) }}">
{{ page_num }}
</a>
</li>
{% else %}
<li class="page-item disabled">
<span class="page-link">...</span>
</li>
{% endif %}
{% endfor %}
<li class="page-item {% if not pagination.has_next %}disabled{% endif %}">
<a class="page-link" href="{{ url_for('main.alerts', page=pagination.next_num, severity=current_severity, alert_type=current_alert_type, acknowledged=current_acknowledged) }}">
Next
</a>
</li>
</ul>
</nav>
{% endif %}
{% else %}
<div class="text-center py-5 text-muted">
<i class="bi bi-bell-slash" style="font-size: 3rem;"></i>
<h5 class="mt-3">No alerts found</h5>
<p>Alerts will appear here when scan results trigger alert rules.</p>
<a href="{{ url_for('main.alert_rules') }}" class="btn btn-primary mt-3">
<i class="bi bi-plus-circle"></i> Configure Alert Rules
</a>
</div>
{% endif %}
</div>
</div>
</div>
</div>
<script>
function acknowledgeAlert(alertId) {
if (!confirm('Acknowledge this alert?')) {
return;
}
fetch(`/api/alerts/${alertId}/acknowledge`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': localStorage.getItem('api_key') || ''
},
body: JSON.stringify({
acknowledged_by: 'web_user'
})
})
.then(response => response.json())
.then(data => {
if (data.status === 'success') {
location.reload();
} else {
alert('Failed to acknowledge alert: ' + (data.message || 'Unknown error'));
}
})
.catch(error => {
console.error('Error:', error);
alert('Failed to acknowledge alert');
});
}
function acknowledgeAllAlerts() {
if (!confirm('Acknowledge all unacknowledged alerts?')) {
return;
}
fetch('/api/alerts/acknowledge-all', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': localStorage.getItem('api_key') || ''
},
body: JSON.stringify({
acknowledged_by: 'web_user'
})
})
.then(response => response.json())
.then(data => {
if (data.status === 'success') {
location.reload();
} else {
alert('Failed to acknowledge alerts: ' + (data.message || 'Unknown error'));
}
})
.catch(error => {
console.error('Error:', error);
alert('Failed to acknowledge alerts');
});
}
</script>
{% endblock %}
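For scripted acknowledgment, the two endpoints wired to the buttons above take the same JSON body; a short sketch (alert id 42 is illustrative):

// Sketch: ack one alert, then sweep all remaining unacknowledged alerts.
(async () => {
  const headers = {
    'Content-Type': 'application/json',
    'X-API-Key': localStorage.getItem('api_key') || ''
  };
  const body = JSON.stringify({ acknowledged_by: 'web_user' });
  const one = await (await fetch('/api/alerts/42/acknowledge', { method: 'POST', headers, body })).json();
  if (one.status !== 'success') throw new Error(one.message || 'Unknown error');
  await fetch('/api/alerts/acknowledge-all', { method: 'POST', headers, body });
})();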

View File

@@ -3,7 +3,7 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% block title %}SneakyScanner{% endblock %}</title>
<title>{% block title %}{{ app_name }}{% endblock %}</title>
<!-- Bootstrap 5 CSS -->
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet">
@@ -34,7 +34,7 @@
<nav class="navbar navbar-expand-lg navbar-custom">
<div class="container-fluid">
<a class="navbar-brand" href="{{ url_for('main.dashboard') }}">
SneakyScanner
{{ app_name }}
</a>
<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarNav">
<span class="navbar-toggler-icon"></span>
@@ -45,6 +45,16 @@
<a class="nav-link {% if request.endpoint == 'main.dashboard' %}active{% endif %}"
href="{{ url_for('main.dashboard') }}">Dashboard</a>
</li>
<li class="nav-item dropdown">
<a class="nav-link dropdown-toggle {% if request.endpoint and ('config' in request.endpoint or request.endpoint == 'main.sites') %}active{% endif %}"
href="#" id="configsDropdown" role="button" data-bs-toggle="dropdown">
Configs
</a>
<ul class="dropdown-menu" aria-labelledby="configsDropdown">
<li><a class="dropdown-item" href="{{ url_for('main.configs') }}">Scan Configs</a></li>
<li><a class="dropdown-item" href="{{ url_for('main.sites') }}">Sites</a></li>
</ul>
</li>
<li class="nav-item">
<a class="nav-link {% if request.endpoint == 'main.scans' %}active{% endif %}"
href="{{ url_for('main.scans') }}">Scans</a>
@@ -53,8 +63,33 @@
<a class="nav-link {% if request.endpoint and 'schedule' in request.endpoint %}active{% endif %}"
href="{{ url_for('main.schedules') }}">Schedules</a>
</li>
<li class="nav-item dropdown">
<a class="nav-link dropdown-toggle {% if request.endpoint and ('alert' in request.endpoint or 'webhook' in request.endpoint) %}active{% endif %}"
href="#" id="alertsDropdown" role="button" data-bs-toggle="dropdown">
Alerts
</a>
<ul class="dropdown-menu" aria-labelledby="alertsDropdown">
<li><a class="dropdown-item" href="{{ url_for('main.alerts') }}">Alert History</a></li>
<li><a class="dropdown-item" href="{{ url_for('main.alert_rules') }}">Alert Rules</a></li>
<li><hr class="dropdown-divider"></li>
<li><a class="dropdown-item" href="{{ url_for('webhooks.list_webhooks') }}">Webhooks</a></li>
</ul>
</li>
</ul>
<form class="d-flex me-3" action="{{ url_for('main.search_ip') }}" method="GET">
<input class="form-control form-control-sm me-2" type="search" name="ip"
placeholder="Search IP..." aria-label="Search IP" style="width: 150px;">
<button class="btn btn-outline-primary btn-sm" type="submit">
<i class="bi bi-search"></i>
</button>
</form>
<ul class="navbar-nav">
<li class="nav-item">
<a class="nav-link {% if request.endpoint == 'main.help' %}active{% endif %}"
href="{{ url_for('main.help') }}">
<i class="bi bi-question-circle"></i> Help
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="{{ url_for('auth.logout') }}">Logout</a>
</li>
@@ -81,10 +116,13 @@
<div class="footer">
<div class="container-fluid">
SneakyScanner v1.0 - Phase 3 In Progress
<a href="{{ repo_url }}" target="_blank">{{ app_name }}</a> - v{{ app_version }}
</div>
</div>
<!-- Global notification container - always above modals -->
<div id="notification-container" style="position: fixed; top: 20px; right: 20px; z-index: 9999; min-width: 300px;"></div>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js"></script>
{% block scripts %}{% endblock %}
</body>

View File

@@ -0,0 +1,580 @@
{% extends "base.html" %}
{% block title %}Scan Configurations - SneakyScanner{% endblock %}
{% block content %}
<div class="row mt-4">
<div class="col-12">
<div class="d-flex justify-content-between align-items-center mb-4">
<h1>Scan Configurations</h1>
<div>
<button class="btn btn-primary" data-bs-toggle="modal" data-bs-target="#createConfigModal">
<i class="bi bi-plus-circle"></i> Create New Config
</button>
</div>
</div>
</div>
</div>
<!-- Summary Stats -->
<div class="row mb-4">
<div class="col-md-4">
<div class="stat-card">
<div class="stat-value" id="total-configs">-</div>
<div class="stat-label">Total Configs</div>
</div>
</div>
<div class="col-md-4">
<div class="stat-card">
<div class="stat-value" id="total-sites-used">-</div>
<div class="stat-label">Total Sites Referenced</div>
</div>
</div>
<div class="col-md-4">
<div class="stat-card">
<div class="stat-value" id="recent-updates">-</div>
<div class="stat-label">Updated This Week</div>
</div>
</div>
</div>
<!-- Configs Table -->
<div class="row">
<div class="col-12">
<div class="card">
<div class="card-header">
<div class="d-flex justify-content-between align-items-center">
<h5 class="mb-0">All Configurations</h5>
<input type="text" id="search-input" class="form-control" style="max-width: 300px;"
placeholder="Search configs...">
</div>
</div>
<div class="card-body">
<div id="configs-loading" class="text-center py-5">
<div class="spinner-border text-primary" role="status">
<span class="visually-hidden">Loading...</span>
</div>
<p class="mt-3 text-muted">Loading configurations...</p>
</div>
<div id="configs-error" style="display: none;" class="alert alert-danger">
<strong>Error:</strong> <span id="error-message"></span>
</div>
<div id="configs-content" style="display: none;">
<div class="table-responsive">
<table class="table table-hover">
<thead>
<tr>
<th>Title</th>
<th>Description</th>
<th>Sites</th>
<th>Updated</th>
<th>Actions</th>
</tr>
</thead>
<tbody id="configs-tbody">
<!-- Populated by JavaScript -->
</tbody>
</table>
</div>
<div id="empty-state" style="display: none;" class="text-center py-5">
<i class="bi bi-gear" style="font-size: 3rem; color: #64748b;"></i>
<h5 class="mt-3 text-muted">No configurations defined</h5>
<p class="text-muted">Create your first scan configuration</p>
<button class="btn btn-primary mt-2" data-bs-toggle="modal" data-bs-target="#createConfigModal">
<i class="bi bi-plus-circle"></i> Create Config
</button>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Create Config Modal -->
<div class="modal fade" id="createConfigModal" tabindex="-1">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title">
<i class="bi bi-plus-circle me-2"></i>Create New Configuration
</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal"></button>
</div>
<div class="modal-body">
<form id="create-config-form">
<div class="mb-3">
<label for="config-title" class="form-label">Title <span class="text-danger">*</span></label>
<input type="text" class="form-control" id="config-title" required
placeholder="e.g., Production Weekly Scan">
</div>
<div class="mb-3">
<label for="config-description" class="form-label">Description</label>
<textarea class="form-control" id="config-description" rows="3"
placeholder="Optional description of this configuration"></textarea>
</div>
<div class="mb-3">
<label class="form-label">Sites <span class="text-danger">*</span></label>
<div id="sites-loading-modal" class="text-center py-3">
<div class="spinner-border spinner-border-sm text-primary" role="status">
<span class="visually-hidden">Loading...</span>
</div>
<span class="ms-2 text-muted">Loading available sites...</span>
</div>
<div id="sites-list" style="display: none;">
<!-- Populated by JavaScript -->
</div>
<small class="form-text text-muted">Select at least one site to include in this configuration</small>
</div>
<div class="alert alert-danger" id="create-config-error" style="display: none;">
<span id="create-config-error-message"></span>
</div>
</form>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Cancel</button>
<button type="button" class="btn btn-primary" id="create-config-btn">
<i class="bi bi-check-circle me-1"></i>Create Configuration
</button>
</div>
</div>
</div>
</div>
<!-- Edit Config Modal -->
<div class="modal fade" id="editConfigModal" tabindex="-1">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title">
<i class="bi bi-pencil me-2"></i>Edit Configuration
</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal"></button>
</div>
<div class="modal-body">
<form id="edit-config-form">
<input type="hidden" id="edit-config-id">
<div class="mb-3">
<label for="edit-config-title" class="form-label">Title <span class="text-danger">*</span></label>
<input type="text" class="form-control" id="edit-config-title" required>
</div>
<div class="mb-3">
<label for="edit-config-description" class="form-label">Description</label>
<textarea class="form-control" id="edit-config-description" rows="3"></textarea>
</div>
<div class="mb-3">
<label class="form-label">Sites <span class="text-danger">*</span></label>
<div id="edit-sites-list">
<!-- Populated by JavaScript -->
</div>
</div>
<div class="alert alert-danger" id="edit-config-error" style="display: none;">
<span id="edit-config-error-message"></span>
</div>
</form>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Cancel</button>
<button type="button" class="btn btn-primary" id="edit-config-btn">
<i class="bi bi-check-circle me-1"></i>Save Changes
</button>
</div>
</div>
</div>
</div>
<!-- View Config Modal -->
<div class="modal fade" id="viewConfigModal" tabindex="-1">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title">
<i class="bi bi-eye me-2"></i>Configuration Details
</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal"></button>
</div>
<div class="modal-body">
<div id="view-config-content">
<!-- Populated by JavaScript -->
</div>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Close</button>
</div>
</div>
</div>
</div>
<!-- Delete Confirmation Modal -->
<div class="modal fade" id="deleteConfigModal" tabindex="-1">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title text-danger">
<i class="bi bi-trash me-2"></i>Delete Configuration
</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal"></button>
</div>
<div class="modal-body">
<p>Are you sure you want to delete configuration <strong id="delete-config-name"></strong>?</p>
<p class="text-warning"><i class="bi bi-exclamation-triangle"></i> This action cannot be undone.</p>
<input type="hidden" id="delete-config-id">
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Cancel</button>
<button type="button" class="btn btn-danger" id="confirm-delete-btn">
<i class="bi bi-trash me-1"></i>Delete
</button>
</div>
</div>
</div>
</div>
{% endblock %}
{% block scripts %}
<script>
// Global state
let allConfigs = [];
let allSites = [];
// Load data on page load
document.addEventListener('DOMContentLoaded', function() {
loadSites();
loadConfigs();
});
// Load all sites
async function loadSites() {
try {
const response = await fetch('/api/sites?all=true');
if (!response.ok) throw new Error('Failed to load sites');
const data = await response.json();
allSites = data.sites || [];
renderSitesCheckboxes();
} catch (error) {
console.error('Error loading sites:', error);
document.getElementById('sites-loading-modal').innerHTML =
'<div class="alert alert-danger">Failed to load sites</div>';
}
}
// Render sites checkboxes
function renderSitesCheckboxes(selectedIds = [], isEditMode = false) {
const container = isEditMode ? document.getElementById('edit-sites-list') : document.getElementById('sites-list');
if (!container) return;
if (allSites.length === 0) {
const message = '<div class="alert alert-info">No sites available. <a href="/sites">Create a site first</a>.</div>';
container.innerHTML = message;
if (!isEditMode) {
document.getElementById('sites-loading-modal').style.display = 'none';
container.style.display = 'block';
}
return;
}
const prefix = isEditMode ? 'edit-site' : 'site';
const checkboxClass = isEditMode ? 'edit-site-checkbox' : 'site-checkbox';
let html = '<div style="max-height: 300px; overflow-y: auto;">';
allSites.forEach(site => {
const isChecked = selectedIds.includes(site.id);
html += `
<div class="form-check">
<input class="form-check-input ${checkboxClass}" type="checkbox" value="${site.id}"
id="${prefix}-${site.id}" ${isChecked ? 'checked' : ''}>
<label class="form-check-label" for="${prefix}-${site.id}">
${escapeHtml(site.name)}
<small class="text-muted">(${site.ip_count || 0} IP${site.ip_count !== 1 ? 's' : ''})</small>
</label>
</div>
`;
});
html += '</div>';
container.innerHTML = html;
if (!isEditMode) {
document.getElementById('sites-loading-modal').style.display = 'none';
container.style.display = 'block';
}
}
// Load all configs
async function loadConfigs() {
try {
const response = await fetch('/api/configs');
if (!response.ok) throw new Error('Failed to load configs');
const data = await response.json();
allConfigs = data.configs || [];
renderConfigs();
updateStats();
document.getElementById('configs-loading').style.display = 'none';
document.getElementById('configs-content').style.display = 'block';
} catch (error) {
console.error('Error loading configs:', error);
document.getElementById('configs-loading').style.display = 'none';
document.getElementById('configs-error').style.display = 'block';
document.getElementById('error-message').textContent = error.message;
}
}
// Render configs table
function renderConfigs(filter = '') {
const tbody = document.getElementById('configs-tbody');
const emptyState = document.getElementById('empty-state');
const filteredConfigs = filter
? allConfigs.filter(c =>
c.title.toLowerCase().includes(filter.toLowerCase()) ||
(c.description && c.description.toLowerCase().includes(filter.toLowerCase()))
)
: allConfigs;
if (filteredConfigs.length === 0) {
tbody.innerHTML = '';
emptyState.style.display = 'block';
return;
}
emptyState.style.display = 'none';
tbody.innerHTML = filteredConfigs.map(config => `
<tr>
<td><strong>${escapeHtml(config.title)}</strong></td>
<td>${config.description ? escapeHtml(config.description) : '<span class="text-muted">-</span>'}</td>
<td>
<span class="badge bg-primary">${config.site_count} site${config.site_count !== 1 ? 's' : ''}</span>
</td>
<td>${formatDate(config.updated_at)}</td>
<td>
<button class="btn btn-sm btn-info" onclick="viewConfig(${config.id})" title="View">
<i class="bi bi-eye"></i>
</button>
<button class="btn btn-sm btn-warning" onclick="editConfig(${config.id})" title="Edit">
<i class="bi bi-pencil"></i>
</button>
<button class="btn btn-sm btn-danger" onclick="deleteConfig(${config.id}, '${escapeHtml(config.title).replace(/'/g, "\\'")}');" title="Delete">
<i class="bi bi-trash"></i>
</button>
</td>
</tr>
`).join('');
}
// Update stats
function updateStats() {
document.getElementById('total-configs').textContent = allConfigs.length;
const uniqueSites = new Set();
allConfigs.forEach(c => c.sites.forEach(s => uniqueSites.add(s.id)));
document.getElementById('total-sites-used').textContent = uniqueSites.size;
const oneWeekAgo = new Date();
oneWeekAgo.setDate(oneWeekAgo.getDate() - 7);
const recentUpdates = allConfigs.filter(c => new Date(c.updated_at) > oneWeekAgo).length;
document.getElementById('recent-updates').textContent = recentUpdates;
}
// Search functionality
document.getElementById('search-input').addEventListener('input', function(e) {
renderConfigs(e.target.value);
});
// Create config
document.getElementById('create-config-btn').addEventListener('click', async function() {
const title = document.getElementById('config-title').value.trim();
const description = document.getElementById('config-description').value.trim();
const siteCheckboxes = document.querySelectorAll('.site-checkbox:checked');
const siteIds = Array.from(siteCheckboxes).map(cb => parseInt(cb.value));
if (!title) {
showError('create-config-error', 'Title is required');
return;
}
if (siteIds.length === 0) {
showError('create-config-error', 'At least one site must be selected');
return;
}
try {
const response = await fetch('/api/configs', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ title, description: description || null, site_ids: siteIds })
});
if (!response.ok) {
const data = await response.json();
throw new Error(data.message || 'Failed to create config');
}
// Close modal and reload
bootstrap.Modal.getInstance(document.getElementById('createConfigModal')).hide();
document.getElementById('create-config-form').reset();
renderSitesCheckboxes(); // Reset checkboxes
await loadConfigs();
} catch (error) {
showError('create-config-error', error.message);
}
});
// View config
async function viewConfig(id) {
try {
const response = await fetch(`/api/configs/${id}`);
if (!response.ok) throw new Error('Failed to load config');
const config = await response.json();
let html = `
<div class="mb-3">
<strong>Title:</strong> ${escapeHtml(config.title)}
</div>
<div class="mb-3">
<strong>Description:</strong> ${config.description ? escapeHtml(config.description) : '<span class="text-muted">None</span>'}
</div>
<div class="mb-3">
<strong>Sites (${config.site_count}):</strong>
<ul class="mt-2">
${config.sites.map(site => `
<li>${escapeHtml(site.name)} <small class="text-muted">(${site.ip_count} IP${site.ip_count !== 1 ? 's' : ''})</small></li>
`).join('')}
</ul>
</div>
<div class="mb-3">
<strong>Created:</strong> ${formatDate(config.created_at)}
</div>
<div class="mb-3">
<strong>Last Updated:</strong> ${formatDate(config.updated_at)}
</div>
`;
document.getElementById('view-config-content').innerHTML = html;
new bootstrap.Modal(document.getElementById('viewConfigModal')).show();
} catch (error) {
alert('Error loading config: ' + error.message);
}
}
// Edit config
async function editConfig(id) {
try {
const response = await fetch(`/api/configs/${id}`);
if (!response.ok) throw new Error('Failed to load config');
const config = await response.json();
document.getElementById('edit-config-id').value = config.id;
document.getElementById('edit-config-title').value = config.title;
document.getElementById('edit-config-description').value = config.description || '';
const selectedIds = config.sites.map(s => s.id);
renderSitesCheckboxes(selectedIds, true); // true = isEditMode
new bootstrap.Modal(document.getElementById('editConfigModal')).show();
} catch (error) {
alert('Error loading config: ' + error.message);
}
}
// Save edited config
document.getElementById('edit-config-btn').addEventListener('click', async function() {
const id = document.getElementById('edit-config-id').value;
const title = document.getElementById('edit-config-title').value.trim();
const description = document.getElementById('edit-config-description').value.trim();
const siteCheckboxes = document.querySelectorAll('.edit-site-checkbox:checked');
const siteIds = Array.from(siteCheckboxes).map(cb => parseInt(cb.value));
if (!title) {
showError('edit-config-error', 'Title is required');
return;
}
if (siteIds.length === 0) {
showError('edit-config-error', 'At least one site must be selected');
return;
}
try {
const response = await fetch(`/api/configs/${id}`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ title, description: description || null, site_ids: siteIds })
});
if (!response.ok) {
const data = await response.json();
throw new Error(data.message || 'Failed to update config');
}
// Close modal and reload
bootstrap.Modal.getInstance(document.getElementById('editConfigModal')).hide();
await loadConfigs();
} catch (error) {
showError('edit-config-error', error.message);
}
});
// Delete config
function deleteConfig(id, name) {
document.getElementById('delete-config-id').value = id;
document.getElementById('delete-config-name').textContent = name;
new bootstrap.Modal(document.getElementById('deleteConfigModal')).show();
}
// Confirm delete
document.getElementById('confirm-delete-btn').addEventListener('click', async function() {
const id = document.getElementById('delete-config-id').value;
try {
const response = await fetch(`/api/configs/${id}`, { method: 'DELETE' });
if (!response.ok) {
const data = await response.json();
throw new Error(data.message || 'Failed to delete config');
}
// Close modal and reload
bootstrap.Modal.getInstance(document.getElementById('deleteConfigModal')).hide();
await loadConfigs();
} catch (error) {
alert('Error deleting config: ' + error.message);
}
});
// Utility functions
function showError(elementId, message) {
const errorEl = document.getElementById(elementId);
const messageEl = document.getElementById(elementId + '-message');
messageEl.textContent = message;
errorEl.style.display = 'block';
}
function escapeHtml(text) {
if (!text) return '';
const div = document.createElement('div');
div.textContent = text;
return div.innerHTML;
}
function formatDate(dateStr) {
if (!dateStr) return '-';
const date = new Date(dateStr);
return date.toLocaleDateString() + ' ' + date.toLocaleTimeString();
}
</script>
{% endblock %}
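The modal handlers above define the config API surface used by this page: GET /api/configs returns { configs: [...] }, and POST /api/configs takes { title, description, site_ids }. A minimal sketch of a scripted create; note that unlike the alert endpoints, these calls send no X-API-Key and appear to rely on the session. Site ids are illustrative:

// Sketch: create a config, then re-fetch the list the way loadConfigs() does.
(async () => {
  const res = await fetch('/api/configs', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      title: 'Production Weekly Scan',   // illustrative
      description: null,
      site_ids: [1, 2]                   // illustrative site ids
    })
  });
  if (!res.ok) {
    const data = await res.json();
    throw new Error(data.message || 'Failed to create config');
  }
  const list = await (await fetch('/api/configs')).json();
  console.log(`Now ${(list.configs || []).length} config(s)`);
})();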

View File

@@ -5,7 +5,7 @@
{% block content %}
<div class="row mt-4">
<div class="col-12">
<h1 class="mb-4" style="color: #60a5fa;">Dashboard</h1>
<h1 class="mb-4">Dashboard</h1>
</div>
</div>
@@ -42,7 +42,7 @@
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0" style="color: #60a5fa;">Quick Actions</h5>
<h5 class="mb-0">Quick Actions</h5>
</div>
<div class="card-body">
<button class="btn btn-primary btn-lg" onclick="showTriggerScanModal()">
@@ -63,7 +63,7 @@
<div class="col-md-8">
<div class="card">
<div class="card-header">
<h5 class="mb-0" style="color: #60a5fa;">Scan Activity (Last 30 Days)</h5>
<h5 class="mb-0">Scan Activity (Last 30 Days)</h5>
</div>
<div class="card-body">
<div id="chart-loading" class="text-center py-4">
@@ -80,7 +80,7 @@
<div class="col-md-4">
<div class="card h-100">
<div class="card-header d-flex justify-content-between align-items-center">
<h5 class="mb-0" style="color: #60a5fa;">Upcoming Schedules</h5>
<h5 class="mb-0">Upcoming Schedules</h5>
<a href="{{ url_for('main.schedules') }}" class="btn btn-sm btn-secondary">Manage</a>
</div>
<div class="card-body">
@@ -105,7 +105,7 @@
<div class="col-12">
<div class="card">
<div class="card-header d-flex justify-content-between align-items-center">
<h5 class="mb-0" style="color: #60a5fa;">Recent Scans</h5>
<h5 class="mb-0">Recent Scans</h5>
<button class="btn btn-sm btn-secondary" onclick="refreshScans()">
<span id="refresh-text">Refresh</span>
<span id="refresh-spinner" class="spinner-border spinner-border-sm ms-1" style="display: none;"></span>
@@ -145,35 +145,36 @@
<!-- Trigger Scan Modal -->
<div class="modal fade" id="triggerScanModal" tabindex="-1">
<div class="modal-dialog">
<div class="modal-content" style="background-color: #1e293b; border: 1px solid #334155;">
<div class="modal-header" style="border-bottom: 1px solid #334155;">
<h5 class="modal-title" style="color: #60a5fa;">Trigger New Scan</h5>
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title">Trigger New Scan</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal"></button>
</div>
<div class="modal-body">
<form id="trigger-scan-form">
<div class="mb-3">
<label for="config-file" class="form-label">Config File</label>
<select class="form-select" id="config-file" name="config_file" required>
<option value="">Select a config file...</option>
{% for config in config_files %}
<option value="{{ config }}">{{ config }}</option>
{% endfor %}
<label for="config-select" class="form-label">Scan Configuration</label>
<select class="form-select" id="config-select" name="config_id" required>
<option value="">Loading configurations...</option>
</select>
<div class="form-text text-muted">
{% if config_files %}
Select a scan configuration file
{% else %}
<span class="text-warning">No config files found in /app/configs/</span>
{% endif %}
<div class="form-text text-muted" id="config-help-text">
Select a scan configuration
</div>
<div id="no-configs-warning" class="alert alert-warning mt-2 mb-0" role="alert" style="display: none;">
<i class="bi bi-exclamation-triangle"></i>
<strong>No configurations available</strong>
<p class="mb-2 mt-2">You need to create a configuration before you can trigger a scan.</p>
<a href="{{ url_for('main.configs') }}" class="btn btn-sm btn-primary">
<i class="bi bi-plus-circle"></i> Create Configuration
</a>
</div>
</div>
<div id="trigger-error" class="alert alert-danger" style="display: none;"></div>
</form>
</div>
<div class="modal-footer" style="border-top: 1px solid #334155;">
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Cancel</button>
<button type="button" class="btn btn-primary" onclick="triggerScan()">
<button type="button" class="btn btn-primary" id="trigger-scan-btn" onclick="triggerScan()">
<span id="modal-trigger-text">Trigger Scan</span>
<span id="modal-trigger-spinner" class="spinner-border spinner-border-sm ms-2" style="display: none;"></span>
</button>
@@ -316,23 +317,75 @@
});
}
// Load available configs
async function loadConfigs() {
const selectEl = document.getElementById('config-select');
const helpTextEl = document.getElementById('config-help-text');
const noConfigsWarning = document.getElementById('no-configs-warning');
const triggerBtn = document.getElementById('trigger-scan-btn');
try {
const response = await fetch('/api/configs');
if (!response.ok) {
throw new Error('Failed to load configurations');
}
const data = await response.json();
const configs = data.configs || [];
// Clear existing options
selectEl.innerHTML = '';
if (configs.length === 0) {
selectEl.innerHTML = '<option value="">No configurations available</option>';
selectEl.disabled = true;
triggerBtn.disabled = true;
helpTextEl.style.display = 'none';
noConfigsWarning.style.display = 'block';
} else {
selectEl.innerHTML = '<option value="">Select a configuration...</option>';
configs.forEach(config => {
const option = document.createElement('option');
option.value = config.id;
const siteText = config.site_count === 1 ? 'site' : 'sites';
option.textContent = `${config.title} (${config.site_count} ${siteText})`;
selectEl.appendChild(option);
});
selectEl.disabled = false;
triggerBtn.disabled = false;
helpTextEl.style.display = 'block';
noConfigsWarning.style.display = 'none';
}
} catch (error) {
console.error('Error loading configs:', error);
selectEl.innerHTML = '<option value="">Error loading configurations</option>';
selectEl.disabled = true;
triggerBtn.disabled = true;
helpTextEl.style.display = 'none';
}
}
// Show trigger scan modal
function showTriggerScanModal() {
const modal = new bootstrap.Modal(document.getElementById('triggerScanModal'));
document.getElementById('trigger-error').style.display = 'none';
document.getElementById('trigger-scan-form').reset();
// Load configs when modal is shown
loadConfigs();
modal.show();
}
// Trigger scan
async function triggerScan() {
const configFile = document.getElementById('config-file').value;
const configId = document.getElementById('config-select').value;
const errorEl = document.getElementById('trigger-error');
const btnText = document.getElementById('modal-trigger-text');
const btnSpinner = document.getElementById('modal-trigger-spinner');
if (!configFile) {
errorEl.textContent = 'Please enter a config file path.';
if (!configId) {
errorEl.textContent = 'Please select a configuration.';
errorEl.style.display = 'block';
return;
}
@@ -349,7 +402,7 @@
'Content-Type': 'application/json',
},
body: JSON.stringify({
config_file: configFile
config_id: parseInt(configId)
})
});
@@ -360,6 +413,9 @@
const data = await response.json();
// Hide error before closing modal to prevent flash
errorEl.style.display = 'none';
// Close modal
bootstrap.Modal.getInstance(document.getElementById('triggerScanModal')).hide();
@@ -370,7 +426,9 @@
Scan triggered successfully! (ID: ${data.scan_id})
<button type="button" class="btn-close" data-bs-dismiss="alert"></button>
`;
document.querySelector('.container-fluid').insertBefore(alertDiv, document.querySelector('.row'));
// Insert at the beginning of container-fluid
const container = document.querySelector('.container-fluid');
container.insertBefore(alertDiv, container.firstChild);
// Refresh scans and stats
refreshScans();
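The hunk above shows the request body changing from config_file to config_id, though the endpoint URL itself sits outside the visible context. A sketch of the new-style trigger call, assuming the endpoint is POST /api/scans and that the response carries the scan_id the success banner uses (config id 1 is illustrative):

// Sketch: trigger a scan for a database-backed config instead of a YAML file.
(async () => {
  const res = await fetch('/api/scans', {            // endpoint assumed from context
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ config_id: 1 })
  });
  const data = await res.json();
  console.log('Scan triggered, id:', data.scan_id);
})();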

app/web/templates/help.html (new file, 375 lines)
View File

@@ -0,0 +1,375 @@
{% extends "base.html" %}
{% block title %}Help - SneakyScanner{% endblock %}
{% block content %}
<div class="row mt-4">
<div class="col-12">
<h1 class="mb-4"><i class="bi bi-question-circle"></i> Help & Documentation</h1>
<p class="text-muted">Learn how to use SneakyScanner to manage your network scanning operations.</p>
</div>
</div>
<!-- Quick Navigation -->
<div class="row mb-4">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0"><i class="bi bi-compass"></i> Quick Navigation</h5>
</div>
<div class="card-body">
<div class="row g-2">
<div class="col-md-3 col-6">
<a href="#getting-started" class="btn btn-outline-primary w-100">Getting Started</a>
</div>
<div class="col-md-3 col-6">
<a href="#sites" class="btn btn-outline-primary w-100">Sites</a>
</div>
<div class="col-md-3 col-6">
<a href="#scan-configs" class="btn btn-outline-primary w-100">Scan Configs</a>
</div>
<div class="col-md-3 col-6">
<a href="#running-scans" class="btn btn-outline-primary w-100">Running Scans</a>
</div>
<div class="col-md-3 col-6">
<a href="#scheduling" class="btn btn-outline-primary w-100">Scheduling</a>
</div>
<div class="col-md-3 col-6">
<a href="#comparisons" class="btn btn-outline-primary w-100">Comparisons</a>
</div>
<div class="col-md-3 col-6">
<a href="#alerts" class="btn btn-outline-primary w-100">Alerts</a>
</div>
<div class="col-md-3 col-6">
<a href="#webhooks" class="btn btn-outline-primary w-100">Webhooks</a>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Getting Started -->
<div class="row mb-4" id="getting-started">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0"><i class="bi bi-rocket-takeoff"></i> Getting Started</h5>
</div>
<div class="card-body">
<p>SneakyScanner helps you perform network vulnerability scans and track changes over time. Here's the typical workflow:</p>
<div class="alert alert-info">
<strong>Basic Workflow:</strong>
<ol class="mb-0 mt-2">
<li><strong>Create a Site</strong> - Define a logical grouping for your targets</li>
<li><strong>Add IPs</strong> - Add IP addresses or ranges to your site</li>
<li><strong>Create a Scan Config</strong> - Configure how scans should run using your site</li>
<li><strong>Run a Scan</strong> - Execute scans manually or on a schedule</li>
<li><strong>Review Results</strong> - Analyze findings and compare scans over time</li>
</ol>
</div>
</div>
</div>
</div>
</div>
<!-- Sites -->
<div class="row mb-4" id="sites">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0"><i class="bi bi-globe"></i> Creating Sites & Adding IPs</h5>
</div>
<div class="card-body">
<h6>What is a Site?</h6>
<p>A Site is a logical grouping of IP addresses that you want to scan together. For example, you might create separate sites for "Production Servers", "Development Environment", or "Office Network".</p>
<h6>Creating a Site</h6>
<ol>
<li>Navigate to <strong>Configs → Sites</strong> in the navigation menu</li>
<li>Click the <strong>Create Site</strong> button</li>
<li>Enter a descriptive name for your site</li>
<li>Optionally add a description to help identify the site's purpose</li>
<li>Click <strong>Create</strong> to save the site</li>
</ol>
<h6>Adding IP Addresses</h6>
<p>After creating a site, you need to add the IP addresses you want to scan:</p>
<ol>
<li>Find your site in the Sites list</li>
<li>Click the <strong>Manage IPs</strong> button (or the site name)</li>
<li>Click <strong>Add IP</strong></li>
<li>Enter the IP address or CIDR range (e.g., <code>192.168.1.1</code> or <code>192.168.1.0/24</code>)</li>
<li>Click <strong>Add</strong> to save</li>
</ol>
<div class="alert alert-warning">
<i class="bi bi-exclamation-triangle"></i> <strong>Note:</strong> You can add individual IPs or CIDR notation ranges. Large ranges will result in longer scan times.
</div>
</div>
</div>
</div>
</div>
<!-- Scan Configs -->
<div class="row mb-4" id="scan-configs">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0"><i class="bi bi-gear"></i> Creating Scan Configurations</h5>
</div>
<div class="card-body">
<h6>What is a Scan Config?</h6>
<p>A Scan Configuration defines how a scan should be performed. It links to one or more Sites and specifies scanning parameters like ports to scan, timing options, and other settings.</p>
<h6>Creating a Scan Config</h6>
<ol>
<li>Navigate to <strong>Configs → Scan Configs</strong> in the navigation menu</li>
<li>Click the <strong>Create Config</strong> button</li>
<li>Enter a name for the configuration</li>
<li>Select the <strong>Sites</strong> to include in this configuration</li>
<li>Configure scan parameters:
<ul>
<li><strong>Ports</strong> - Specify ports to scan (e.g., <code>22,80,443</code> or <code>1-1000</code>)</li>
<li><strong>Timing</strong> - Set scan speed/aggressiveness</li>
<li><strong>Additional Options</strong> - Configure other nmap parameters as needed</li>
</ul>
</li>
<li>Click <strong>Create</strong> to save the configuration</li>
</ol>
<div class="alert alert-info">
<i class="bi bi-info-circle"></i> <strong>Tip:</strong> Create different configs for different purposes - a quick config for daily checks and a thorough config for weekly deep scans.
</div>
</div>
</div>
</div>
</div>
<!-- Running Scans -->
<div class="row mb-4" id="running-scans">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0"><i class="bi bi-play-circle"></i> Running Scans</h5>
</div>
<div class="card-body">
<h6>Starting a Manual Scan</h6>
<ol>
<li>Navigate to <strong>Scans</strong> in the navigation menu</li>
<li>Click the <strong>New Scan</strong> button</li>
<li>Select the <strong>Scan Config</strong> you want to use</li>
<li>Click <strong>Start Scan</strong></li>
</ol>
<h6>Monitoring Scan Progress</h6>
<p>While a scan is running:</p>
<ul>
<li>The scan will appear in the Scans list with a <span class="badge bg-warning">Running</span> status</li>
<li>You can view live progress by clicking on the scan</li>
<li>The Dashboard also shows active scans</li>
</ul>
<h6>Viewing Scan Results</h6>
<ol>
<li>Once complete, click on a scan in the Scans list</li>
<li>View discovered hosts, open ports, and services</li>
<li>Export results or compare with previous scans</li>
</ol>
</div>
</div>
</div>
</div>
<!-- Scheduling -->
<div class="row mb-4" id="scheduling">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0"><i class="bi bi-calendar-check"></i> Scheduling Scans</h5>
</div>
<div class="card-body">
<h6>Why Schedule Scans?</h6>
<p>Scheduled scans allow you to automatically run scans at regular intervals, ensuring continuous monitoring of your network without manual intervention.</p>
<h6>Creating a Schedule</h6>
<ol>
<li>Navigate to <strong>Schedules</strong> in the navigation menu</li>
<li>Click the <strong>Create Schedule</strong> button</li>
<li>Enter a name for the schedule</li>
<li>Select the <strong>Scan Config</strong> to use</li>
<li>Configure the schedule:
<ul>
<li><strong>Frequency</strong> - How often to run (daily, weekly, monthly, custom cron)</li>
<li><strong>Time</strong> - When to start the scan</li>
<li><strong>Days</strong> - Which days to run (for weekly schedules)</li>
</ul>
</li>
<li>Enable/disable the schedule as needed</li>
<li>Click <strong>Create</strong> to save</li>
</ol>
<h6>Managing Schedules</h6>
<ul>
<li><strong>Enable/Disable</strong> - Toggle schedules on or off without deleting them</li>
<li><strong>Edit</strong> - Modify the schedule timing or associated config</li>
<li><strong>Delete</strong> - Remove schedules you no longer need</li>
<li><strong>View History</strong> - See past runs triggered by the schedule</li>
</ul>
<div class="alert alert-info">
<i class="bi bi-info-circle"></i> <strong>Tip:</strong> Schedule comprehensive scans during off-peak hours to minimize network impact.
</div>
</div>
</div>
</div>
</div>
<!-- Scan Comparisons -->
<div class="row mb-4" id="comparisons">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0"><i class="bi bi-arrow-left-right"></i> Scan Comparisons</h5>
</div>
<div class="card-body">
<h6>Why Compare Scans?</h6>
<p>Comparing scans helps you identify changes in your network over time - new hosts, closed ports, new services, or potential security issues.</p>
<h6>Comparing Two Scans</h6>
<ol>
<li>Navigate to <strong>Scans</strong> in the navigation menu</li>
<li>Find the scan you want to use as the baseline</li>
<li>Click on the scan to view its details</li>
<li>Click the <strong>Compare</strong> button</li>
<li>Select another scan to compare against</li>
<li>Review the comparison results</li>
</ol>
<h6>Understanding Comparison Results</h6>
<p>The comparison view shows:</p>
<ul>
<li><span class="badge badge-success">New</span> - Hosts or ports that appear in the newer scan but not the older one</li>
<li><span class="badge badge-danger">Removed</span> - Hosts or ports that were in the older scan but not the newer one</li>
<li><span class="badge badge-warning">Changed</span> - Services or states that differ between scans</li>
<li><span class="badge badge-info">Unchanged</span> - Items that remain the same</li>
</ul>
<div class="alert alert-warning">
<i class="bi bi-exclamation-triangle"></i> <strong>Security Note:</strong> Pay close attention to unexpected new open ports or services - these could indicate unauthorized changes or potential compromises.
</div>
</div>
</div>
</div>
</div>
<!-- Alerts -->
<div class="row mb-4" id="alerts">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0"><i class="bi bi-bell"></i> Alerts & Alert Rules</h5>
</div>
<div class="card-body">
<h6>Understanding Alerts</h6>
<p>Alerts notify you when scan results match certain conditions you define. This helps you stay informed about important changes without manually reviewing every scan.</p>
<h6>Viewing Alert History</h6>
<ol>
<li>Navigate to <strong>Alerts → Alert History</strong></li>
<li>View all triggered alerts with timestamps and details</li>
<li>Filter alerts by severity, date, or type</li>
<li>Click on an alert to see full details and the associated scan</li>
</ol>
<h6>Creating Alert Rules</h6>
<ol>
<li>Navigate to <strong>Alerts → Alert Rules</strong></li>
<li>Click <strong>Create Rule</strong></li>
<li>Configure the rule:
<ul>
<li><strong>Name</strong> - A descriptive name for the rule</li>
<li><strong>Condition</strong> - What triggers the alert (e.g., new open port, new host, specific service detected)</li>
<li><strong>Severity</strong> - How critical is this alert (Info, Warning, Critical)</li>
<li><strong>Scope</strong> - Which sites or configs this rule applies to</li>
</ul>
</li>
<li>Enable the rule</li>
<li>Click <strong>Create</strong> to save</li>
</ol>
<h6>Common Alert Rule Examples</h6>
<ul>
<li><strong>New Host Detected</strong> - Alert when a previously unknown host appears</li>
<li><strong>New Open Port</strong> - Alert when a new port opens on any host</li>
<li><strong>Critical Port Open</strong> - Alert for specific high-risk ports (e.g., 23/Telnet, 3389/RDP)</li>
<li><strong>Service Change</strong> - Alert when a service version changes</li>
<li><strong>Host Offline</strong> - Alert when an expected host stops responding</li>
</ul>
<div class="alert alert-info">
<i class="bi bi-info-circle"></i> <strong>Tip:</strong> Start with a few important rules and refine them over time to avoid alert fatigue.
</div>
</div>
</div>
</div>
</div>
<!-- Webhooks -->
<div class="row mb-4" id="webhooks">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0"><i class="bi bi-broadcast"></i> Webhooks</h5>
</div>
<div class="card-body">
<h6>What are Webhooks?</h6>
<p>Webhooks allow SneakyScanner to send notifications to external services when events occur, such as scan completion or alert triggers. This enables integration with tools like Slack, Discord, Microsoft Teams, or custom systems.</p>
<h6>Creating a Webhook</h6>
<ol>
<li>Navigate to <strong>Alerts → Webhooks</strong></li>
<li>Click <strong>Create Webhook</strong></li>
<li>Configure the webhook:
<ul>
<li><strong>Name</strong> - A descriptive name</li>
<li><strong>URL</strong> - The endpoint to send notifications to</li>
<li><strong>Events</strong> - Which events trigger this webhook</li>
<li><strong>Secret</strong> - Optional secret for request signing</li>
</ul>
</li>
<li>Test the webhook to verify it works</li>
<li>Click <strong>Create</strong> to save</li>
</ol>
<h6>Webhook Events</h6>
<ul>
<li><strong>Scan Started</strong> - When a scan begins</li>
<li><strong>Scan Completed</strong> - When a scan finishes</li>
<li><strong>Scan Failed</strong> - When a scan encounters an error</li>
<li><strong>Alert Triggered</strong> - When an alert rule matches</li>
</ul>
<h6>Integration Examples</h6>
<ul>
<li><strong>Slack</strong> - Use a Slack Incoming Webhook URL</li>
<li><strong>Discord</strong> - Use a Discord Webhook URL</li>
<li><strong>Microsoft Teams</strong> - Use a Teams Incoming Webhook</li>
<li><strong>Custom API</strong> - Send to your own endpoint for custom processing</li>
</ul>
</div>
</div>
</div>
</div>
<!-- Back to Top -->
<div class="row mb-4">
<div class="col-12 text-center">
<a href="#" class="btn btn-outline-secondary">
<i class="bi bi-arrow-up"></i> Back to Top
</a>
</div>
</div>
{% endblock %}
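The webhook section above mentions an optional secret for request signing, but this diff does not show how SneakyScanner actually signs outgoing requests. A receiving-side verification sketch under two explicit assumptions: the signature is HMAC-SHA256 of the raw request body, and it arrives hex-encoded in an X-Signature header.

// verify-webhook.js — sketch only; header name and HMAC-SHA256 scheme are assumptions.
const crypto = require('crypto');

function verifySignature(rawBody, signatureHeader, secret) {
  const expected = crypto.createHmac('sha256', secret).update(rawBody).digest();
  const given = Buffer.from(signatureHeader || '', 'hex');
  // timingSafeEqual throws on length mismatch, so compare lengths first
  return expected.length === given.length && crypto.timingSafeEqual(expected, given);
}

module.exports = { verifySignature };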

View File

@@ -0,0 +1,175 @@
{% extends "base.html" %}
{% block title %}Search Results for {{ ip_address }} - SneakyScanner{% endblock %}
{% block content %}
<div class="row mt-4">
<div class="col-12 d-flex justify-content-between align-items-center mb-4">
<h1>
<i class="bi bi-search"></i>
Search Results
{% if ip_address %}
<small class="text-muted">for {{ ip_address }}</small>
{% endif %}
</h1>
<a href="{{ url_for('main.scans') }}" class="btn btn-secondary">
<i class="bi bi-arrow-left"></i> Back to Scans
</a>
</div>
</div>
{% if not ip_address %}
<!-- No IP provided -->
<div class="row">
<div class="col-12">
<div class="card">
<div class="card-body text-center py-5">
<i class="bi bi-exclamation-circle text-warning" style="font-size: 3rem;"></i>
<h4 class="mt-3">No IP Address Provided</h4>
<p class="text-muted">Please enter an IP address in the search box to find related scans.</p>
</div>
</div>
</div>
</div>
{% else %}
<!-- Results Table -->
<div class="row">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">Last 10 Scans Containing {{ ip_address }}</h5>
</div>
<div class="card-body">
<div id="results-loading" class="text-center py-5">
<div class="spinner-border" role="status">
<span class="visually-hidden">Loading...</span>
</div>
<p class="mt-3 text-muted">Searching for scans...</p>
</div>
<div id="results-error" class="alert alert-danger" style="display: none;"></div>
<div id="results-empty" class="text-center py-5 text-muted" style="display: none;">
<i class="bi bi-search" style="font-size: 3rem;"></i>
<h5 class="mt-3">No Scans Found</h5>
<p>No completed scans contain the IP address <strong>{{ ip_address }}</strong>.</p>
</div>
<div id="results-table-container" style="display: none;">
<div class="table-responsive">
<table class="table table-hover">
<thead>
<tr>
<th style="width: 80px;">ID</th>
<th>Title</th>
<th style="width: 200px;">Timestamp</th>
<th style="width: 100px;">Duration</th>
<th style="width: 120px;">Status</th>
<th style="width: 100px;">Actions</th>
</tr>
</thead>
<tbody id="results-tbody">
</tbody>
</table>
</div>
<div class="text-muted mt-3">
Found <span id="result-count">0</span> scan(s) containing this IP address.
</div>
</div>
</div>
</div>
</div>
</div>
{% endif %}
{% endblock %}
{% block scripts %}
<script>
const ipAddress = "{{ ip_address | e }}";
// Load results when page loads
document.addEventListener('DOMContentLoaded', function() {
if (ipAddress) {
loadResults();
}
});
// Load search results from API
async function loadResults() {
const loadingEl = document.getElementById('results-loading');
const errorEl = document.getElementById('results-error');
const emptyEl = document.getElementById('results-empty');
const tableEl = document.getElementById('results-table-container');
// Show loading state
loadingEl.style.display = 'block';
errorEl.style.display = 'none';
emptyEl.style.display = 'none';
tableEl.style.display = 'none';
try {
const response = await fetch(`/api/scans/by-ip/${encodeURIComponent(ipAddress)}`);
if (!response.ok) {
throw new Error('Failed to search for scans');
}
const data = await response.json();
const scans = data.scans || [];
loadingEl.style.display = 'none';
if (scans.length === 0) {
emptyEl.style.display = 'block';
} else {
tableEl.style.display = 'block';
renderResultsTable(scans);
document.getElementById('result-count').textContent = data.count ?? scans.length;
}
} catch (error) {
console.error('Error searching for scans:', error);
loadingEl.style.display = 'none';
errorEl.textContent = 'Failed to search for scans. Please try again.';
errorEl.style.display = 'block';
}
}
// Render results table
function renderResultsTable(scans) {
const tbody = document.getElementById('results-tbody');
tbody.innerHTML = '';
scans.forEach(scan => {
const row = document.createElement('tr');
row.classList.add('scan-row');
// Format timestamp
const timestamp = new Date(scan.timestamp).toLocaleString();
// Format duration
const duration = scan.duration ? `${scan.duration.toFixed(1)}s` : '-';
// Status badge
let statusBadge = '';
if (scan.status === 'completed') {
statusBadge = '<span class="badge bg-success">Completed</span>';
} else if (scan.status === 'running') {
statusBadge = '<span class="badge bg-info">Running</span>';
} else if (scan.status === 'failed') {
statusBadge = '<span class="badge bg-danger">Failed</span>';
} else {
statusBadge = `<span class="badge bg-info">${scan.status}</span>`;
}
row.innerHTML = `
<td class="mono">${scan.id}</td>
<td>${scan.title || 'Untitled Scan'}</td>
<td class="text-muted">${timestamp}</td>
<td class="mono">${duration}</td>
<td>${statusBadge}</td>
<td>
<a href="/scans/${scan.id}" class="btn btn-sm btn-secondary">View</a>
</td>
`;
tbody.appendChild(row);
});
}
</script>
{% endblock %}
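The search page consumes GET /api/scans/by-ip/<ip>, which returns { scans: [...], count: N } as read by loadResults() above. The same lookup from a console or script (the IP is illustrative):

// Sketch: find recent scans containing a given IP via the search endpoint.
(async () => {
  const ip = '192.168.1.10';
  const res = await fetch(`/api/scans/by-ip/${encodeURIComponent(ip)}`);
  if (!res.ok) throw new Error('Failed to search for scans');
  const data = await res.json();
  console.log(`${data.count} scan(s) contain ${ip}`);
})();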

View File

@@ -2,57 +2,6 @@
{% block title %}Login - SneakyScanner{% endblock %}
{% block extra_styles %}
body {
height: 100vh;
display: flex;
align-items: center;
justify-content: center;
background: linear-gradient(135deg, #0f172a 0%, #1e293b 100%);
}
.container-fluid {
max-width: 450px;
padding: 0;
}
.login-card {
background-color: #1e293b;
border: 1px solid #334155;
border-radius: 12px;
padding: 3rem;
box-shadow: 0 8px 32px rgba(0, 0, 0, 0.3);
}
.brand-title {
color: #60a5fa;
font-weight: 600;
font-size: 2rem;
margin-bottom: 0.5rem;
}
.brand-subtitle {
color: #94a3b8;
font-size: 0.95rem;
}
.btn-primary {
padding: 0.75rem;
font-size: 1rem;
font-weight: 500;
}
.footer {
position: fixed;
bottom: 20px;
left: 0;
right: 0;
margin: 0;
padding: 0;
border: none;
}
{% endblock %}
{% set hide_nav = true %}
{% block content %}


@@ -28,6 +28,13 @@
<!-- Error State -->
<div id="comparison-error" class="alert alert-danger" style="display: none;"></div>
<!-- Config Warning -->
<div id="config-warning" class="alert alert-warning" style="display: none;">
<i class="bi bi-exclamation-triangle"></i>
<strong>Different Configurations Detected</strong>
<p class="mb-0" id="config-warning-message"></p>
</div>
<!-- Comparison Content -->
<div id="comparison-content" style="display: none;">
<!-- Drift Score Card -->
@@ -52,14 +59,16 @@
<div class="mb-3">
<label class="form-label text-muted">Older Scan (#<span id="scan1-id"></span>)</label>
<div id="scan1-title" class="fw-bold">-</div>
<small class="text-muted" id="scan1-timestamp">-</small>
<small class="text-muted d-block" id="scan1-timestamp">-</small>
<small class="text-muted d-block"><i class="bi bi-file-earmark-text"></i> <span id="scan1-config">-</span></small>
</div>
</div>
<div class="col-md-4">
<div class="mb-3">
<label class="form-label text-muted">Newer Scan (#<span id="scan2-id"></span>)</label>
<div id="scan2-title" class="fw-bold">-</div>
<small class="text-muted" id="scan2-timestamp">-</small>
<small class="text-muted d-block" id="scan2-timestamp">-</small>
<small class="text-muted d-block"><i class="bi bi-file-earmark-text"></i> <span id="scan2-config">-</span></small>
</div>
</div>
<div class="col-md-4">
@@ -340,6 +349,14 @@
}
function populateComparison(data) {
// Show config warning if configs differ
if (data.config_warning) {
const warningDiv = document.getElementById('config-warning');
const warningMessage = document.getElementById('config-warning-message');
warningMessage.textContent = data.config_warning;
warningDiv.style.display = 'block';
}
// Drift score
const driftScore = data.drift_score || 0;
document.getElementById('drift-score').textContent = driftScore.toFixed(3);
@@ -358,10 +375,12 @@
document.getElementById('scan1-id').textContent = data.scan1.id;
document.getElementById('scan1-title').textContent = data.scan1.title || 'Untitled Scan';
document.getElementById('scan1-timestamp').textContent = new Date(data.scan1.timestamp).toLocaleString();
document.getElementById('scan1-config').textContent = data.scan1.config_id || 'Unknown';
document.getElementById('scan2-id').textContent = data.scan2.id;
document.getElementById('scan2-title').textContent = data.scan2.title || 'Untitled Scan';
document.getElementById('scan2-timestamp').textContent = new Date(data.scan2.timestamp).toLocaleString();
document.getElementById('scan2-config').textContent = data.scan2.config_id || 'Unknown';
// Ports comparison
populatePortsComparison(data.ports);
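
The comparison script reads a handful of top-level fields from the API response. A hypothetical payload consistent with populateComparison() above (field names from the code, values invented):

// Hypothetical comparison payload, inferred from populateComparison();
// config_warning is only present when the two scans used different configs.
const exampleComparison = {
    drift_score: 0.125,
    config_warning: 'These scans were run with different configurations.',
    scan1: { id: 17, title: 'Baseline sweep', timestamp: '2025-11-18T02:00:00Z', config_id: 2 },
    scan2: { id: 23, title: 'Current sweep', timestamp: '2025-11-25T02:00:00Z', config_id: 3 },
    ports: {}  // consumed by populatePortsComparison()
};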

File diff suppressed because it is too large


@@ -5,7 +5,7 @@
{% block content %}
<div class="row mt-4">
<div class="col-12 d-flex justify-content-between align-items-center mb-4">
<h1 style="color: #60a5fa;">All Scans</h1>
<h1>All Scans</h1>
<button class="btn btn-primary" onclick="showTriggerScanModal()">
<span id="trigger-btn-text">Trigger New Scan</span>
<span id="trigger-btn-spinner" class="spinner-border spinner-border-sm ms-2" style="display: none;"></span>
@@ -26,6 +26,7 @@
<option value="running">Running</option>
<option value="completed">Completed</option>
<option value="failed">Failed</option>
<option value="cancelled">Cancelled</option>
</select>
</div>
<div class="col-md-4">
@@ -54,7 +55,7 @@
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0" style="color: #60a5fa;">Scan History</h5>
<h5 class="mb-0">Scan History</h5>
</div>
<div class="card-body">
<div id="scans-loading" class="text-center py-5">
@@ -105,35 +106,36 @@
<!-- Trigger Scan Modal -->
<div class="modal fade" id="triggerScanModal" tabindex="-1">
<div class="modal-dialog">
<div class="modal-content" style="background-color: #1e293b; border: 1px solid #334155;">
<div class="modal-header" style="border-bottom: 1px solid #334155;">
<h5 class="modal-title" style="color: #60a5fa;">Trigger New Scan</h5>
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title">Trigger New Scan</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal"></button>
</div>
<div class="modal-body">
<form id="trigger-scan-form">
<div class="mb-3">
<label for="config-file" class="form-label">Config File</label>
<select class="form-select" id="config-file" name="config_file" required>
<option value="">Select a config file...</option>
{% for config in config_files %}
<option value="{{ config }}">{{ config }}</option>
{% endfor %}
<label for="config-select" class="form-label">Scan Configuration</label>
<select class="form-select" id="config-select" name="config_id" required>
<option value="">Loading configurations...</option>
</select>
<div class="form-text text-muted">
{% if config_files %}
Select a scan configuration file
{% else %}
<span class="text-warning">No config files found in /app/configs/</span>
{% endif %}
<div class="form-text text-muted" id="config-help-text">
Select a scan configuration
</div>
<div id="no-configs-warning" class="alert alert-warning mt-2 mb-0" role="alert" style="display: none;">
<i class="bi bi-exclamation-triangle"></i>
<strong>No configurations available</strong>
<p class="mb-2 mt-2">You need to create a configuration before you can trigger a scan.</p>
<a href="{{ url_for('main.configs') }}" class="btn btn-sm btn-primary">
<i class="bi bi-plus-circle"></i> Create Configuration
</a>
</div>
</div>
<div id="trigger-error" class="alert alert-danger" style="display: none;"></div>
</form>
</div>
<div class="modal-footer" style="border-top: 1px solid #334155;">
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Cancel</button>
<button type="button" class="btn btn-primary" onclick="triggerScan()">
<button type="button" class="btn btn-primary" id="trigger-scan-btn" onclick="triggerScan()">
<span id="modal-trigger-text">Trigger Scan</span>
<span id="modal-trigger-spinner" class="spinner-border spinner-border-sm ms-2" style="display: none;"></span>
</button>
@@ -150,6 +152,25 @@
let statusFilter = '';
let totalCount = 0;
// Show alert notification
function showAlert(type, message) {
const container = document.getElementById('notification-container');
const notification = document.createElement('div');
notification.className = `alert alert-${type} alert-dismissible fade show mb-2`;
notification.innerHTML = `
${message}
<button type="button" class="btn-close" data-bs-dismiss="alert"></button>
`;
container.appendChild(notification);
// Auto-dismiss after 5 seconds
setTimeout(() => {
notification.remove();
}, 5000);
}
// Load initial data when page loads
document.addEventListener('DOMContentLoaded', function() {
loadScans();
@@ -228,20 +249,27 @@
statusBadge = '<span class="badge badge-info">Running</span>';
} else if (scan.status === 'failed') {
statusBadge = '<span class="badge badge-danger">Failed</span>';
} else if (scan.status === 'cancelled') {
statusBadge = '<span class="badge badge-warning">Cancelled</span>';
} else {
statusBadge = `<span class="badge badge-info">${scan.status}</span>`;
}
// Action buttons
let actionButtons = `<a href="/scans/${scan.id}" class="btn btn-sm btn-secondary">View</a>`;
if (scan.status === 'running') {
actionButtons += `<button class="btn btn-sm btn-warning ms-1" onclick="stopScan(${scan.id})">Stop</button>`;
} else {
actionButtons += `<button class="btn btn-sm btn-danger ms-1" onclick="deleteScan(${scan.id})">Delete</button>`;
}
row.innerHTML = `
<td class="mono">${scan.id}</td>
<td>${scan.title || 'Untitled Scan'}</td>
<td class="text-muted">${timestamp}</td>
<td class="mono">${duration}</td>
<td>${statusBadge}</td>
<td>
<a href="/scans/${scan.id}" class="btn btn-sm btn-secondary">View</a>
${scan.status !== 'running' ? `<button class="btn btn-sm btn-danger ms-1" onclick="deleteScan(${scan.id})">Delete</button>` : ''}
</td>
<td>${actionButtons}</td>
`;
tbody.appendChild(row);
@@ -352,23 +380,75 @@
});
}
// Load available configs
async function loadConfigs() {
const selectEl = document.getElementById('config-select');
const helpTextEl = document.getElementById('config-help-text');
const noConfigsWarning = document.getElementById('no-configs-warning');
const triggerBtn = document.getElementById('trigger-scan-btn');
try {
const response = await fetch('/api/configs');
if (!response.ok) {
throw new Error('Failed to load configurations');
}
const data = await response.json();
const configs = data.configs || [];
// Clear existing options
selectEl.innerHTML = '';
if (configs.length === 0) {
selectEl.innerHTML = '<option value="">No configurations available</option>';
selectEl.disabled = true;
triggerBtn.disabled = true;
helpTextEl.style.display = 'none';
noConfigsWarning.style.display = 'block';
} else {
selectEl.innerHTML = '<option value="">Select a configuration...</option>';
configs.forEach(config => {
const option = document.createElement('option');
option.value = config.id;
const siteText = config.site_count === 1 ? 'site' : 'sites';
option.textContent = `${config.title} (${config.site_count} ${siteText})`;
selectEl.appendChild(option);
});
selectEl.disabled = false;
triggerBtn.disabled = false;
helpTextEl.style.display = 'block';
noConfigsWarning.style.display = 'none';
}
} catch (error) {
console.error('Error loading configs:', error);
selectEl.innerHTML = '<option value="">Error loading configurations</option>';
selectEl.disabled = true;
triggerBtn.disabled = true;
helpTextEl.style.display = 'none';
}
}
// Show trigger scan modal
function showTriggerScanModal() {
const modal = new bootstrap.Modal(document.getElementById('triggerScanModal'));
document.getElementById('trigger-error').style.display = 'none';
document.getElementById('trigger-scan-form').reset();
// Load configs when modal is shown
loadConfigs();
modal.show();
}
// Trigger scan
async function triggerScan() {
const configFile = document.getElementById('config-file').value;
const configId = document.getElementById('config-select').value;
const errorEl = document.getElementById('trigger-error');
const btnText = document.getElementById('modal-trigger-text');
const btnSpinner = document.getElementById('modal-trigger-spinner');
if (!configFile) {
errorEl.textContent = 'Please enter a config file path.';
if (!configId) {
errorEl.textContent = 'Please select a configuration.';
errorEl.style.display = 'block';
return;
}
@@ -385,28 +465,25 @@
'Content-Type': 'application/json',
},
body: JSON.stringify({
config_file: configFile
config_id: parseInt(configId)
})
});
if (!response.ok) {
const data = await response.json();
throw new Error(data.error || 'Failed to trigger scan');
throw new Error(data.message || data.error || 'Failed to trigger scan');
}
const data = await response.json();
// Hide error before closing modal to prevent flash
errorEl.style.display = 'none';
// Close modal
bootstrap.Modal.getInstance(document.getElementById('triggerScanModal')).hide();
// Show success message
const alertDiv = document.createElement('div');
alertDiv.className = 'alert alert-success alert-dismissible fade show mt-3';
alertDiv.innerHTML = `
Scan triggered successfully! (ID: ${data.scan_id})
<button type="button" class="btn-close" data-bs-dismiss="alert"></button>
`;
document.querySelector('.container-fluid').insertBefore(alertDiv, document.querySelector('.row'));
showAlert('success', `Scan triggered successfully! (ID: ${data.scan_id})`);
// Refresh scans
loadScans();
@@ -420,6 +497,33 @@
}
}
// Stop scan
async function stopScan(scanId) {
if (!confirm(`Are you sure you want to stop scan ${scanId}?`)) {
return;
}
try {
const response = await fetch(`/api/scans/${scanId}/stop`, {
method: 'POST'
});
if (!response.ok) {
const data = await response.json();
throw new Error(data.message || 'Failed to stop scan');
}
// Show success message
showAlert('success', `Stop signal sent to scan ${scanId}.`);
// Refresh scans after a short delay
setTimeout(() => loadScans(), 1000);
} catch (error) {
console.error('Error stopping scan:', error);
showAlert('danger', `Failed to stop scan: ${error.message}`);
}
}
// Delete scan
async function deleteScan(scanId) {
if (!confirm(`Are you sure you want to delete scan ${scanId}?`)) {
@@ -432,44 +536,20 @@
});
if (!response.ok) {
throw new Error('Failed to delete scan');
const data = await response.json();
throw new Error(data.message || 'Failed to delete scan');
}
// Show success message
const alertDiv = document.createElement('div');
alertDiv.className = 'alert alert-success alert-dismissible fade show mt-3';
alertDiv.innerHTML = `
Scan ${scanId} deleted successfully.
<button type="button" class="btn-close" data-bs-dismiss="alert"></button>
`;
document.querySelector('.container-fluid').insertBefore(alertDiv, document.querySelector('.row'));
showAlert('success', `Scan ${scanId} deleted successfully.`);
// Refresh scans
loadScans();
} catch (error) {
console.error('Error deleting scan:', error);
alert('Failed to delete scan. Please try again.');
showAlert('danger', `Failed to delete scan: ${error.message}`);
}
}
// Custom pagination styles
const style = document.createElement('style');
style.textContent = `
.pagination {
--bs-pagination-bg: #1e293b;
--bs-pagination-border-color: #334155;
--bs-pagination-hover-bg: #334155;
--bs-pagination-hover-border-color: #475569;
--bs-pagination-focus-bg: #334155;
--bs-pagination-active-bg: #3b82f6;
--bs-pagination-active-border-color: #3b82f6;
--bs-pagination-disabled-bg: #0f172a;
--bs-pagination-disabled-border-color: #334155;
--bs-pagination-color: #e2e8f0;
--bs-pagination-hover-color: #e2e8f0;
--bs-pagination-disabled-color: #64748b;
}
`;
document.head.appendChild(style);
</script>
{% endblock %}
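
For reference, the reworked trigger flow above sends a numeric database config ID instead of a YAML path. A minimal sketch of the request, assuming the endpoint is POST /api/scans (the URL itself falls outside the hunks shown; only the JSON body and error handling are confirmed by the diff):

// Minimal sketch of the trigger request built by triggerScan().
// Assumption: the endpoint is POST /api/scans; the body shape
// (config_id as an integer) comes from the diff above.
async function triggerScanExample(configId) {
    const response = await fetch('/api/scans', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ config_id: parseInt(configId, 10) })
    });
    if (!response.ok) {
        const data = await response.json();
        throw new Error(data.message || data.error || 'Failed to trigger scan');
    }
    return response.json();  // e.g. { scan_id: 42 }
}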


@@ -32,13 +32,13 @@
<small class="form-text text-muted">A descriptive name for this schedule</small>
</div>
<!-- Config File -->
<!-- Config -->
<div class="mb-3">
<label for="config-file" class="form-label">Configuration File <span class="text-danger">*</span></label>
<select class="form-select" id="config-file" name="config_file" required>
<option value="">Select a configuration file...</option>
{% for config in config_files %}
<option value="{{ config }}">{{ config }}</option>
<label for="config-id" class="form-label">Configuration <span class="text-danger">*</span></label>
<select class="form-select" id="config-id" name="config_id" required>
<option value="">Select a configuration...</option>
{% for config in configs %}
<option value="{{ config.id }}">{{ config.title }}</option>
{% endfor %}
</select>
<small class="form-text text-muted">The scan configuration to use for this schedule</small>
@@ -369,13 +369,13 @@ document.getElementById('create-schedule-form').addEventListener('submit', async
// Get form data
const formData = {
name: document.getElementById('schedule-name').value.trim(),
config_file: document.getElementById('config-file').value,
config_id: parseInt(document.getElementById('config-id').value),
cron_expression: document.getElementById('cron-expression').value.trim(),
enabled: document.getElementById('schedule-enabled').checked
};
// Validate
if (!formData.name || !formData.config_file || !formData.cron_expression) {
if (!formData.name || !formData.config_id || !formData.cron_expression) {
showNotification('Please fill in all required fields', 'warning');
return;
}
@@ -419,19 +419,16 @@ document.getElementById('create-schedule-form').addEventListener('submit', async
// Show notification
function showNotification(message, type = 'info') {
const container = document.getElementById('notification-container');
const notification = document.createElement('div');
notification.className = `alert alert-${type} alert-dismissible fade show position-fixed`;
notification.style.top = '20px';
notification.style.right = '20px';
notification.style.zIndex = '9999';
notification.style.minWidth = '300px';
notification.className = `alert alert-${type} alert-dismissible fade show mb-2`;
notification.innerHTML = `
${message}
<button type="button" class="btn-close" data-bs-dismiss="alert"></button>
`;
document.body.appendChild(notification);
container.appendChild(notification);
setTimeout(() => {
notification.remove();
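
The submit handler above collects four fields into formData. A minimal sketch of the resulting create request; the endpoint path is an assumption, and only the field names come from this page:

// Minimal sketch of a schedule-creation request, mirroring the formData
// built above. Assumption: the endpoint is POST /api/schedules.
async function createScheduleExample() {
    const response = await fetch('/api/schedules', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            name: 'Nightly perimeter scan',
            config_id: 3,                  // database ID, not a YAML filename
            cron_expression: '0 2 * * *',  // standard five-field cron
            enabled: true
        })
    });
    return response.json();
}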


@@ -298,7 +298,11 @@ async function loadSchedule() {
function populateForm(schedule) {
document.getElementById('schedule-id').value = schedule.id;
document.getElementById('schedule-name').value = schedule.name;
document.getElementById('config-file').value = schedule.config_file;
// Display config name and ID in the readonly config-file field
const configDisplay = schedule.config_name
? `${schedule.config_name} (ID: ${schedule.config_id})`
: `Config ID: ${schedule.config_id}`;
document.getElementById('config-file').value = configDisplay;
document.getElementById('cron-expression').value = schedule.cron_expression;
document.getElementById('schedule-enabled').checked = schedule.enabled;
@@ -554,19 +558,16 @@ async function deleteSchedule() {
// Show notification
function showNotification(message, type = 'info') {
const container = document.getElementById('notification-container');
const notification = document.createElement('div');
notification.className = `alert alert-${type} alert-dismissible fade show position-fixed`;
notification.style.top = '20px';
notification.style.right = '20px';
notification.style.zIndex = '9999';
notification.style.minWidth = '300px';
notification.className = `alert alert-${type} alert-dismissible fade show mb-2`;
notification.innerHTML = `
${message}
<button type="button" class="btn-close" data-bs-dismiss="alert"></button>
`;
document.body.appendChild(notification);
container.appendChild(notification);
setTimeout(() => {
notification.remove();


@@ -6,7 +6,7 @@
<div class="row mt-4">
<div class="col-12">
<div class="d-flex justify-content-between align-items-center mb-4">
<h1 style="color: #60a5fa;">Scheduled Scans</h1>
<h1>Scheduled Scans</h1>
<a href="{{ url_for('main.create_schedule') }}" class="btn btn-primary">
<i class="bi bi-plus-circle"></i> New Schedule
</a>
@@ -47,7 +47,7 @@
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0" style="color: #60a5fa;">All Schedules</h5>
<h5 class="mb-0">All Schedules</h5>
</div>
<div class="card-body">
<div id="schedules-loading" class="text-center py-5">
@@ -198,7 +198,7 @@ function renderSchedules() {
<td>
<strong>${escapeHtml(schedule.name)}</strong>
<br>
<small class="text-muted">${escapeHtml(schedule.config_file)}</small>
<small class="text-muted">Config ID: ${schedule.config_id || 'N/A'}</small>
</td>
<td class="mono"><code>${escapeHtml(schedule.cron_expression)}</code></td>
<td>${formatRelativeTime(schedule.next_run)}</td>
@@ -352,20 +352,16 @@ async function deleteSchedule(scheduleId) {
// Show notification
function showNotification(message, type = 'info') {
// Create notification element
const container = document.getElementById('notification-container');
const notification = document.createElement('div');
notification.className = `alert alert-${type} alert-dismissible fade show position-fixed`;
notification.style.top = '20px';
notification.style.right = '20px';
notification.style.zIndex = '9999';
notification.style.minWidth = '300px';
notification.className = `alert alert-${type} alert-dismissible fade show mb-2`;
notification.innerHTML = `
${message}
<button type="button" class="btn-close" data-bs-dismiss="alert"></button>
`;
document.body.appendChild(notification);
container.appendChild(notification);
// Auto-remove after 5 seconds
setTimeout(() => {


@@ -0,0 +1,60 @@
{% extends "base.html" %}
{% block title %}Setup - SneakyScanner{% endblock %}
{% set hide_nav = true %}
{% block content %}
<div class="login-card">
<div class="text-center mb-4">
<h1 class="brand-title">SneakyScanner</h1>
<p class="brand-subtitle">Initial Setup</p>
</div>
<div class="alert alert-info mb-4">
<i class="bi bi-info-circle me-1"></i>
<strong>Welcome!</strong> Please set an application password to secure your scanner.
</div>
{% with messages = get_flashed_messages(with_categories=true) %}
{% if messages %}
{% for category, message in messages %}
<div class="alert alert-{{ 'danger' if category == 'error' else category }} alert-dismissible fade show" role="alert">
{{ message }}
<button type="button" class="btn-close" data-bs-dismiss="alert"></button>
</div>
{% endfor %}
{% endif %}
{% endwith %}
<form method="post" action="{{ url_for('auth.setup') }}">
<div class="mb-3">
<label for="password" class="form-label">Password</label>
<input type="password"
class="form-control form-control-lg"
id="password"
name="password"
required
minlength="8"
autofocus
placeholder="Enter password (min 8 characters)">
<div class="form-text">Password must be at least 8 characters long.</div>
</div>
<div class="mb-4">
<label for="confirm_password" class="form-label">Confirm Password</label>
<input type="password"
class="form-control form-control-lg"
id="confirm_password"
name="confirm_password"
required
minlength="8"
placeholder="Confirm your password">
</div>
<button type="submit" class="btn btn-primary btn-lg w-100">
Set Password
</button>
</form>
</div>
{% endblock %}

app/web/templates/sites.html (new file, 1187 lines)

File diff suppressed because it is too large


@@ -0,0 +1,9 @@
{
"title": "{{ scan.title }} - {{ alert.type|title|replace('_', ' ') }}",
"message": "{{ alert.message }}{% if alert.ip_address %} on {{ alert.ip_address }}{% endif %}{% if alert.port %}:{{ alert.port }}{% endif %}",
"priority": {% if alert.severity == 'critical' %}5{% elif alert.severity == 'warning' %}3{% else %}1{% endif %},
"severity": "{{ alert.severity }}",
"scan_id": {{ scan.id }},
"alert_id": {{ alert.id }},
"timestamp": "{{ timestamp.isoformat() }}"
}
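
For illustration, a hypothetical critical alert of type new_port would render the template above roughly as follows (all values invented). Note the priority mapping in the template: critical maps to 5, warning to 3, and anything else to 1.

{
    "title": "Weekly Sweep - New Port",
    "message": "New open port detected on 10.0.0.5:8443",
    "priority": 5,
    "severity": "critical",
    "scan_id": 42,
    "alert_id": 7,
    "timestamp": "2025-11-25T20:49:46+00:00"
}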


@@ -0,0 +1,25 @@
{
"event": "alert.created",
"alert": {
"id": {{ alert.id }},
"type": "{{ alert.type }}",
"severity": "{{ alert.severity }}",
"message": "{{ alert.message }}",
{% if alert.ip_address %}"ip_address": "{{ alert.ip_address }}",{% endif %}
{% if alert.port %}"port": {{ alert.port }},{% endif %}
"acknowledged": {{ alert.acknowledged|lower }},
"created_at": "{{ alert.created_at.isoformat() }}"
},
"scan": {
"id": {{ scan.id }},
"title": "{{ scan.title }}",
"timestamp": "{{ scan.timestamp.isoformat() }}",
"status": "{{ scan.status }}"
},
"rule": {
"id": {{ rule.id }},
"name": "{{ rule.name }}",
"type": "{{ rule.type }}",
"threshold": {{ rule.threshold if rule.threshold else 'null' }}
}
}
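
Likewise, a hypothetical rendering of the webhook payload above, with ip_address set and port unset, shows how the conditional lines simply drop absent keys while keeping the JSON valid (values invented):

{
    "event": "alert.created",
    "alert": {
        "id": 7,
        "type": "new_port",
        "severity": "critical",
        "message": "New open port detected",
        "ip_address": "10.0.0.5",
        "acknowledged": false,
        "created_at": "2025-11-25T20:49:46+00:00"
    },
    "scan": {
        "id": 42,
        "title": "Weekly Sweep",
        "timestamp": "2025-11-25T20:45:00+00:00",
        "status": "completed"
    },
    "rule": {
        "id": 3,
        "name": "Alert on new ports",
        "type": "new_port",
        "threshold": null
    }
}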

Some files were not shown because too many files have changed in this diff