Compare commits


17 Commits

Author SHA1 Message Date
4b197e0b3d Merge pull request 'beta' (#10) from beta into master
Reviewed-on: #10
2025-11-25 20:49:46 +00:00
30f0987a99 Merge pull request 'nightly' (#9) from nightly into beta
Reviewed-on: #9
2025-11-25 20:49:25 +00:00
9e2fc348b7 Merge branch 'bug/long-scans-break' into nightly 2025-11-25 14:48:00 -06:00
847e05abbe Changes Made
1. app/web/utils/validators.py - Added 'finalizing' to valid_statuses list
2. app/web/models.py - Updated status field comment to document all valid statuses
3. app/web/jobs/scan_job.py
   - Added transition to 'finalizing' status before output file generation
   - Sets current_phase = 'generating_outputs' during this phase
   - Wrapped output generation in try-except with proper error handling
   - If output generation fails, the scan is marked 'completed' with a warning message (scan data is still valid)

4. app/web/api/scans.py
   - Added _recover_orphaned_scan() helper function for smart recovery
   - Modified stop_running_scan() to:
     - Allow stopping scans with status 'running' OR 'finalizing'
     - When the scanner is not in the registry, perform smart recovery instead of returning 404
     - Smart recovery checks for output files and marks the scan 'completed' if found, 'cancelled' if not

5. app/web/services/scan_service.py
   - Enhanced cleanup_orphaned_scans() with smart recovery logic
   - Now finds scans in both 'running' and 'finalizing' status
   - Returns a dict with stats: {'recovered': N, 'failed': N, 'total': N}

6. app/web/app.py - Updated caller to handle the new dict return type from cleanup_orphaned_scans()

Expected Behavior Now

1. Normal scan flow: running → finalizing → completed
2. Stop on active scan: Sends cancel signal, becomes 'cancelled'
3. Stop on orphaned scan with files: Smart recovery → 'completed'
4. Stop on orphaned scan without files: → 'cancelled'
5. App restart with orphans: Startup cleanup uses smart recovery
2025-11-25 14:47:36 -06:00
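A minimal sketch of the validator change in item 1: the function name validate_scan_status comes from the import in app/web/services/scan_service.py shown further down, but its body here is an assumption based only on the statuses this commit names.

# Assumed shape of app/web/utils/validators.py after this commit (illustrative).
valid_statuses = ['running', 'finalizing', 'completed', 'failed', 'cancelled']

def validate_scan_status(status: str) -> bool:
    """Return True if the given status is a recognized scan state."""
    return status in valid_statuses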
07c2bcfd11 Merge branch 'beta' 2025-11-24 12:54:58 -06:00
a560bae800 Merge branch 'nightly' into beta 2025-11-24 12:54:33 -06:00
56828e4184 Merge branch 'feat/fix-cron-schedules' into nightly 2025-11-24 12:53:44 -06:00
5e3a70f837 Fix schedule management and update documentation for database-backed configs
This commit addresses multiple issues with schedule management and updates
  documentation to reflect the transition from YAML-based to database-backed
  configuration system.

  **Documentation Updates:**
  - Update DEPLOYMENT.md to remove all references to YAML config files
  - Document that all configurations are now stored in SQLite database
  - Update API examples to use config IDs instead of YAML filenames
  - Remove configs directory from backup/restore procedures
  - Update volume management section to reflect database-only storage

  **Cron Expression Handling:**
  - Add comprehensive documentation for APScheduler cron format conversion
  - Document that from_crontab() accepts standard format (Sunday=0) and converts automatically
  - Add validate_cron_expression() helper method with detailed error messages
  - Include helpful hints for day-of-week field errors in validation
  - Fix all deprecated datetime.utcnow() calls, replace with datetime.now(timezone.utc)

  **Timezone-Aware DateTime Fixes:**
  - Fix "can't subtract offset-naive and offset-aware datetimes" error
  - Add timezone awareness to croniter.get_next() return values
  - Make _get_relative_time() defensive to handle both naive and aware datetimes
  - Ensure all datetime comparisons use timezone-aware objects

  **Schedule Edit UI Fixes:**
  - Fix JavaScript error "Cannot set properties of null (setting 'value')"
  - Change reference from non-existent 'config-id' to correct 'config-file' element
  - Add config_name field to schedule API responses for better UX
  - Eagerly load Schedule.config relationship using joinedload()
  - Fix AttributeError: use schedule.config.title instead of .name
  - Display config title and ID in schedule edit form

  **Technical Details:**
  - app/web/services/schedule_service.py: 6 datetime.utcnow() fixes, validation enhancements
  - app/web/services/scheduler_service.py: Documentation, validation, timezone fixes
  - app/web/templates/schedule_edit.html: JavaScript element reference fix
  - docs/DEPLOYMENT.md: Complete rewrite of config management sections

  Fixes scheduling for Sunday at midnight (cron: 0 0 * * 0)
  Fixes schedule edit page JavaScript errors
  Improves user experience with config title display
2025-11-24 12:53:06 -06:00
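As a quick sanity check of the Sunday-at-midnight fix, APScheduler's from_crontab() can be exercised on its own; this snippet assumes only that APScheduler 3.x is installed.

from apscheduler.triggers.cron import CronTrigger

# Standard crontab format: minute hour day month day_of_week, with 0 = Sunday
trigger = CronTrigger.from_crontab('0 0 * * 0')
print(trigger)  # from_crontab() converts day_of_week to APScheduler's internal form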
451c7e92ff Merge pull request 'Merging beta into master' (#8) from beta into master
Reviewed-on: #8
2025-11-21 22:07:06 +00:00
8b89fd506d Merge pull request 'nightly merge into beta' (#7) from nightly into beta
Reviewed-on: #7
2025-11-21 22:05:43 +00:00
f24bd11dfd Add unique IP count and duplicate detection to sites page
The sites page previously showed total IP count which included duplicates
across multiple sites, leading to inflated numbers. Now displays unique
IP count as the primary metric with duplicate count shown when present.

- Add get_global_ip_stats() method to SiteService for unique/duplicate counts
- Update /api/sites?all=true endpoint to include IP statistics
- Update sites.html to display unique IPs with optional duplicate indicator
- Update API documentation with new response fields

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-21 16:03:53 -06:00
9bd2f67150 Add quick button to mark unexpected ports as expected
Allow users to add ports to expected list directly from scan results page
instead of navigating through site config pages. The button appears next
to unexpected ports and updates the site IP configuration via the API.

- Add site_id and site_ip_id to scan result data for linking to config
- Add "Mark Expected" button next to unexpected ports in scan detail view
- Implement markPortExpected() JS function to update site IP settings
2025-11-21 15:40:37 -06:00
3058c69c39 Add scan cancellation feature
- Replace subprocess.run() with Popen for cancellable processes
- Add cancel() method to SneakyScanner with process termination
- Track running scanners in registry for stop signal delivery
- Handle ScanCancelledError to set scan status to 'cancelled'
- Add POST /api/scans/<id>/stop endpoint
- Add 'cancelled' as valid scan status
- Add Stop button to scans list and detail views
- Show cancelled status with warning badge in UI
2025-11-21 14:17:26 -06:00
04dc238aea Add configurable UDP scanning and numeric IP sorting
- Add UDP_SCAN_ENABLED and UDP_PORTS environment variables to control UDP scanning
- UDP scanning disabled by default for faster scans
- Support port ranges (100-200), lists (53,67,68), or mixed formats
- Sort IPs numerically by octets in site management modal
2025-11-21 13:33:38 -06:00
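The diffs below pass UDP_PORTS to masscan as-is, so the following parser is purely illustrative: a hypothetical helper for validating the documented range/list/mixed formats, not code from this commit.

def parse_port_spec(spec: str) -> list:
    """Expand a spec like '53,67,68', '100-200', or '53,100-110' into a port list."""
    ports = []
    for part in spec.split(','):
        part = part.strip()
        if '-' in part:
            start, end = part.split('-', 1)
            ports.extend(range(int(start), int(end) + 1))
        elif part:
            ports.append(int(part))
    return ports

print(parse_port_spec('53,67,68'))  # [53, 67, 68]
print(parse_port_spec('100-102'))   # [100, 101, 102]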
c592000c96 Add real-time scan progress tracking
- Add ScanProgress model and progress fields to Scan model
- Implement progress callback in scanner to report phase completion
- Update scan_job to write per-IP results to database during execution
- Add /api/scans/<id>/progress endpoint for progress polling
- Add progress section to scan detail page with live updates
- Progress table shows current phase, completion bar, and per-IP results
- Poll every 3 seconds during active scans
- Sort IPs numerically for proper ordering
- Add database migration for new tables/columns
2025-11-21 12:49:27 -06:00
4c6b4bf35d Add IP address search feature with global search box
- Add API endpoint GET /api/scans/by-ip/{ip_address} to retrieve
  last 10 scans containing a specific IP
- Add ScanService.get_scans_by_ip() method with ScanIP join query
- Add search box to global navigation header
- Create dedicated search results page at /search/ip
- Update API documentation with new endpoint
2025-11-21 11:29:03 -06:00
3adb51ece2 Add configurable nmap host timeout setting
Move nmap host timeout from hardcoded 5m to configurable setting
in app/web/config.py with a default of 2m for faster scans.
2025-11-21 11:11:37 -06:00
25 changed files with 1947 additions and 220 deletions

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,58 @@
"""Add scan progress tracking
Revision ID: 012
Revises: 011
Create Date: 2024-01-01 00:00:00.000000
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '012'
down_revision = '011'
branch_labels = None
depends_on = None
def upgrade():
    # Add progress tracking columns to scans table
    op.add_column('scans', sa.Column('current_phase', sa.String(50), nullable=True,
                                     comment='Current scan phase: ping, tcp_scan, udp_scan, service_detection, http_analysis'))
    op.add_column('scans', sa.Column('total_ips', sa.Integer(), nullable=True,
                                     comment='Total number of IPs to scan'))
    op.add_column('scans', sa.Column('completed_ips', sa.Integer(), nullable=True, default=0,
                                     comment='Number of IPs completed in current phase'))

    # Create scan_progress table for per-IP progress tracking
    op.create_table(
        'scan_progress',
        sa.Column('id', sa.Integer(), primary_key=True, autoincrement=True),
        sa.Column('scan_id', sa.Integer(), sa.ForeignKey('scans.id'), nullable=False, index=True),
        sa.Column('ip_address', sa.String(45), nullable=False, comment='IP address being scanned'),
        sa.Column('site_name', sa.String(255), nullable=True, comment='Site name this IP belongs to'),
        sa.Column('phase', sa.String(50), nullable=False,
                  comment='Phase: ping, tcp_scan, udp_scan, service_detection, http_analysis'),
        sa.Column('status', sa.String(20), nullable=False, default='pending',
                  comment='pending, in_progress, completed, failed'),
        sa.Column('ping_result', sa.Boolean(), nullable=True, comment='Ping response result'),
        sa.Column('tcp_ports', sa.Text(), nullable=True, comment='JSON array of discovered TCP ports'),
        sa.Column('udp_ports', sa.Text(), nullable=True, comment='JSON array of discovered UDP ports'),
        sa.Column('services', sa.Text(), nullable=True, comment='JSON array of detected services'),
        sa.Column('created_at', sa.DateTime(), nullable=False, server_default=sa.func.now(),
                  comment='Entry creation time'),
        sa.Column('updated_at', sa.DateTime(), nullable=False, server_default=sa.func.now(),
                  onupdate=sa.func.now(), comment='Last update time'),
        sa.UniqueConstraint('scan_id', 'ip_address', name='uix_scan_progress_ip')
    )

def downgrade():
    # Drop scan_progress table
    op.drop_table('scan_progress')
    # Remove progress tracking columns from scans table
    op.drop_column('scans', 'completed_ips')
    op.drop_column('scans', 'total_ips')
    op.drop_column('scans', 'current_phase')
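Assuming a standard Alembic setup with an alembic.ini at the project root (not shown in this compare view), the migration above can be applied or rolled back programmatically:

from alembic import command
from alembic.config import Config

cfg = Config('alembic.ini')      # path is an assumption; adjust to the project layout
command.upgrade(cfg, '012')      # adds the progress columns and scan_progress table
# command.downgrade(cfg, '011')  # reverts this revision if needed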

View File

@@ -6,14 +6,17 @@ SneakyScanner - Masscan-based network scanner with YAML configuration
import argparse
import json
import logging
import os
import signal
import subprocess
import sys
import tempfile
import threading
import time
import zipfile
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Any
from typing import Dict, List, Any, Callable, Optional
import xml.etree.ElementTree as ET
import yaml
@@ -22,12 +25,18 @@ from libnmap.parser import NmapParser
from src.screenshot_capture import ScreenshotCapture
from src.report_generator import HTMLReportGenerator
from web.config import NMAP_HOST_TIMEOUT
# Force unbuffered output for Docker
sys.stdout.reconfigure(line_buffering=True)
sys.stderr.reconfigure(line_buffering=True)
class ScanCancelledError(Exception):
"""Raised when a scan is cancelled by the user."""
pass
class SneakyScanner:
"""Wrapper for masscan to perform network scans based on YAML config or database config"""
@@ -61,6 +70,34 @@ class SneakyScanner:
self.screenshot_capture = None
# Cancellation support
self._cancelled = False
self._cancel_lock = threading.Lock()
self._active_process = None
self._process_lock = threading.Lock()
def cancel(self):
"""
Cancel the running scan.
Terminates any active subprocess and sets cancellation flag.
"""
with self._cancel_lock:
self._cancelled = True
with self._process_lock:
if self._active_process and self._active_process.poll() is None:
try:
# Terminate the process group
os.killpg(os.getpgid(self._active_process.pid), signal.SIGTERM)
except (ProcessLookupError, OSError):
pass
def is_cancelled(self) -> bool:
"""Check if scan has been cancelled."""
with self._cancel_lock:
return self._cancelled
def _load_config(self) -> Dict[str, Any]:
"""
Load and validate configuration from file or database.
@@ -381,11 +418,31 @@ class SneakyScanner:
raise ValueError(f"Invalid protocol: {protocol}")
print(f"Running: {' '.join(cmd)}", flush=True)
result = subprocess.run(cmd, capture_output=True, text=True)
# Use Popen for cancellation support
with self._process_lock:
self._active_process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
start_new_session=True
)
stdout, stderr = self._active_process.communicate()
returncode = self._active_process.returncode
with self._process_lock:
self._active_process = None
# Check if cancelled
if self.is_cancelled():
return []
print(f"Masscan {protocol.upper()} scan completed", flush=True)
if result.returncode != 0:
print(f"Masscan stderr: {result.stderr}", file=sys.stderr)
if returncode != 0:
print(f"Masscan stderr: {stderr}", file=sys.stderr)
# Parse masscan JSON output
results = []
@@ -433,11 +490,31 @@ class SneakyScanner:
]
print(f"Running: {' '.join(cmd)}", flush=True)
result = subprocess.run(cmd, capture_output=True, text=True)
# Use Popen for cancellation support
with self._process_lock:
self._active_process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
start_new_session=True
)
stdout, stderr = self._active_process.communicate()
returncode = self._active_process.returncode
with self._process_lock:
self._active_process = None
# Check if cancelled
if self.is_cancelled():
return {}
print(f"Masscan PING scan completed", flush=True)
if result.returncode != 0:
print(f"Masscan stderr: {result.stderr}", file=sys.stderr, flush=True)
if returncode != 0:
print(f"Masscan stderr: {stderr}", file=sys.stderr, flush=True)
# Parse results
responding_ips = set()
@@ -475,6 +552,10 @@ class SneakyScanner:
all_services = {}
for ip, ports in ip_ports.items():
# Check if cancelled before each host
if self.is_cancelled():
break
if not ports:
all_services[ip] = []
continue
@@ -496,14 +577,33 @@ class SneakyScanner:
'--version-intensity', '5', # Balanced speed/accuracy
'-p', port_list,
'-oX', xml_output, # XML output
'--host-timeout', '5m', # Timeout per host
'--host-timeout', NMAP_HOST_TIMEOUT, # Timeout per host
ip
]
result = subprocess.run(cmd, capture_output=True, text=True, timeout=600)
# Use Popen for cancellation support
with self._process_lock:
self._active_process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
start_new_session=True
)
if result.returncode != 0:
print(f" Nmap warning for {ip}: {result.stderr}", file=sys.stderr, flush=True)
stdout, stderr = self._active_process.communicate(timeout=600)
returncode = self._active_process.returncode
with self._process_lock:
self._active_process = None
# Check if cancelled
if self.is_cancelled():
Path(xml_output).unlink(missing_ok=True)
break
if returncode != 0:
print(f" Nmap warning for {ip}: {stderr}", file=sys.stderr, flush=True)
# Parse XML output
services = self._parse_nmap_xml(xml_output)
@@ -576,29 +676,57 @@ class SneakyScanner:
return services
def _is_likely_web_service(self, service: Dict) -> bool:
def _is_likely_web_service(self, service: Dict, ip: str = None) -> bool:
"""
Check if a service is likely HTTP/HTTPS based on nmap detection or common web ports
Check if a service is a web server by actually making an HTTP request
Args:
service: Service dictionary from nmap results
ip: IP address to test (required for HTTP probe)
Returns:
True if service appears to be web-related
True if service responds to HTTP/HTTPS requests
"""
# Check service name
import requests
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
# Quick check for known web service names first
web_services = ['http', 'https', 'ssl', 'http-proxy', 'https-alt',
'http-alt', 'ssl/http', 'ssl/https']
service_name = service.get('service', '').lower()
# If no IP provided, can't do HTTP probe
port = service.get('port')
if not ip or not port:
# check just the service name if no IP - honestly shouldn't get here, but just in case...
if service_name in web_services:
return True
return False
# Check common non-standard web ports
web_ports = [80, 443, 8000, 8006, 8008, 8080, 8081, 8443, 8888, 9443]
port = service.get('port')
# Actually try to connect - this is the definitive test
# Try HTTPS first, then HTTP
for protocol in ['https', 'http']:
url = f"{protocol}://{ip}:{port}/"
try:
response = requests.get(
url,
timeout=3,
verify=False,
allow_redirects=False
)
# Any status code means it's a web server
# (including 404, 500, etc. - still a web server)
return True
except requests.exceptions.SSLError:
# SSL error on HTTPS, try HTTP next
continue
except (requests.exceptions.ConnectionError,
requests.exceptions.Timeout,
requests.exceptions.RequestException):
continue
return port in web_ports
return False
def _detect_http_https(self, ip: str, port: int, timeout: int = 5) -> str:
"""
@@ -786,7 +914,7 @@ class SneakyScanner:
ip_results = {}
for service in services:
if not self._is_likely_web_service(service):
if not self._is_likely_web_service(service, ip):
continue
port = service['port']
@@ -832,10 +960,17 @@ class SneakyScanner:
return all_results
def scan(self) -> Dict[str, Any]:
def scan(self, progress_callback: Optional[Callable] = None) -> Dict[str, Any]:
"""
Perform complete scan based on configuration
Args:
progress_callback: Optional callback function for progress updates.
Called with (phase, ip, data) where:
- phase: 'init', 'ping', 'tcp_scan', 'udp_scan', 'service_detection', 'http_analysis'
- ip: IP address being processed (or None for phase start)
- data: Dict with progress data (results, counts, etc.)
Returns:
Dictionary containing scan results
"""
@@ -872,17 +1007,61 @@ class SneakyScanner:
all_ips = sorted(list(all_ips))
print(f"Total IPs to scan: {len(all_ips)}", flush=True)
# Report initialization with total IP count
if progress_callback:
progress_callback('init', None, {
'total_ips': len(all_ips),
'ip_to_site': ip_to_site
})
# Perform ping scan
print(f"\n[1/5] Performing ping scan on {len(all_ips)} IPs...", flush=True)
if progress_callback:
progress_callback('ping', None, {'status': 'starting'})
ping_results = self._run_ping_scan(all_ips)
# Check for cancellation
if self.is_cancelled():
print("\nScan cancelled by user", flush=True)
raise ScanCancelledError("Scan cancelled by user")
# Report ping results
if progress_callback:
progress_callback('ping', None, {
'status': 'completed',
'results': ping_results
})
# Perform TCP scan (all ports)
print(f"\n[2/5] Performing TCP scan on {len(all_ips)} IPs (ports 0-65535)...", flush=True)
if progress_callback:
progress_callback('tcp_scan', None, {'status': 'starting'})
tcp_results = self._run_masscan(all_ips, '0-65535', 'tcp')
# Perform UDP scan (all ports)
print(f"\n[3/5] Performing UDP scan on {len(all_ips)} IPs (ports 0-65535)...", flush=True)
udp_results = self._run_masscan(all_ips, '0-65535', 'udp')
# Check for cancellation
if self.is_cancelled():
print("\nScan cancelled by user", flush=True)
raise ScanCancelledError("Scan cancelled by user")
# Perform UDP scan (if enabled)
udp_enabled = os.environ.get('UDP_SCAN_ENABLED', 'false').lower() == 'true'
udp_ports = os.environ.get('UDP_PORTS', '53,67,68,69,123,161,500,514,1900')
if udp_enabled:
print(f"\n[3/5] Performing UDP scan on {len(all_ips)} IPs (ports {udp_ports})...", flush=True)
if progress_callback:
progress_callback('udp_scan', None, {'status': 'starting'})
udp_results = self._run_masscan(all_ips, udp_ports, 'udp')
# Check for cancellation
if self.is_cancelled():
print("\nScan cancelled by user", flush=True)
raise ScanCancelledError("Scan cancelled by user")
else:
print(f"\n[3/5] Skipping UDP scan (disabled)...", flush=True)
if progress_callback:
progress_callback('udp_scan', None, {'status': 'skipped'})
udp_results = []
# Organize results by IP
results_by_ip = {}
@@ -917,20 +1096,56 @@ class SneakyScanner:
results_by_ip[ip]['actual']['tcp_ports'].sort()
results_by_ip[ip]['actual']['udp_ports'].sort()
# Report TCP/UDP scan results with discovered ports per IP
if progress_callback:
tcp_udp_results = {}
for ip in all_ips:
tcp_udp_results[ip] = {
'tcp_ports': results_by_ip[ip]['actual']['tcp_ports'],
'udp_ports': results_by_ip[ip]['actual']['udp_ports']
}
progress_callback('tcp_scan', None, {
'status': 'completed',
'results': tcp_udp_results
})
# Perform service detection on TCP ports
print(f"\n[4/5] Performing service detection on discovered TCP ports...", flush=True)
if progress_callback:
progress_callback('service_detection', None, {'status': 'starting'})
ip_ports = {ip: results_by_ip[ip]['actual']['tcp_ports'] for ip in all_ips}
service_results = self._run_nmap_service_detection(ip_ports)
# Check for cancellation
if self.is_cancelled():
print("\nScan cancelled by user", flush=True)
raise ScanCancelledError("Scan cancelled by user")
# Add service information to results
for ip, services in service_results.items():
if ip in results_by_ip:
results_by_ip[ip]['actual']['services'] = services
# Report service detection results
if progress_callback:
progress_callback('service_detection', None, {
'status': 'completed',
'results': service_results
})
# Perform HTTP/HTTPS analysis on web services
print(f"\n[5/5] Analyzing HTTP/HTTPS services and SSL/TLS configuration...", flush=True)
if progress_callback:
progress_callback('http_analysis', None, {'status': 'starting'})
http_results = self._run_http_analysis(service_results)
# Report HTTP analysis completion
if progress_callback:
progress_callback('http_analysis', None, {
'status': 'completed',
'results': http_results
})
# Merge HTTP analysis into service results
for ip, port_results in http_results.items():
if ip in results_by_ip:
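Tying the scanner changes together, a caller might wire a minimal progress callback into scan(); the config_id value is a placeholder, and the constructor usage mirrors the scan_job.py diff further down.

from src.scanner import SneakyScanner, ScanCancelledError

def on_progress(phase, ip, data):
    # Follows the documented contract: phase name, optional IP, and a data dict
    print(f"[{phase}] status={data.get('status')} total_ips={data.get('total_ips')}")

scanner = SneakyScanner(config_id=1)  # placeholder config_id
try:
    report, timestamp = scanner.scan(progress_callback=on_progress)
except ScanCancelledError:
    print('Scan was cancelled')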

View File

@@ -5,18 +5,107 @@ Handles endpoints for triggering scans, listing scan history, and retrieving
scan results.
"""
import json
import logging
from datetime import datetime
from pathlib import Path
from flask import Blueprint, current_app, jsonify, request
from sqlalchemy.exc import SQLAlchemyError
from web.auth.decorators import api_auth_required
from web.models import Scan, ScanProgress
from web.services.scan_service import ScanService
from web.utils.pagination import validate_page_params
from web.jobs.scan_job import stop_scan
bp = Blueprint('scans', __name__)
logger = logging.getLogger(__name__)
def _recover_orphaned_scan(scan: Scan, session) -> dict:
"""
Recover an orphaned scan by checking for output files.
If output files exist: mark as 'completed' (smart recovery)
If no output files: mark as 'cancelled'
Args:
scan: The orphaned Scan object
session: Database session
Returns:
Dictionary with recovery result for API response
"""
# Check for existing output files
output_exists = False
output_files_found = []
# Check paths stored in database
if scan.json_path and Path(scan.json_path).exists():
output_exists = True
output_files_found.append('json')
if scan.html_path and Path(scan.html_path).exists():
output_files_found.append('html')
if scan.zip_path and Path(scan.zip_path).exists():
output_files_found.append('zip')
# Also check by timestamp pattern if paths not stored yet
if not output_exists and scan.started_at:
output_dir = Path('/app/output')
if output_dir.exists():
timestamp_pattern = scan.started_at.strftime('%Y%m%d')
for json_file in output_dir.glob(f'scan_report_{timestamp_pattern}*.json'):
output_exists = True
output_files_found.append('json')
# Update scan record with found paths
scan.json_path = str(json_file)
html_file = json_file.with_suffix('.html')
if html_file.exists():
scan.html_path = str(html_file)
output_files_found.append('html')
zip_file = json_file.with_suffix('.zip')
if zip_file.exists():
scan.zip_path = str(zip_file)
output_files_found.append('zip')
break
if output_exists:
# Smart recovery: outputs exist, mark as completed
scan.status = 'completed'
scan.completed_at = datetime.utcnow()
if scan.started_at:
scan.duration = (datetime.utcnow() - scan.started_at).total_seconds()
scan.error_message = None
session.commit()
logger.info(f"Scan {scan.id}: Recovered as completed (files: {output_files_found})")
return {
'scan_id': scan.id,
'status': 'completed',
'message': f'Scan recovered as completed (output files found: {", ".join(output_files_found)})',
'recovery_type': 'smart_recovery'
}
else:
# No outputs: mark as cancelled
scan.status = 'cancelled'
scan.completed_at = datetime.utcnow()
if scan.started_at:
scan.duration = (datetime.utcnow() - scan.started_at).total_seconds()
scan.error_message = 'Scan process was interrupted before completion. No output files were generated.'
session.commit()
logger.info(f"Scan {scan.id}: Marked as cancelled (orphaned, no output files)")
return {
'scan_id': scan.id,
'status': 'cancelled',
'message': 'Orphaned scan cancelled (no output files found)',
'recovery_type': 'orphan_cleanup'
}
@bp.route('', methods=['GET'])
@api_auth_required
def list_scans():
@@ -240,6 +329,77 @@ def delete_scan(scan_id):
}), 500
@bp.route('/<int:scan_id>/stop', methods=['POST'])
@api_auth_required
def stop_running_scan(scan_id):
"""
Stop a running scan with smart recovery for orphaned scans.
If the scan is actively running in the registry, sends a cancel signal.
If the scan shows as running/finalizing but is not in the registry (orphaned),
performs smart recovery: marks as 'completed' if output files exist,
otherwise marks as 'cancelled'.
Args:
scan_id: Scan ID to stop
Returns:
JSON response with stop status or recovery result
"""
try:
session = current_app.db_session
# Check if scan exists
scan = session.query(Scan).filter_by(id=scan_id).first()
if not scan:
logger.warning(f"Scan not found for stop request: {scan_id}")
return jsonify({
'error': 'Not found',
'message': f'Scan with ID {scan_id} not found'
}), 404
# Allow stopping scans with status 'running' or 'finalizing'
if scan.status not in ('running', 'finalizing'):
logger.warning(f"Cannot stop scan {scan_id}: status is '{scan.status}'")
return jsonify({
'error': 'Invalid state',
'message': f"Cannot stop scan: status is '{scan.status}'"
}), 400
# Get database URL from app config
db_url = current_app.config['SQLALCHEMY_DATABASE_URI']
# Attempt to stop the scan
stopped = stop_scan(scan_id, db_url)
if stopped:
logger.info(f"Stop signal sent to scan {scan_id}")
return jsonify({
'scan_id': scan_id,
'message': 'Stop signal sent to scan',
'status': 'stopping'
}), 200
else:
# Scanner not in registry - this is an orphaned scan
# Attempt smart recovery
logger.warning(f"Scan {scan_id} not in registry, attempting smart recovery")
recovery_result = _recover_orphaned_scan(scan, session)
return jsonify(recovery_result), 200
except SQLAlchemyError as e:
logger.error(f"Database error stopping scan {scan_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to stop scan'
}), 500
except Exception as e:
logger.error(f"Unexpected error stopping scan {scan_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:scan_id>/status', methods=['GET'])
@api_auth_required
def get_scan_status(scan_id):
@@ -281,6 +441,141 @@ def get_scan_status(scan_id):
}), 500
@bp.route('/<int:scan_id>/progress', methods=['GET'])
@api_auth_required
def get_scan_progress(scan_id):
"""
Get detailed progress for a running scan including per-IP results.
Args:
scan_id: Scan ID
Returns:
JSON response with scan progress including:
- current_phase: Current scan phase
- total_ips: Total IPs being scanned
- completed_ips: Number of IPs completed in current phase
- progress_entries: List of per-IP progress with discovered results
"""
try:
session = current_app.db_session
# Get scan record
scan = session.query(Scan).filter_by(id=scan_id).first()
if not scan:
logger.warning(f"Scan not found for progress check: {scan_id}")
return jsonify({
'error': 'Not found',
'message': f'Scan with ID {scan_id} not found'
}), 404
# Get progress entries
progress_entries = session.query(ScanProgress).filter_by(scan_id=scan_id).all()
# Build progress data
entries = []
for entry in progress_entries:
entry_data = {
'ip_address': entry.ip_address,
'site_name': entry.site_name,
'phase': entry.phase,
'status': entry.status,
'ping_result': entry.ping_result
}
# Parse JSON fields
if entry.tcp_ports:
entry_data['tcp_ports'] = json.loads(entry.tcp_ports)
else:
entry_data['tcp_ports'] = []
if entry.udp_ports:
entry_data['udp_ports'] = json.loads(entry.udp_ports)
else:
entry_data['udp_ports'] = []
if entry.services:
entry_data['services'] = json.loads(entry.services)
else:
entry_data['services'] = []
entries.append(entry_data)
# Sort entries by site name then IP (numerically)
def ip_sort_key(ip_str):
"""Convert IP to tuple of integers for proper numeric sorting."""
try:
return tuple(int(octet) for octet in ip_str.split('.'))
except (ValueError, AttributeError):
return (0, 0, 0, 0)
entries.sort(key=lambda x: (x['site_name'] or '', ip_sort_key(x['ip_address'])))
response = {
'scan_id': scan_id,
'status': scan.status,
'current_phase': scan.current_phase or 'pending',
'total_ips': scan.total_ips or 0,
'completed_ips': scan.completed_ips or 0,
'progress_entries': entries
}
logger.debug(f"Retrieved progress for scan {scan_id}: phase={scan.current_phase}, {scan.completed_ips}/{scan.total_ips} IPs")
return jsonify(response)
except SQLAlchemyError as e:
logger.error(f"Database error retrieving scan progress {scan_id}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to retrieve scan progress'
}), 500
except Exception as e:
logger.error(f"Unexpected error retrieving scan progress {scan_id}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/by-ip/<ip_address>', methods=['GET'])
@api_auth_required
def get_scans_by_ip(ip_address):
"""
Get last 10 scans containing a specific IP address.
Args:
ip_address: IP address to search for
Returns:
JSON response with list of scans containing the IP
"""
try:
# Get scans from service
scan_service = ScanService(current_app.db_session)
scans = scan_service.get_scans_by_ip(ip_address)
logger.info(f"Retrieved {len(scans)} scans for IP: {ip_address}")
return jsonify({
'ip_address': ip_address,
'scans': scans,
'count': len(scans)
})
except SQLAlchemyError as e:
logger.error(f"Database error retrieving scans for IP {ip_address}: {str(e)}")
return jsonify({
'error': 'Database error',
'message': 'Failed to retrieve scans'
}), 500
except Exception as e:
logger.error(f"Unexpected error retrieving scans for IP {ip_address}: {str(e)}", exc_info=True)
return jsonify({
'error': 'Internal server error',
'message': 'An unexpected error occurred'
}), 500
@bp.route('/<int:scan_id1>/compare/<int:scan_id2>', methods=['GET'])
@api_auth_required
def compare_scans(scan_id1, scan_id2):
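The new endpoints can be exercised over HTTP as below; the base URL and API-key header are assumptions, since the mechanism behind api_auth_required is not part of this compare view.

import requests

BASE = 'http://localhost:5000/api/scans'  # assumed host, port, and prefix
HEADERS = {'X-API-Key': 'changeme'}       # assumed auth header

# Stop a running scan (falls back to smart recovery for orphaned scans)
print(requests.post(f'{BASE}/1/stop', headers=HEADERS).json())

# Poll per-IP progress for a running scan
print(requests.get(f'{BASE}/1/progress', headers=HEADERS).json())

# Last 10 completed scans containing a given IP
print(requests.get(f'{BASE}/by-ip/192.0.2.10', headers=HEADERS).json())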

View File

@@ -36,9 +36,15 @@ def list_sites():
if request.args.get('all', '').lower() == 'true':
site_service = SiteService(current_app.db_session)
sites = site_service.list_all_sites()
ip_stats = site_service.get_global_ip_stats()
logger.info(f"Listed all sites (count={len(sites)})")
return jsonify({'sites': sites})
return jsonify({
'sites': sites,
'total_ips': ip_stats['total_ips'],
'unique_ips': ip_stats['unique_ips'],
'duplicate_ips': ip_stats['duplicate_ips']
})
# Get and validate query parameters
page = request.args.get('page', 1, type=int)
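The implementation of get_global_ip_stats() is not included in this compare view; a plausible sketch of the counting behind the three response fields, assuming one row per site/IP membership, is:

from collections import Counter

def global_ip_stats(ip_addresses):
    """Illustrative: total, unique, and duplicated IPs across all sites."""
    counts = Counter(ip_addresses)
    total = sum(counts.values())
    unique = len(counts)
    return {'total_ips': total, 'unique_ips': unique, 'duplicate_ips': total - unique}

print(global_ip_stats(['10.0.0.1', '10.0.0.1', '10.0.0.2']))
# {'total_ips': 3, 'unique_ips': 2, 'duplicate_ips': 1}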

View File

@@ -307,9 +307,12 @@ def init_scheduler(app: Flask) -> None:
with app.app_context():
# Clean up any orphaned scans from previous crashes/restarts
scan_service = ScanService(app.db_session)
orphaned_count = scan_service.cleanup_orphaned_scans()
if orphaned_count > 0:
app.logger.warning(f"Cleaned up {orphaned_count} orphaned scan(s) on startup")
cleanup_result = scan_service.cleanup_orphaned_scans()
if cleanup_result['total'] > 0:
app.logger.warning(
f"Cleaned up {cleanup_result['total']} orphaned scan(s) on startup: "
f"{cleanup_result['recovered']} recovered, {cleanup_result['failed']} failed"
)
# Load all enabled schedules from database
scheduler.load_schedules_on_startup()

View File

@@ -11,3 +11,6 @@ APP_VERSION = '1.0.0-beta'
# Repository URL
REPO_URL = 'https://git.sneakygeek.net/sneakygeek/SneakyScan'
# Scanner settings
NMAP_HOST_TIMEOUT = '2m' # Timeout per host for nmap service detection

View File

@@ -5,7 +5,9 @@ This module handles the execution of scans in background threads,
updating database status and handling errors.
"""
import json
import logging
import threading
import traceback
from datetime import datetime
from pathlib import Path
@@ -13,13 +15,168 @@ from pathlib import Path
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from src.scanner import SneakyScanner
from web.models import Scan
from src.scanner import SneakyScanner, ScanCancelledError
from web.models import Scan, ScanProgress
from web.services.scan_service import ScanService
from web.services.alert_service import AlertService
logger = logging.getLogger(__name__)
# Registry for tracking running scanners (scan_id -> SneakyScanner instance)
_running_scanners = {}
_running_scanners_lock = threading.Lock()
def get_running_scanner(scan_id: int):
"""Get a running scanner instance by scan ID."""
with _running_scanners_lock:
return _running_scanners.get(scan_id)
def stop_scan(scan_id: int, db_url: str) -> bool:
"""
Stop a running scan.
Args:
scan_id: ID of the scan to stop
db_url: Database connection URL
Returns:
True if scan was cancelled, False if not found or already stopped
"""
logger.info(f"Attempting to stop scan {scan_id}")
# Get the scanner instance
scanner = get_running_scanner(scan_id)
if not scanner:
logger.warning(f"Scanner for scan {scan_id} not found in registry")
return False
# Cancel the scanner
scanner.cancel()
logger.info(f"Cancellation signal sent to scan {scan_id}")
return True
def create_progress_callback(scan_id: int, session):
"""
Create a progress callback function for updating scan progress in database.
Args:
scan_id: ID of the scan record
session: Database session
Returns:
Callback function that accepts (phase, ip, data)
"""
ip_to_site = {}
def progress_callback(phase: str, ip: str, data: dict):
"""Update scan progress in database."""
nonlocal ip_to_site
try:
# Get scan record
scan = session.query(Scan).filter_by(id=scan_id).first()
if not scan:
return
# Handle initialization phase
if phase == 'init':
scan.total_ips = data.get('total_ips', 0)
scan.completed_ips = 0
scan.current_phase = 'ping'
ip_to_site = data.get('ip_to_site', {})
# Create progress entries for all IPs
for ip_addr, site_name in ip_to_site.items():
progress = ScanProgress(
scan_id=scan_id,
ip_address=ip_addr,
site_name=site_name,
phase='pending',
status='pending'
)
session.add(progress)
session.commit()
return
# Update current phase
if data.get('status') == 'starting':
scan.current_phase = phase
scan.completed_ips = 0
session.commit()
return
# Handle phase completion with results
if data.get('status') == 'completed':
results = data.get('results', {})
if phase == 'ping':
# Update progress entries with ping results
for ip_addr, ping_result in results.items():
progress = session.query(ScanProgress).filter_by(
scan_id=scan_id, ip_address=ip_addr
).first()
if progress:
progress.ping_result = ping_result
progress.phase = 'ping'
progress.status = 'completed'
scan.completed_ips = len(results)
elif phase == 'tcp_scan':
# Update progress entries with TCP/UDP port results
for ip_addr, port_data in results.items():
progress = session.query(ScanProgress).filter_by(
scan_id=scan_id, ip_address=ip_addr
).first()
if progress:
progress.tcp_ports = json.dumps(port_data.get('tcp_ports', []))
progress.udp_ports = json.dumps(port_data.get('udp_ports', []))
progress.phase = 'tcp_scan'
progress.status = 'completed'
scan.completed_ips = len(results)
elif phase == 'service_detection':
# Update progress entries with service detection results
for ip_addr, services in results.items():
progress = session.query(ScanProgress).filter_by(
scan_id=scan_id, ip_address=ip_addr
).first()
if progress:
# Simplify service data for storage
service_list = []
for svc in services:
service_list.append({
'port': svc.get('port'),
'service': svc.get('service', 'unknown'),
'product': svc.get('product', ''),
'version': svc.get('version', '')
})
progress.services = json.dumps(service_list)
progress.phase = 'service_detection'
progress.status = 'completed'
scan.completed_ips = len(results)
elif phase == 'http_analysis':
# Mark HTTP analysis as complete
scan.current_phase = 'completed'
scan.completed_ips = scan.total_ips
session.commit()
except Exception as e:
logger.error(f"Progress callback error for scan {scan_id}: {str(e)}")
# Don't re-raise - we don't want to break the scan
session.rollback()
return progress_callback
def execute_scan(scan_id: int, config_id: int, db_url: str = None):
"""
@@ -66,20 +223,61 @@ def execute_scan(scan_id: int, config_id: int, db_url: str = None):
# Initialize scanner with database config
scanner = SneakyScanner(config_id=config_id)
# Execute scan
# Register scanner in the running registry
with _running_scanners_lock:
_running_scanners[scan_id] = scanner
logger.debug(f"Scan {scan_id}: Registered in running scanners registry")
# Create progress callback
progress_callback = create_progress_callback(scan_id, session)
# Execute scan with progress tracking
logger.info(f"Scan {scan_id}: Running scanner...")
start_time = datetime.utcnow()
report, timestamp = scanner.scan()
report, timestamp = scanner.scan(progress_callback=progress_callback)
end_time = datetime.utcnow()
scan_duration = (end_time - start_time).total_seconds()
logger.info(f"Scan {scan_id}: Scanner completed in {scan_duration:.2f} seconds")
# Generate output files (JSON, HTML, ZIP)
# Transition to 'finalizing' status before output generation
try:
scan = session.query(Scan).filter_by(id=scan_id).first()
if scan:
scan.status = 'finalizing'
scan.current_phase = 'generating_outputs'
session.commit()
logger.info(f"Scan {scan_id}: Status changed to 'finalizing'")
except Exception as e:
logger.error(f"Scan {scan_id}: Failed to update status to finalizing: {e}")
session.rollback()
# Generate output files (JSON, HTML, ZIP) with error handling
output_paths = {}
output_generation_failed = False
try:
logger.info(f"Scan {scan_id}: Generating output files...")
output_paths = scanner.generate_outputs(report, timestamp)
except Exception as e:
output_generation_failed = True
logger.error(f"Scan {scan_id}: Output generation failed: {str(e)}")
logger.error(f"Scan {scan_id}: Traceback:\n{traceback.format_exc()}")
# Still mark scan as completed with warning since scan data is valid
try:
scan = session.query(Scan).filter_by(id=scan_id).first()
if scan:
scan.status = 'completed'
scan.error_message = f"Scan completed but output file generation failed: {str(e)}"
scan.completed_at = datetime.utcnow()
if scan.started_at:
scan.duration = (datetime.utcnow() - scan.started_at).total_seconds()
session.commit()
logger.info(f"Scan {scan_id}: Marked as completed with output generation warning")
except Exception as db_error:
logger.error(f"Scan {scan_id}: Failed to update status after output error: {db_error}")
# Save results to database
# Save results to database (only if output generation succeeded)
if not output_generation_failed:
logger.info(f"Scan {scan_id}: Saving results to database...")
scan_service = ScanService(session)
scan_service._save_scan_to_db(report, scan_id, status='completed', output_paths=output_paths)
@@ -97,6 +295,19 @@ def execute_scan(scan_id: int, config_id: int, db_url: str = None):
logger.info(f"Scan {scan_id}: Completed successfully")
except ScanCancelledError:
# Scan was cancelled by user
logger.info(f"Scan {scan_id}: Cancelled by user")
scan = session.query(Scan).filter_by(id=scan_id).first()
if scan:
scan.status = 'cancelled'
scan.error_message = 'Scan cancelled by user'
scan.completed_at = datetime.utcnow()
if scan.started_at:
scan.duration = (datetime.utcnow() - scan.started_at).total_seconds()
session.commit()
except FileNotFoundError as e:
# Config file not found
error_msg = f"Configuration file not found: {str(e)}"
@@ -126,6 +337,12 @@ def execute_scan(scan_id: int, config_id: int, db_url: str = None):
logger.error(f"Scan {scan_id}: Failed to update error status in database: {str(db_error)}")
finally:
# Unregister scanner from registry
with _running_scanners_lock:
if scan_id in _running_scanners:
del _running_scanners[scan_id]
logger.debug(f"Scan {scan_id}: Unregistered from running scanners registry")
# Always close the session
session.close()
logger.info(f"Scan {scan_id}: Background job completed, session closed")

View File

@@ -45,7 +45,7 @@ class Scan(Base):
id = Column(Integer, primary_key=True, autoincrement=True)
timestamp = Column(DateTime, nullable=False, index=True, comment="Scan start time (UTC)")
duration = Column(Float, nullable=True, comment="Total scan duration in seconds")
status = Column(String(20), nullable=False, default='running', comment="running, completed, failed")
status = Column(String(20), nullable=False, default='running', comment="running, finalizing, completed, failed, cancelled")
config_id = Column(Integer, ForeignKey('scan_configs.id'), nullable=True, index=True, comment="FK to scan_configs table")
title = Column(Text, nullable=True, comment="Scan title from config")
json_path = Column(Text, nullable=True, comment="Path to JSON report")
@@ -59,6 +59,11 @@ class Scan(Base):
completed_at = Column(DateTime, nullable=True, comment="Scan execution completion time")
error_message = Column(Text, nullable=True, comment="Error message if scan failed")
# Progress tracking fields
current_phase = Column(String(50), nullable=True, comment="Current scan phase: ping, tcp_scan, udp_scan, service_detection, http_analysis")
total_ips = Column(Integer, nullable=True, comment="Total number of IPs to scan")
completed_ips = Column(Integer, nullable=True, default=0, comment="Number of IPs completed in current phase")
# Relationships
sites = relationship('ScanSite', back_populates='scan', cascade='all, delete-orphan')
ips = relationship('ScanIP', back_populates='scan', cascade='all, delete-orphan')
@@ -70,6 +75,7 @@ class Scan(Base):
schedule = relationship('Schedule', back_populates='scans')
config = relationship('ScanConfig', back_populates='scans')
site_associations = relationship('ScanSiteAssociation', back_populates='scan', cascade='all, delete-orphan')
progress_entries = relationship('ScanProgress', back_populates='scan', cascade='all, delete-orphan')
def __repr__(self):
return f"<Scan(id={self.id}, title='{self.title}', status='{self.status}')>"
@@ -244,6 +250,43 @@ class ScanTLSVersion(Base):
return f"<ScanTLSVersion(id={self.id}, tls_version='{self.tls_version}', supported={self.supported})>"
class ScanProgress(Base):
"""
Real-time progress tracking for individual IPs during scan execution.
Stores intermediate results as they become available, allowing users to
see progress and results before the full scan completes.
"""
__tablename__ = 'scan_progress'
id = Column(Integer, primary_key=True, autoincrement=True)
scan_id = Column(Integer, ForeignKey('scans.id'), nullable=False, index=True)
ip_address = Column(String(45), nullable=False, comment="IP address being scanned")
site_name = Column(String(255), nullable=True, comment="Site name this IP belongs to")
phase = Column(String(50), nullable=False, comment="Phase: ping, tcp_scan, udp_scan, service_detection, http_analysis")
status = Column(String(20), nullable=False, default='pending', comment="pending, in_progress, completed, failed")
# Results data (stored as JSON)
ping_result = Column(Boolean, nullable=True, comment="Ping response result")
tcp_ports = Column(Text, nullable=True, comment="JSON array of discovered TCP ports")
udp_ports = Column(Text, nullable=True, comment="JSON array of discovered UDP ports")
services = Column(Text, nullable=True, comment="JSON array of detected services")
created_at = Column(DateTime, nullable=False, default=datetime.utcnow, comment="Entry creation time")
updated_at = Column(DateTime, nullable=False, default=datetime.utcnow, onupdate=datetime.utcnow, comment="Last update time")
# Relationships
scan = relationship('Scan', back_populates='progress_entries')
# Index for efficient lookups
__table_args__ = (
UniqueConstraint('scan_id', 'ip_address', name='uix_scan_progress_ip'),
)
def __repr__(self):
return f"<ScanProgress(id={self.id}, ip='{self.ip_address}', phase='{self.phase}', status='{self.status}')>"
# ============================================================================
# Reusable Site Definition Tables
# ============================================================================
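A minimal sketch of writing and reading a ScanProgress row with a plain SQLAlchemy session; the engine URL is a placeholder, and a scans row with id=1 is assumed to exist for the foreign key.

import json
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from web.models import ScanProgress

engine = create_engine('sqlite:////app/data/sneakyscan.db')  # placeholder URL
session = sessionmaker(bind=engine)()

entry = ScanProgress(scan_id=1, ip_address='192.0.2.10', site_name='HQ',
                     phase='tcp_scan', status='completed',
                     tcp_ports=json.dumps([22, 443]))
session.add(entry)
session.commit()

row = session.query(ScanProgress).filter_by(scan_id=1, ip_address='192.0.2.10').first()
print(row, json.loads(row.tcp_ports))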

View File

@@ -7,7 +7,7 @@ Provides dashboard and scan viewing pages.
import logging
import os
from flask import Blueprint, current_app, redirect, render_template, send_from_directory, url_for
from flask import Blueprint, current_app, redirect, render_template, request, send_from_directory, url_for
from web.auth.decorators import login_required
@@ -83,6 +83,19 @@ def compare_scans(scan_id1, scan_id2):
return render_template('scan_compare.html', scan_id1=scan_id1, scan_id2=scan_id2)
@bp.route('/search/ip')
@login_required
def search_ip():
"""
IP search results page - shows scans containing a specific IP address.
Returns:
Rendered search results template
"""
ip_address = request.args.get('ip', '').strip()
return render_template('ip_search_results.html', ip_address=ip_address)
@bp.route('/schedules')
@login_required
def schedules():

View File

@@ -16,7 +16,7 @@ from sqlalchemy.orm import Session, joinedload
from web.models import (
Scan, ScanSite, ScanIP, ScanPort, ScanService as ScanServiceModel,
ScanCertificate, ScanTLSVersion, Site, ScanSiteAssociation
ScanCertificate, ScanTLSVersion, Site, ScanSiteAssociation, SiteIP
)
from web.utils.pagination import paginate, PaginatedResult
from web.utils.validators import validate_scan_status
@@ -257,55 +257,125 @@ class ScanService:
elif scan.status == 'failed':
status_info['progress'] = 'Failed'
status_info['error_message'] = scan.error_message
elif scan.status == 'cancelled':
status_info['progress'] = 'Cancelled'
status_info['error_message'] = scan.error_message
return status_info
def cleanup_orphaned_scans(self) -> int:
def get_scans_by_ip(self, ip_address: str, limit: int = 10) -> List[Dict[str, Any]]:
"""
Clean up orphaned scans that are stuck in 'running' status.
Get the last N scans containing a specific IP address.
Args:
ip_address: IP address to search for
limit: Maximum number of scans to return (default: 10)
Returns:
List of scan summary dictionaries, most recent first
"""
scans = (
self.db.query(Scan)
.join(ScanIP, Scan.id == ScanIP.scan_id)
.filter(ScanIP.ip_address == ip_address)
.filter(Scan.status == 'completed')
.order_by(Scan.timestamp.desc())
.limit(limit)
.all()
)
return [self._scan_to_summary_dict(scan) for scan in scans]
def cleanup_orphaned_scans(self) -> dict:
"""
Clean up orphaned scans with smart recovery.
For scans stuck in 'running' or 'finalizing' status:
- If output files exist: mark as 'completed' (smart recovery)
- If no output files: mark as 'failed'
This should be called on application startup to handle scans that
were running when the system crashed or was restarted.
Scans in 'running' status are marked as 'failed' with an appropriate
error message indicating they were orphaned.
Returns:
Number of orphaned scans cleaned up
Dictionary with cleanup results: {'recovered': N, 'failed': N, 'total': N}
"""
# Find all scans with status='running'
orphaned_scans = self.db.query(Scan).filter(Scan.status == 'running').all()
# Find all scans with status='running' or 'finalizing'
orphaned_scans = self.db.query(Scan).filter(
Scan.status.in_(['running', 'finalizing'])
).all()
if not orphaned_scans:
logger.info("No orphaned scans found")
return 0
return {'recovered': 0, 'failed': 0, 'total': 0}
count = len(orphaned_scans)
logger.warning(f"Found {count} orphaned scan(s) in 'running' status, marking as failed")
logger.warning(f"Found {count} orphaned scan(s), attempting smart recovery")
recovered_count = 0
failed_count = 0
output_dir = Path('/app/output')
# Mark each orphaned scan as failed
for scan in orphaned_scans:
# Check for existing output files
output_exists = False
output_files_found = []
# Check paths stored in database
if scan.json_path and Path(scan.json_path).exists():
output_exists = True
output_files_found.append('json')
if scan.html_path and Path(scan.html_path).exists():
output_files_found.append('html')
if scan.zip_path and Path(scan.zip_path).exists():
output_files_found.append('zip')
# Also check by timestamp pattern if paths not stored yet
if not output_exists and scan.started_at and output_dir.exists():
timestamp_pattern = scan.started_at.strftime('%Y%m%d')
for json_file in output_dir.glob(f'scan_report_{timestamp_pattern}*.json'):
output_exists = True
output_files_found.append('json')
# Update scan record with found paths
scan.json_path = str(json_file)
html_file = json_file.with_suffix('.html')
if html_file.exists():
scan.html_path = str(html_file)
output_files_found.append('html')
zip_file = json_file.with_suffix('.zip')
if zip_file.exists():
scan.zip_path = str(zip_file)
output_files_found.append('zip')
break
if output_exists:
# Smart recovery: outputs exist, mark as completed
scan.status = 'completed'
scan.error_message = f'Recovered from orphaned state (output files found: {", ".join(output_files_found)})'
recovered_count += 1
logger.info(f"Recovered orphaned scan {scan.id} as completed (files: {output_files_found})")
else:
# No outputs: mark as failed
scan.status = 'failed'
scan.completed_at = datetime.utcnow()
scan.error_message = (
"Scan was interrupted by system shutdown or crash. "
"The scan was running but did not complete normally."
"No output files were generated."
)
failed_count += 1
logger.info(f"Marked orphaned scan {scan.id} as failed (no output files)")
# Calculate duration if we have a started_at time
scan.completed_at = datetime.utcnow()
if scan.started_at:
duration = (datetime.utcnow() - scan.started_at).total_seconds()
scan.duration = duration
logger.info(
f"Marked orphaned scan {scan.id} as failed "
f"(started: {scan.started_at.isoformat() if scan.started_at else 'unknown'})"
)
scan.duration = (datetime.utcnow() - scan.started_at).total_seconds()
self.db.commit()
logger.info(f"Cleaned up {count} orphaned scan(s)")
logger.info(f"Cleaned up {count} orphaned scan(s): {recovered_count} recovered, {failed_count} failed")
return count
return {
'recovered': recovered_count,
'failed': failed_count,
'total': count
}
def _save_scan_to_db(self, report: Dict[str, Any], scan_id: int,
status: str = 'completed', output_paths: Dict = None) -> None:
@@ -604,17 +674,47 @@ class ScanService:
def _site_to_dict(self, site: ScanSite) -> Dict[str, Any]:
"""Convert ScanSite to dictionary."""
# Look up the master Site ID from ScanSiteAssociation
master_site_id = None
assoc = (
self.db.query(ScanSiteAssociation)
.filter(
ScanSiteAssociation.scan_id == site.scan_id,
)
.join(Site)
.filter(Site.name == site.site_name)
.first()
)
if assoc:
master_site_id = assoc.site_id
return {
'id': site.id,
'name': site.site_name,
'ips': [self._ip_to_dict(ip) for ip in site.ips]
'site_id': master_site_id, # The actual Site ID for config updates
'ips': [self._ip_to_dict(ip, master_site_id) for ip in site.ips]
}
def _ip_to_dict(self, ip: ScanIP) -> Dict[str, Any]:
def _ip_to_dict(self, ip: ScanIP, site_id: Optional[int] = None) -> Dict[str, Any]:
"""Convert ScanIP to dictionary."""
# Look up the SiteIP ID for this IP address in the master Site
site_ip_id = None
if site_id:
site_ip = (
self.db.query(SiteIP)
.filter(
SiteIP.site_id == site_id,
SiteIP.ip_address == ip.ip_address
)
.first()
)
if site_ip:
site_ip_id = site_ip.id
return {
'id': ip.id,
'address': ip.ip_address,
'site_ip_id': site_ip_id, # The actual SiteIP ID for config updates
'ping_expected': ip.ping_expected,
'ping_actual': ip.ping_actual,
'ports': [self._port_to_dict(port) for port in ip.ports]
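Both new service entry points can be driven the same way; the session is assumed to be built as in the sketch after the models diff above.

from web.services.scan_service import ScanService

service = ScanService(session)

stats = service.cleanup_orphaned_scans()
print(f"{stats['recovered']} recovered, {stats['failed']} failed, {stats['total']} total")

for scan in service.get_scans_by_ip('192.0.2.10', limit=5):
    print(scan)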

View File

@@ -6,7 +6,7 @@ scheduled scans with cron expressions.
"""
import logging
from datetime import datetime
from datetime import datetime, timezone
from typing import Any, Dict, List, Optional, Tuple
from croniter import croniter
@@ -71,6 +71,7 @@ class ScheduleService:
next_run = self.calculate_next_run(cron_expression) if enabled else None
# Create schedule record
now_utc = datetime.now(timezone.utc)
schedule = Schedule(
name=name,
config_id=config_id,
@@ -78,8 +79,8 @@ class ScheduleService:
enabled=enabled,
last_run=None,
next_run=next_run,
created_at=datetime.utcnow(),
updated_at=datetime.utcnow()
created_at=now_utc,
updated_at=now_utc
)
self.db.add(schedule)
@@ -103,7 +104,14 @@ class ScheduleService:
Raises:
ValueError: If schedule not found
"""
schedule = self.db.query(Schedule).filter(Schedule.id == schedule_id).first()
from sqlalchemy.orm import joinedload
schedule = (
self.db.query(Schedule)
.options(joinedload(Schedule.config))
.filter(Schedule.id == schedule_id)
.first()
)
if not schedule:
raise ValueError(f"Schedule {schedule_id} not found")
@@ -138,8 +146,10 @@ class ScheduleService:
'pages': int
}
"""
# Build query
query = self.db.query(Schedule)
from sqlalchemy.orm import joinedload
# Build query and eagerly load config relationship
query = self.db.query(Schedule).options(joinedload(Schedule.config))
# Apply filter
if enabled_filter is not None:
@@ -215,7 +225,7 @@ class ScheduleService:
if hasattr(schedule, key):
setattr(schedule, key, value)
schedule.updated_at = datetime.utcnow()
schedule.updated_at = datetime.now(timezone.utc)
self.db.commit()
self.db.refresh(schedule)
@@ -298,7 +308,7 @@ class ScheduleService:
schedule.last_run = last_run
schedule.next_run = next_run
schedule.updated_at = datetime.utcnow()
schedule.updated_at = datetime.now(timezone.utc)
self.db.commit()
@@ -311,23 +321,43 @@ class ScheduleService:
Validate a cron expression.
Args:
cron_expr: Cron expression to validate
cron_expr: Cron expression to validate in standard crontab format
Format: minute hour day month day_of_week
Day of week: 0=Sunday, 1=Monday, ..., 6=Saturday
(APScheduler will convert this to its internal format automatically)
Returns:
Tuple of (is_valid, error_message)
- (True, None) if valid
- (False, error_message) if invalid
Note:
This validates using croniter which uses standard crontab format.
APScheduler's from_crontab() will handle the conversion when the
schedule is registered with the scheduler.
"""
try:
# Try to create a croniter instance
base_time = datetime.utcnow()
# croniter uses standard crontab format (Sunday=0)
from datetime import timezone
base_time = datetime.now(timezone.utc)
cron = croniter(cron_expr, base_time)
# Try to get the next run time (validates the expression)
cron.get_next(datetime)
# Validate basic format (5 fields)
fields = cron_expr.split()
if len(fields) != 5:
return (False, f"Cron expression must have 5 fields (minute hour day month day_of_week), got {len(fields)}")
return (True, None)
except (ValueError, KeyError) as e:
error_msg = str(e)
# Add helpful hint for day_of_week errors
if "day" in error_msg.lower() and len(cron_expr.split()) >= 5:
hint = "\nNote: Use standard crontab format where 0=Sunday, 1=Monday, ..., 6=Saturday"
return (False, f"{error_msg}{hint}")
return (False, str(e))
except Exception as e:
return (False, f"Unexpected error: {str(e)}")
@@ -345,17 +375,24 @@ class ScheduleService:
from_time: Base time (defaults to now UTC)
Returns:
Next run datetime (UTC)
Next run datetime (UTC, timezone-aware)
Raises:
ValueError: If cron expression is invalid
"""
if from_time is None:
from_time = datetime.utcnow()
from_time = datetime.now(timezone.utc)
try:
cron = croniter(cron_expr, from_time)
return cron.get_next(datetime)
next_run = cron.get_next(datetime)
# croniter returns naive datetime, so we need to add timezone info
# Since we're using UTC for all calculations, add UTC timezone
if next_run.tzinfo is None:
next_run = next_run.replace(tzinfo=timezone.utc)
return next_run
except Exception as e:
raise ValueError(f"Invalid cron expression '{cron_expr}': {str(e)}")
@@ -403,10 +440,16 @@ class ScheduleService:
Returns:
Dictionary representation
"""
# Get config title if relationship is loaded
config_name = None
if schedule.config:
config_name = schedule.config.title
return {
'id': schedule.id,
'name': schedule.name,
'config_id': schedule.config_id,
'config_name': config_name,
'cron_expression': schedule.cron_expression,
'enabled': schedule.enabled,
'last_run': schedule.last_run.isoformat() if schedule.last_run else None,
@@ -421,7 +464,7 @@ class ScheduleService:
Format datetime as relative time.
Args:
dt: Datetime to format (UTC)
dt: Datetime to format (UTC, can be naive or aware)
Returns:
Human-readable relative time (e.g., "in 2 hours", "yesterday")
@@ -429,7 +472,13 @@ class ScheduleService:
if dt is None:
return None
now = datetime.utcnow()
# Ensure both datetimes are timezone-aware for comparison
now = datetime.now(timezone.utc)
# If dt is naive, assume it's UTC and add timezone info
if dt.tzinfo is None:
dt = dt.replace(tzinfo=timezone.utc)
diff = dt - now
# Future times
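The error this normalization avoids, in isolation (`datetime.utcnow()` returns naive values, which cannot be subtracted from aware ones):

```python
from datetime import datetime, timezone

aware = datetime.now(timezone.utc)
naive = datetime.utcnow()  # deprecated; produces a naive datetime
try:
    _ = naive - aware
except TypeError as e:
    print(e)  # can't subtract offset-naive and offset-aware datetimes
print(naive.replace(tzinfo=timezone.utc) - aware)  # fine once both are aware
```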

View File

@@ -149,6 +149,51 @@ class SchedulerService:
except Exception as e:
logger.error(f"Error loading schedules on startup: {str(e)}", exc_info=True)
@staticmethod
def validate_cron_expression(cron_expression: str) -> tuple[bool, str]:
"""
Validate a cron expression and provide helpful feedback.
Args:
cron_expression: Cron expression to validate
Returns:
Tuple of (is_valid: bool, message: str)
- If valid: (True, "Valid cron expression")
- If invalid: (False, "Error message with details")
Note:
Standard crontab format: minute hour day month day_of_week
Day of week: 0=Sunday, 1=Monday, ..., 6=Saturday (or 7=Sunday)
"""
from apscheduler.triggers.cron import CronTrigger
try:
# Validate basic format (5 fields) before parsing, so short
# expressions get the friendly message below
fields = cron_expression.split()
if len(fields) != 5:
return False, f"Cron expression must have 5 fields (minute hour day month day_of_week), got {len(fields)}"
# Try to parse the expression (raises on invalid field values)
CronTrigger.from_crontab(cron_expression)
return True, "Valid cron expression"
except (ValueError, KeyError) as e:
error_msg = str(e)
# Provide a helpful hint when the day_of_week field is numeric,
# since that is where crontab and APScheduler numbering differ
fields = cron_expression.split()
if len(fields) == 5 and fields[4].isdigit():
hint = "\nNote: Use standard crontab format where 0=Sunday, 1=Monday, ..., 6=Saturday"
return False, f"Invalid cron expression: {error_msg}{hint}"
return False, f"Invalid cron expression: {error_msg}"
def queue_scan(self, scan_id: int, config_id: int) -> str:
"""
Queue a scan for immediate background execution.
@@ -188,6 +233,10 @@ class SchedulerService:
schedule_id: Database ID of the schedule
config_id: Database config ID
cron_expression: Cron expression (e.g., "0 2 * * *" for 2am daily)
IMPORTANT: Use standard crontab format where:
- Day of week: 0 = Sunday, 1 = Monday, ..., 6 = Saturday
- APScheduler automatically converts to its internal format
- from_crontab() handles the conversion properly
Returns:
Job ID from APScheduler
@@ -195,18 +244,29 @@ class SchedulerService:
Raises:
RuntimeError: If scheduler not initialized
ValueError: If cron expression is invalid
Note:
APScheduler internally uses Monday=0, but from_crontab() accepts
standard crontab format (Sunday=0) and converts it automatically.
"""
if not self.scheduler:
raise RuntimeError("Scheduler not initialized. Call init_scheduler() first.")
from apscheduler.triggers.cron import CronTrigger
# Validate cron expression first to provide helpful error messages
is_valid, message = self.validate_cron_expression(cron_expression)
if not is_valid:
raise ValueError(message)
# Create cron trigger from expression using local timezone
# This allows users to specify times in their local timezone
# from_crontab() parses standard crontab format (Sunday=0)
# and converts to APScheduler's internal format (Monday=0) automatically
try:
trigger = CronTrigger.from_crontab(cron_expression)
# timezone defaults to local system timezone
except (ValueError, KeyError) as e:
# This should not happen due to validation above, but catch anyway
raise ValueError(f"Invalid cron expression '{cron_expression}': {str(e)}")
# Add cron job
@@ -294,11 +354,16 @@ class SchedulerService:
# Update schedule's last_run and next_run
from croniter import croniter
next_run = croniter(schedule['cron_expression'], datetime.utcnow()).get_next(datetime)
now_utc = datetime.now(timezone.utc)
next_run = croniter(schedule['cron_expression'], now_utc).get_next(datetime)
# croniter returns naive datetime, add UTC timezone
if next_run.tzinfo is None:
next_run = next_run.replace(tzinfo=timezone.utc)
schedule_service.update_run_times(
schedule_id=schedule_id,
last_run=datetime.utcnow(),
last_run=now_utc,
next_run=next_run
)
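To illustrate the day-of-week handling the notes above rely on, a hedged sketch (assumes APScheduler 3.x; recent releases interpret the crontab day-of-week field with 0=Sunday, as documented above):

```python
from apscheduler.triggers.cron import CronTrigger

trigger = CronTrigger.from_crontab("0 2 * * 0")  # 02:00 every Sunday in crontab terms
print(trigger)

try:
    CronTrigger.from_crontab("0 2 * *")  # only four fields
except ValueError as e:
    print(e)  # Wrong number of fields; got 4, expected 5
```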

View File

@@ -228,6 +228,34 @@ class SiteService:
return [self._site_to_dict(site) for site in sites]
def get_global_ip_stats(self) -> Dict[str, int]:
"""
Get global IP statistics across all sites.
Returns:
Dictionary with:
- total_ips: Total count of IP entries (including duplicates)
- unique_ips: Count of distinct IP addresses
- duplicate_ips: Number of duplicate entries (total - unique)
"""
# Total IP entries
total_ips = (
self.db.query(func.count(SiteIP.id))
.scalar() or 0
)
# Unique IP addresses
unique_ips = (
self.db.query(func.count(func.distinct(SiteIP.ip_address)))
.scalar() or 0
)
return {
'total_ips': total_ips,
'unique_ips': unique_ips,
'duplicate_ips': total_ips - unique_ips
}
def bulk_add_ips_from_cidr(self, site_id: int, cidr: str,
expected_ping: Optional[bool] = None,
expected_tcp_ports: Optional[List[int]] = None,
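A self-contained sketch of the total/unique/duplicate aggregation used by `get_global_ip_stats()` above, against a throwaway in-memory table (the model here is illustrative, not the app's actual schema):

```python
from sqlalchemy import Column, Integer, String, create_engine, func
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class SiteIP(Base):
    __tablename__ = "site_ips"
    id = Column(Integer, primary_key=True)
    ip_address = Column(String, nullable=False)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as db:
    db.add_all(SiteIP(ip_address=ip) for ip in ["10.0.0.1", "10.0.0.1", "10.0.0.2"])
    db.commit()
    total = db.query(func.count(SiteIP.id)).scalar() or 0
    unique = db.query(func.count(func.distinct(SiteIP.ip_address))).scalar() or 0
    print({"total_ips": total, "unique_ips": unique, "duplicate_ips": total - unique})
    # -> {'total_ips': 3, 'unique_ips': 2, 'duplicate_ips': 1}
```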

View File

@@ -76,6 +76,13 @@
</ul>
</li>
</ul>
<form class="d-flex me-3" action="{{ url_for('main.search_ip') }}" method="GET">
<input class="form-control form-control-sm me-2" type="search" name="ip"
placeholder="Search IP..." aria-label="Search IP" style="width: 150px;">
<button class="btn btn-outline-primary btn-sm" type="submit">
<i class="bi bi-search"></i>
</button>
</form>
<ul class="navbar-nav">
<li class="nav-item">
<a class="nav-link {% if request.endpoint == 'main.help' %}active{% endif %}"

View File

@@ -0,0 +1,175 @@
{% extends "base.html" %}
{% block title %}Search Results for {{ ip_address }} - SneakyScanner{% endblock %}
{% block content %}
<div class="row mt-4">
<div class="col-12 d-flex justify-content-between align-items-center mb-4">
<h1>
<i class="bi bi-search"></i>
Search Results
{% if ip_address %}
<small class="text-muted">for {{ ip_address }}</small>
{% endif %}
</h1>
<a href="{{ url_for('main.scans') }}" class="btn btn-secondary">
<i class="bi bi-arrow-left"></i> Back to Scans
</a>
</div>
</div>
{% if not ip_address %}
<!-- No IP provided -->
<div class="row">
<div class="col-12">
<div class="card">
<div class="card-body text-center py-5">
<i class="bi bi-exclamation-circle text-warning" style="font-size: 3rem;"></i>
<h4 class="mt-3">No IP Address Provided</h4>
<p class="text-muted">Please enter an IP address in the search box to find related scans.</p>
</div>
</div>
</div>
</div>
{% else %}
<!-- Results Table -->
<div class="row">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">Last 10 Scans Containing {{ ip_address }}</h5>
</div>
<div class="card-body">
<div id="results-loading" class="text-center py-5">
<div class="spinner-border" role="status">
<span class="visually-hidden">Loading...</span>
</div>
<p class="mt-3 text-muted">Searching for scans...</p>
</div>
<div id="results-error" class="alert alert-danger" style="display: none;"></div>
<div id="results-empty" class="text-center py-5 text-muted" style="display: none;">
<i class="bi bi-search" style="font-size: 3rem;"></i>
<h5 class="mt-3">No Scans Found</h5>
<p>No completed scans contain the IP address <strong>{{ ip_address }}</strong>.</p>
</div>
<div id="results-table-container" style="display: none;">
<div class="table-responsive">
<table class="table table-hover">
<thead>
<tr>
<th style="width: 80px;">ID</th>
<th>Title</th>
<th style="width: 200px;">Timestamp</th>
<th style="width: 100px;">Duration</th>
<th style="width: 120px;">Status</th>
<th style="width: 100px;">Actions</th>
</tr>
</thead>
<tbody id="results-tbody">
</tbody>
</table>
</div>
<div class="text-muted mt-3">
Found <span id="result-count">0</span> scan(s) containing this IP address.
</div>
</div>
</div>
</div>
</div>
</div>
{% endif %}
{% endblock %}
{% block scripts %}
<script>
const ipAddress = "{{ ip_address | e }}";
// Load results when page loads
document.addEventListener('DOMContentLoaded', function() {
if (ipAddress) {
loadResults();
}
});
// Load search results from API
async function loadResults() {
const loadingEl = document.getElementById('results-loading');
const errorEl = document.getElementById('results-error');
const emptyEl = document.getElementById('results-empty');
const tableEl = document.getElementById('results-table-container');
// Show loading state
loadingEl.style.display = 'block';
errorEl.style.display = 'none';
emptyEl.style.display = 'none';
tableEl.style.display = 'none';
try {
const response = await fetch(`/api/scans/by-ip/${encodeURIComponent(ipAddress)}`);
if (!response.ok) {
throw new Error('Failed to search for scans');
}
const data = await response.json();
const scans = data.scans || [];
loadingEl.style.display = 'none';
if (scans.length === 0) {
emptyEl.style.display = 'block';
} else {
tableEl.style.display = 'block';
renderResultsTable(scans);
document.getElementById('result-count').textContent = data.count;
}
} catch (error) {
console.error('Error searching for scans:', error);
loadingEl.style.display = 'none';
errorEl.textContent = 'Failed to search for scans. Please try again.';
errorEl.style.display = 'block';
}
}
// Render results table
function renderResultsTable(scans) {
const tbody = document.getElementById('results-tbody');
tbody.innerHTML = '';
scans.forEach(scan => {
const row = document.createElement('tr');
row.classList.add('scan-row');
// Format timestamp
const timestamp = new Date(scan.timestamp).toLocaleString();
// Format duration
const duration = scan.duration ? `${scan.duration.toFixed(1)}s` : '-';
// Status badge
let statusBadge = '';
if (scan.status === 'completed') {
statusBadge = '<span class="badge badge-success">Completed</span>';
} else if (scan.status === 'running') {
statusBadge = '<span class="badge badge-info">Running</span>';
} else if (scan.status === 'failed') {
statusBadge = '<span class="badge badge-danger">Failed</span>';
} else {
statusBadge = `<span class="badge badge-info">${scan.status}</span>`;
}
row.innerHTML = `
<td class="mono">${scan.id}</td>
<td>${scan.title || 'Untitled Scan'}</td>
<td class="text-muted">${timestamp}</td>
<td class="mono">${duration}</td>
<td>${statusBadge}</td>
<td>
<a href="/scans/${scan.id}" class="btn btn-sm btn-secondary">View</a>
</td>
`;
tbody.appendChild(row);
});
}
</script>
{% endblock %}

View File

@@ -20,6 +20,10 @@
<span id="refresh-text">Refresh</span>
<span id="refresh-spinner" class="spinner-border spinner-border-sm ms-1" style="display: none;"></span>
</button>
<button class="btn btn-warning ms-2" onclick="stopScan()" id="stop-btn" style="display: none;">
<span id="stop-text">Stop Scan</span>
<span id="stop-spinner" class="spinner-border spinner-border-sm ms-1" style="display: none;"></span>
</button>
<button class="btn btn-danger ms-2" onclick="deleteScan()" id="delete-btn">Delete Scan</button>
</div>
</div>
@@ -84,6 +88,50 @@
</div>
</div>
<!-- Progress Section (shown when scan is running) -->
<div class="row mb-4" id="progress-section" style="display: none;">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0" style="color: #60a5fa;">
<i class="bi bi-hourglass-split"></i> Scan Progress
</h5>
</div>
<div class="card-body">
<!-- Phase and Progress Bar -->
<div class="mb-3">
<div class="d-flex justify-content-between align-items-center mb-2">
<span>Current Phase: <strong id="current-phase">Initializing...</strong></span>
<span id="progress-count">0 / 0 IPs</span>
</div>
<div class="progress" style="height: 20px; background-color: #334155;">
<div id="progress-bar" class="progress-bar bg-info" role="progressbar" style="width: 0%"></div>
</div>
</div>
<!-- Per-IP Results Table -->
<div class="table-responsive" style="max-height: 400px; overflow-y: auto;">
<table class="table table-sm">
<thead style="position: sticky; top: 0; background-color: #1e293b;">
<tr>
<th>Site</th>
<th>IP Address</th>
<th>Ping</th>
<th>TCP Ports</th>
<th>UDP Ports</th>
<th>Services</th>
</tr>
</thead>
<tbody id="progress-table-body">
<tr><td colspan="6" class="text-center text-muted">Waiting for results...</td></tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<!-- Stats Row -->
<div class="row mb-4">
<div class="col-md-3">
@@ -222,6 +270,7 @@
const scanId = {{ scan_id }};
let scanData = null;
let historyChart = null; // Store chart instance to prevent duplicates
let progressInterval = null; // Store progress polling interval
// Show alert notification
function showAlert(type, message) {
@@ -247,16 +296,136 @@
loadScan().then(() => {
findPreviousScan();
loadHistoricalChart();
// Start progress polling if scan is running
if (scanData && scanData.status === 'running') {
startProgressPolling();
}
});
// Auto-refresh every 10 seconds if scan is running
setInterval(function() {
if (scanData && scanData.status === 'running') {
loadScan();
}
}, 10000);
});
// Start polling for progress updates
function startProgressPolling() {
// Show progress section
document.getElementById('progress-section').style.display = 'block';
// Initial load
loadProgress();
// Poll every 3 seconds
progressInterval = setInterval(loadProgress, 3000);
}
// Stop polling for progress updates
function stopProgressPolling() {
if (progressInterval) {
clearInterval(progressInterval);
progressInterval = null;
}
// Hide progress section when scan completes
document.getElementById('progress-section').style.display = 'none';
}
// Load progress data
async function loadProgress() {
try {
const response = await fetch(`/api/scans/${scanId}/progress`);
if (!response.ok) return;
const progress = await response.json();
// Check if scan is still running
if (progress.status !== 'running') {
stopProgressPolling();
loadScan(); // Refresh full scan data
return;
}
renderProgress(progress);
} catch (error) {
console.error('Error loading progress:', error);
}
}
// Render progress data
function renderProgress(progress) {
// Update phase display
const phaseNames = {
'pending': 'Initializing',
'ping': 'Ping Scan',
'tcp_scan': 'TCP Port Scan',
'udp_scan': 'UDP Port Scan',
'service_detection': 'Service Detection',
'http_analysis': 'HTTP/HTTPS Analysis',
'completed': 'Completing'
};
const phaseName = phaseNames[progress.current_phase] || progress.current_phase;
document.getElementById('current-phase').textContent = phaseName;
// Update progress count and bar
const total = progress.total_ips || 0;
const completed = progress.completed_ips || 0;
const percent = total > 0 ? Math.round((completed / total) * 100) : 0;
document.getElementById('progress-count').textContent = `${completed} / ${total} IPs`;
document.getElementById('progress-bar').style.width = `${percent}%`;
// Update progress table
const tbody = document.getElementById('progress-table-body');
const entries = progress.progress_entries || [];
if (entries.length === 0) {
tbody.innerHTML = '<tr><td colspan="6" class="text-center text-muted">Waiting for results...</td></tr>';
return;
}
let html = '';
entries.forEach(entry => {
// Ping result
let pingDisplay = '-';
if (entry.ping_result !== null && entry.ping_result !== undefined) {
pingDisplay = entry.ping_result
? '<span class="badge badge-success">Yes</span>'
: '<span class="badge badge-danger">No</span>';
}
// TCP ports
const tcpPorts = entry.tcp_ports || [];
let tcpDisplay = tcpPorts.length > 0
? `<span class="badge bg-info">${tcpPorts.length}</span> <small class="text-muted">${tcpPorts.slice(0, 5).join(', ')}${tcpPorts.length > 5 ? '...' : ''}</small>`
: '-';
// UDP ports
const udpPorts = entry.udp_ports || [];
let udpDisplay = udpPorts.length > 0
? `<span class="badge bg-info">${udpPorts.length}</span>`
: '-';
// Services
const services = entry.services || [];
let svcDisplay = '-';
if (services.length > 0) {
const svcNames = services.map(s => s.service || 'unknown').slice(0, 3);
svcDisplay = `<span class="badge bg-info">${services.length}</span> <small class="text-muted">${svcNames.join(', ')}${services.length > 3 ? '...' : ''}</small>`;
}
html += `
<tr class="scan-row">
<td>${entry.site_name || '-'}</td>
<td class="mono">${entry.ip_address}</td>
<td>${pingDisplay}</td>
<td>${tcpDisplay}</td>
<td>${udpDisplay}</td>
<td>${svcDisplay}</td>
</tr>
`;
});
tbody.innerHTML = html;
}
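The same 3-second polling loop the page runs, as a standalone Python client for reference (hedged sketch: assumes the `requests` package and an already-authenticated session; the base URL is illustrative):

```python
import time
import requests

BASE = "http://localhost:5000"  # adjust for your deployment

def poll_progress(session: requests.Session, scan_id: int, interval: float = 3.0) -> dict:
    """Poll /api/scans/<id>/progress until the scan leaves 'running'."""
    while True:
        resp = session.get(f"{BASE}/api/scans/{scan_id}/progress")
        resp.raise_for_status()
        progress = resp.json()
        if progress.get("status") != "running":
            return progress  # finalizing/completed/cancelled/failed
        done = progress.get("completed_ips") or 0
        total = progress.get("total_ips") or 0
        print(f"{progress.get('current_phase')}: {done}/{total} IPs")
        time.sleep(interval)
```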
// Load scan details
async function loadScan() {
const loadingEl = document.getElementById('scan-loading');
@@ -306,8 +475,11 @@
} else if (scan.status === 'running') {
statusBadge = '<span class="badge badge-info">Running</span>';
document.getElementById('delete-btn').disabled = true;
document.getElementById('stop-btn').style.display = 'inline-block';
} else if (scan.status === 'failed') {
statusBadge = '<span class="badge badge-danger">Failed</span>';
} else if (scan.status === 'cancelled') {
statusBadge = '<span class="badge badge-warning">Cancelled</span>';
} else {
statusBadge = `<span class="badge badge-info">${scan.status}</span>`;
}
@@ -414,6 +586,19 @@
const screenshotPath = service && service.screenshot_path ? service.screenshot_path : null;
const certificate = service && service.certificates && service.certificates.length > 0 ? service.certificates[0] : null;
// Build status cell with optional "Mark Expected" button
let statusCell;
if (port.expected) {
statusCell = '<span class="badge badge-good">Expected</span>';
} else {
// Show "Unexpected" badge with "Mark Expected" button if site_id and site_ip_id are available
const canMarkExpected = site.site_id && ip.site_ip_id;
statusCell = `<span class="badge badge-warning">Unexpected</span>`;
if (canMarkExpected) {
statusCell += ` <button class="btn btn-sm btn-outline-success ms-1" onclick="markPortExpected(${site.site_id}, ${ip.site_ip_id}, ${port.port}, '${port.protocol}')" title="Add to expected ports"><i class="bi bi-plus-circle"></i></button>`;
}
}
const row = document.createElement('tr');
row.classList.add('scan-row'); // Fix white row bug
row.innerHTML = `
@@ -423,7 +608,7 @@
<td>${service ? service.service_name : '-'}</td>
<td>${service ? service.product || '-' : '-'}</td>
<td class="mono">${service ? service.version || '-' : '-'}</td>
<td>${port.expected ? '<span class="badge badge-good">Expected</span>' : '<span class="badge badge-warning">Unexpected</span>'}</td>
<td>${statusCell}</td>
<td>${screenshotPath ? `<a href="/output/${screenshotPath.replace(/^\/?(?:app\/)?output\/?/, '')}" target="_blank" class="btn btn-sm btn-outline-primary" title="View Screenshot"><i class="bi bi-image"></i></a>` : '-'}</td>
<td>${certificate ? `<button class="btn btn-sm btn-outline-info" onclick='showCertificateModal(${JSON.stringify(certificate).replace(/'/g, "&#39;")})' title="View Certificate"><i class="bi bi-shield-lock"></i></button>` : '-'}</td>
`;
@@ -532,6 +717,127 @@
}
}
// Stop scan
async function stopScan() {
if (!confirm(`Are you sure you want to stop scan ${scanId}?`)) {
return;
}
const stopBtn = document.getElementById('stop-btn');
const stopText = document.getElementById('stop-text');
const stopSpinner = document.getElementById('stop-spinner');
// Show loading state
stopBtn.disabled = true;
stopText.style.display = 'none';
stopSpinner.style.display = 'inline-block';
try {
const response = await fetch(`/api/scans/${scanId}/stop`, {
method: 'POST',
headers: {
'Content-Type': 'application/json'
}
});
if (!response.ok) {
let errorMessage = `HTTP ${response.status}: Failed to stop scan`;
try {
const data = await response.json();
errorMessage = data.message || errorMessage;
} catch (e) {
// Ignore JSON parse errors
}
throw new Error(errorMessage);
}
// Show success message
showAlert('success', `Stop signal sent to scan ${scanId}.`);
// Refresh scan data after a short delay
setTimeout(() => {
loadScan();
}, 1000);
} catch (error) {
console.error('Error stopping scan:', error);
showAlert('danger', `Failed to stop scan: ${error.message}`);
// Re-enable button on error
stopBtn.disabled = false;
stopText.style.display = 'inline';
stopSpinner.style.display = 'none';
}
}
// Mark a port as expected in the site config
async function markPortExpected(siteId, ipId, portNumber, protocol) {
try {
// First, get the current IP settings - fetch all IPs with high per_page to find the one we need
const getResponse = await fetch(`/api/sites/${siteId}/ips?per_page=200`);
if (!getResponse.ok) {
throw new Error('Failed to get site IPs');
}
const ipsData = await getResponse.json();
// Find the IP in the site
const ipData = ipsData.ips.find(ip => ip.id === ipId);
if (!ipData) {
throw new Error('IP not found in site');
}
// Get current expected ports
let expectedTcpPorts = ipData.expected_tcp_ports || [];
let expectedUdpPorts = ipData.expected_udp_ports || [];
// Add the new port to the appropriate list
if (protocol.toLowerCase() === 'tcp') {
if (!expectedTcpPorts.includes(portNumber)) {
expectedTcpPorts.push(portNumber);
expectedTcpPorts.sort((a, b) => a - b);
}
} else if (protocol.toLowerCase() === 'udp') {
if (!expectedUdpPorts.includes(portNumber)) {
expectedUdpPorts.push(portNumber);
expectedUdpPorts.sort((a, b) => a - b);
}
}
// Update the IP settings
const updateResponse = await fetch(`/api/sites/${siteId}/ips/${ipId}`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
expected_tcp_ports: expectedTcpPorts,
expected_udp_ports: expectedUdpPorts
})
});
if (!updateResponse.ok) {
let errorMessage = 'Failed to update IP settings';
try {
const errorData = await updateResponse.json();
errorMessage = errorData.message || errorMessage;
} catch (e) {
// Ignore JSON parse errors
}
throw new Error(errorMessage);
}
// Show success message
showAlert('success', `Port ${portNumber}/${protocol.toUpperCase()} added to expected ports for this IP. Refresh the page to see updated status.`);
// Optionally refresh the scan data to show the change
// Note: The scan data itself won't change, but the user knows it's been updated
} catch (error) {
console.error('Error marking port as expected:', error);
showAlert('danger', `Failed to mark port as expected: ${error.message}`);
}
}
// Find previous scan and show compare button
let previousScanId = null;
let currentConfigId = null;

View File

@@ -26,6 +26,7 @@
<option value="running">Running</option>
<option value="completed">Completed</option>
<option value="failed">Failed</option>
<option value="cancelled">Cancelled</option>
</select>
</div>
<div class="col-md-4">
@@ -248,20 +249,27 @@
statusBadge = '<span class="badge badge-info">Running</span>';
} else if (scan.status === 'failed') {
statusBadge = '<span class="badge badge-danger">Failed</span>';
} else if (scan.status === 'cancelled') {
statusBadge = '<span class="badge badge-warning">Cancelled</span>';
} else {
statusBadge = `<span class="badge badge-info">${scan.status}</span>`;
}
// Action buttons
let actionButtons = `<a href="/scans/${scan.id}" class="btn btn-sm btn-secondary">View</a>`;
if (scan.status === 'running') {
actionButtons += `<button class="btn btn-sm btn-warning ms-1" onclick="stopScan(${scan.id})">Stop</button>`;
} else {
actionButtons += `<button class="btn btn-sm btn-danger ms-1" onclick="deleteScan(${scan.id})">Delete</button>`;
}
row.innerHTML = `
<td class="mono">${scan.id}</td>
<td>${scan.title || 'Untitled Scan'}</td>
<td class="text-muted">${timestamp}</td>
<td class="mono">${duration}</td>
<td>${statusBadge}</td>
<td>
<a href="/scans/${scan.id}" class="btn btn-sm btn-secondary">View</a>
${scan.status !== 'running' ? `<button class="btn btn-sm btn-danger ms-1" onclick="deleteScan(${scan.id})">Delete</button>` : ''}
</td>
<td>${actionButtons}</td>
`;
tbody.appendChild(row);
@@ -489,6 +497,33 @@
}
}
// Stop scan
async function stopScan(scanId) {
if (!confirm(`Are you sure you want to stop scan ${scanId}?`)) {
return;
}
try {
const response = await fetch(`/api/scans/${scanId}/stop`, {
method: 'POST'
});
if (!response.ok) {
const data = await response.json();
throw new Error(data.message || 'Failed to stop scan');
}
// Show success message
showAlert('success', `Stop signal sent to scan ${scanId}.`);
// Refresh scans after a short delay
setTimeout(() => loadScans(), 1000);
} catch (error) {
console.error('Error stopping scan:', error);
showAlert('danger', `Failed to stop scan: ${error.message}`);
}
}
// Delete scan
async function deleteScan(scanId) {
if (!confirm(`Are you sure you want to delete scan ${scanId}?`)) {

View File

@@ -298,7 +298,11 @@ async function loadSchedule() {
function populateForm(schedule) {
document.getElementById('schedule-id').value = schedule.id;
document.getElementById('schedule-name').value = schedule.name;
document.getElementById('config-id').value = schedule.config_id;
// Display config name and ID in the readonly config-file field
const configDisplay = schedule.config_name
? `${schedule.config_name} (ID: ${schedule.config_id})`
: `Config ID: ${schedule.config_id}`;
document.getElementById('config-file').value = configDisplay;
document.getElementById('cron-expression').value = schedule.cron_expression;
document.getElementById('schedule-enabled').checked = schedule.enabled;

View File

@@ -26,8 +26,11 @@
</div>
<div class="col-md-4">
<div class="stat-card">
<div class="stat-value" id="total-ips">-</div>
<div class="stat-label">Total IPs</div>
<div class="stat-value" id="unique-ips">-</div>
<div class="stat-label">Unique IPs</div>
<div class="stat-sublabel" id="duplicate-ips-label" style="display: none; font-size: 0.75rem; color: #fbbf24;">
(<span id="duplicate-ips">0</span> duplicates)
</div>
</div>
</div>
<div class="col-md-4">
@@ -499,7 +502,7 @@ async function loadSites() {
const data = await response.json();
sitesData = data.sites || [];
updateStats();
updateStats(data.unique_ips, data.duplicate_ips);
renderSites(sitesData);
document.getElementById('sites-loading').style.display = 'none';
@@ -514,12 +517,20 @@ async function loadSites() {
}
// Update summary stats
function updateStats() {
function updateStats(uniqueIps, duplicateIps) {
const totalSites = sitesData.length;
const totalIps = sitesData.reduce((sum, site) => sum + (site.ip_count || 0), 0);
document.getElementById('total-sites').textContent = totalSites;
document.getElementById('total-ips').textContent = totalIps;
document.getElementById('unique-ips').textContent = uniqueIps || 0;
// Show duplicate count if there are any
if (duplicateIps && duplicateIps > 0) {
document.getElementById('duplicate-ips').textContent = duplicateIps;
document.getElementById('duplicate-ips-label').style.display = 'block';
} else {
document.getElementById('duplicate-ips-label').style.display = 'none';
}
document.getElementById('sites-in-use').textContent = '-'; // Will be updated async
// Count sites in use (async)
@@ -688,6 +699,18 @@ async function loadSiteIps(siteId) {
const data = await response.json();
const ips = data.ips || [];
// Sort IPs by numeric octets
ips.sort((a, b) => {
const partsA = a.ip_address.split('.').map(Number);
const partsB = b.ip_address.split('.').map(Number);
for (let i = 0; i < 4; i++) {
if (partsA[i] !== partsB[i]) {
return partsA[i] - partsB[i];
}
}
return 0;
});
document.getElementById('ip-count').textContent = data.total || ips.length;
// Render flat IP table
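For comparison, the same numeric-octet ordering in Python via the stdlib (a plain string sort would put `.10` before `.2`):

```python
import ipaddress

ips = ["10.0.0.10", "10.0.0.2", "10.0.0.1"]
print(sorted(ips))                            # ['10.0.0.1', '10.0.0.10', '10.0.0.2']
print(sorted(ips, key=ipaddress.ip_address))  # ['10.0.0.1', '10.0.0.2', '10.0.0.10']
```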

View File

@@ -23,7 +23,7 @@ def validate_scan_status(status: str) -> tuple[bool, Optional[str]]:
>>> validate_scan_status('invalid')
(False, 'Invalid status: invalid. Must be one of: running, finalizing, completed, failed, cancelled')
"""
valid_statuses = ['running', 'completed', 'failed']
valid_statuses = ['running', 'finalizing', 'completed', 'failed', 'cancelled']
if status not in valid_statuses:
return False, f'Invalid status: {status}. Must be one of: {", ".join(valid_statuses)}'

View File

@@ -41,6 +41,9 @@ services:
# Scheduler configuration (APScheduler)
- SCHEDULER_EXECUTORS=${SCHEDULER_EXECUTORS:-2}
- SCHEDULER_JOB_DEFAULTS_MAX_INSTANCES=${SCHEDULER_JOB_DEFAULTS_MAX_INSTANCES:-3}
# UDP scanning configuration
- UDP_SCAN_ENABLED=${UDP_SCAN_ENABLED:-false}
- UDP_PORTS=${UDP_PORTS:-53,67,68,69,123,161,500,514,1900}
# Scanner functionality requires privileged mode and host network for masscan/nmap
privileged: true
network_mode: host

View File

@@ -117,7 +117,7 @@ Retrieve a paginated list of all sites.
| `per_page` | integer | No | 20 | Items per page (1-100) |
| `all` | string | No | - | Set to "true" to return all sites without pagination |
**Success Response (200 OK):**
**Success Response (200 OK) - Paginated:**
```json
{
"sites": [
@@ -139,13 +139,40 @@ Retrieve a paginated list of all sites.
}
```
**Success Response (200 OK) - All Sites (all=true):**
```json
{
"sites": [
{
"id": 1,
"name": "Production DC",
"description": "Production datacenter servers",
"ip_count": 25,
"created_at": "2025-11-19T10:30:00Z",
"updated_at": "2025-11-19T10:30:00Z"
}
],
"total_ips": 100,
"unique_ips": 85,
"duplicate_ips": 15
}
```
**Response Fields (all=true):**
| Field | Type | Description |
|-------|------|-------------|
| `total_ips` | integer | Total count of IP entries across all sites (including duplicates) |
| `unique_ips` | integer | Count of distinct IP addresses |
| `duplicate_ips` | integer | Number of duplicate IP entries (total_ips - unique_ips) |
**Usage Example:**
```bash
# List first page
curl -X GET http://localhost:5000/api/sites \
-b cookies.txt
# Get all sites (for dropdowns)
# Get all sites with global IP stats
curl -X GET "http://localhost:5000/api/sites?all=true" \
-b cookies.txt
```
@@ -989,6 +1016,56 @@ curl -X DELETE http://localhost:5000/api/scans/42 \
-b cookies.txt
```
### Get Scans by IP
Get the last 10 scans containing a specific IP address.
**Endpoint:** `GET /api/scans/by-ip/{ip_address}`
**Authentication:** Required
**Path Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `ip_address` | string | Yes | IP address to search for |
**Success Response (200 OK):**
```json
{
"ip_address": "192.168.1.10",
"scans": [
{
"id": 42,
"timestamp": "2025-11-14T10:30:00Z",
"duration": 125.5,
"status": "completed",
"title": "Production Network Scan",
"config_id": 1,
"triggered_by": "manual",
"created_at": "2025-11-14T10:30:00Z"
},
{
"id": 38,
"timestamp": "2025-11-13T10:30:00Z",
"duration": 98.2,
"status": "completed",
"title": "Production Network Scan",
"config_id": 1,
"triggered_by": "scheduled",
"created_at": "2025-11-13T10:30:00Z"
}
],
"count": 2
}
```
**Usage Example:**
```bash
curl -X GET http://localhost:5000/api/scans/by-ip/192.168.1.10 \
-b cookies.txt
```
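The same lookup from Python, for scripted use (hedged sketch: assumes `requests` and a session cookie obtained via the login endpoint):

```python
import requests

session = requests.Session()
# ... authenticate first so the session carries the auth cookie ...
resp = session.get("http://localhost:5000/api/scans/by-ip/192.168.1.10")
resp.raise_for_status()
data = resp.json()
for scan in data["scans"]:
    print(scan["id"], scan["status"], scan["timestamp"])
print(f"{data['count']} scan(s) found")
```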
### Compare Scans
Compare two scans to identify differences in ports, services, and certificates.

View File

@@ -24,10 +24,10 @@ SneakyScanner is deployed as a Docker container running a Flask web application
**Architecture:**
- **Web Application**: Flask app on port 5000 with modern web UI
- **Database**: SQLite (persisted to volume)
- **Database**: SQLite (persisted to volume) - stores all configurations, scan results, and settings
- **Background Jobs**: APScheduler for async scan execution
- **Scanner**: masscan, nmap, sslyze, Playwright
- **Config Creator**: Web-based CIDR-to-YAML configuration builder
- **Config Management**: Database-backed configuration system managed entirely via web UI
- **Scheduling**: Cron-based scheduled scans with dashboard management
---
@@ -143,6 +143,13 @@ docker compose -f docker-compose-standalone.yml up
SneakyScanner is configured via environment variables. The recommended approach is to use a `.env` file.
**UDP Port Scanning**
- UDP port scanning is disabled by default.
- Enable it by setting `UDP_SCAN_ENABLED=true` in your `.env` file; the ports scanned are controlled by the `UDP_PORTS` variable.
- By default, UDP scanning covers only the top 20 ports; for convenience, the nmap top 100 UDP ports are also included.
#### Creating Your .env File
```bash
@@ -160,6 +167,7 @@ python3 -c "from cryptography.fernet import Fernet; print('SNEAKYSCANNER_ENCRYPT
nano .env
```
#### Key Configuration Options
| Variable | Description | Default | Required |
@@ -190,54 +198,30 @@ The application needs these directories (created automatically by Docker):
```bash
# Verify directories exist
ls -la configs/ data/ output/ logs/
ls -la data/ output/ logs/
# If missing, create them
mkdir -p configs data output logs
mkdir -p data output logs
```
### Step 2: Configure Scan Targets
You can create scan configurations in two ways:
After starting the application, create scan configurations using the web UI:
**Option A: Using the Web UI (Recommended - Phase 4 Feature)**
**Creating Configurations via Web UI**
1. Navigate to **Configs** in the web interface
2. Click **"Create New Config"**
3. Use the CIDR-based config creator for quick setup:
3. Use the form-based config creator:
- Enter site name
- Enter CIDR range (e.g., `192.168.1.0/24`)
- Select expected ports from dropdowns
- Click **"Generate Config"**
4. Or use the **YAML Editor** for advanced configurations
5. Save and use immediately in scans or schedules
- Select expected TCP/UDP ports from dropdowns
- Optionally enable ping checks
4. Click **"Save Configuration"**
5. Configuration is saved to database and immediately available for scans and schedules
**Option B: Manual YAML Files**
**Note**: All configurations are stored in the database, not as files. This provides better reliability, easier backup, and seamless management through the web interface.
Create YAML configuration files manually in the `configs/` directory:
```bash
# Example configuration
cat > configs/my-network.yaml <<EOF
title: "My Network Infrastructure"
sites:
- name: "Web Servers"
cidr: "192.168.1.0/24" # Scan entire subnet
expected_ports:
- port: 80
protocol: tcp
service: "http"
- port: 443
protocol: tcp
service: "https"
- port: 22
protocol: tcp
service: "ssh"
ping_expected: true
EOF
```
**Note**: Phase 4 introduced a powerful config creator in the web UI that makes it easy to generate configs from CIDR ranges without manually editing YAML.
### Step 3: Build Docker Image
@@ -389,38 +373,37 @@ The dashboard provides a central view of your scanning activity:
- **Trend Charts**: Port count trends over time using Chart.js
- **Quick Actions**: Buttons to run scans, create configs, manage schedules
### Managing Scan Configurations (Phase 4)
### Managing Scan Configurations
All scan configurations are stored in the database and managed entirely through the web interface.
**Creating Configs:**
1. Navigate to **Configs** → **Create New Config**
2. **CIDR Creator Mode**:
2. Fill in the configuration form:
- Enter site name (e.g., "Production Servers")
- Enter CIDR range (e.g., `192.168.1.0/24`)
- Select expected TCP/UDP ports from dropdowns
- Click **"Generate Config"** to create YAML
3. **YAML Editor Mode**:
- Switch to editor tab for advanced configurations
- Syntax highlighting with line numbers
- Validate YAML before saving
- Enable/disable ping checks
3. Click **"Save Configuration"**
4. Configuration is immediately stored in database and available for use
**Editing Configs:**
1. Navigate to **Configs** → Select config
1. Navigate to **Configs** → Select config from list
2. Click **"Edit"** button
3. Make changes in YAML editor
4. Save changes (validates YAML automatically)
3. Modify any fields in the configuration form
4. Click **"Save Changes"** to update database
**Uploading Configs:**
1. Navigate to **Configs** → **Upload**
2. Select YAML file from your computer
3. File is validated and saved to `configs/` directory
**Downloading Configs:**
- Click **"Download"** button next to any config
- Saves YAML file to your local machine
**Viewing Configs:**
- Navigate to **Configs** page to see all saved configurations
- Each config shows site name, CIDR range, and expected ports
- Click on any config to view full details
**Deleting Configs:**
- Click **"Delete"** button
- Click **"Delete"** button next to any config
- **Warning**: Cannot delete configs used by active schedules
- Deletion removes the configuration from the database permanently
**Note**: All configurations are database-backed, providing automatic backups when you backup the database file.
### Running Scans
@@ -477,12 +460,11 @@ SneakyScanner uses several mounted volumes for data persistence:
| Volume | Container Path | Purpose | Important? |
|--------|----------------|---------|------------|
| `./configs` | `/app/configs` | Scan configuration files (managed via web UI) | Yes |
| `./data` | `/app/data` | SQLite database (contains all scan history) | **Critical** |
| `./data` | `/app/data` | SQLite database (contains configurations, scan history, settings) | **Critical** |
| `./output` | `/app/output` | Scan results (JSON, HTML, ZIP, screenshots) | Yes |
| `./logs` | `/app/logs` | Application logs (rotating file handler) | No |
**Note**: As of Phase 4, the `./configs` volume is read-write to support the web-based config creator and editor. The web UI can now create, edit, and delete configuration files directly.
**Note**: All scan configurations are stored in the SQLite database (`./data/sneakyscanner.db`). There is no separate configs directory or YAML files. Backing up the database file ensures all your configurations are preserved.
### Backing Up Data
@@ -490,23 +472,22 @@ SneakyScanner uses several mounted volumes for data persistence:
# Create backup directory
mkdir -p backups/$(date +%Y%m%d)
# Backup database
# Backup database (includes all configurations)
cp data/sneakyscanner.db backups/$(date +%Y%m%d)/
# Backup scan outputs
tar -czf backups/$(date +%Y%m%d)/output.tar.gz output/
# Backup configurations
tar -czf backups/$(date +%Y%m%d)/configs.tar.gz configs/
```
**Important**: The database backup includes all scan configurations, settings, schedules, and scan history. No separate configuration file backup is needed.
### Restoring Data
```bash
# Stop application
docker compose -f docker-compose.yml down
# Restore database
# Restore database (includes all configurations)
cp backups/YYYYMMDD/sneakyscanner.db data/
# Restore outputs
@@ -516,6 +497,8 @@ tar -xzf backups/YYYYMMDD/output.tar.gz
docker compose -f docker-compose.yml up -d
```
**Note**: Restoring the database file restores all configurations, settings, schedules, and scan history.
### Cleaning Up Old Scan Results
**Option A: Using the Web UI (Recommended)**
@@ -564,50 +547,52 @@ curl -X POST http://localhost:5000/api/auth/logout \
-b cookies.txt
```
### Config Management (Phase 4)
### Config Management
```bash
# List all configs
curl http://localhost:5000/api/configs \
-b cookies.txt
# Get specific config
curl http://localhost:5000/api/configs/prod-network.yaml \
# Get specific config by ID
curl http://localhost:5000/api/configs/1 \
-b cookies.txt
# Create new config
curl -X POST http://localhost:5000/api/configs \
-H "Content-Type: application/json" \
-d '{
"filename": "test-network.yaml",
"content": "title: Test Network\nsites:\n - name: Test\n cidr: 10.0.0.0/24"
"name": "Test Network",
"cidr": "10.0.0.0/24",
"expected_ports": [
{"port": 80, "protocol": "tcp", "service": "http"},
{"port": 443, "protocol": "tcp", "service": "https"}
],
"ping_expected": true
}' \
-b cookies.txt
# Update config
curl -X PUT http://localhost:5000/api/configs/test-network.yaml \
curl -X PUT http://localhost:5000/api/configs/1 \
-H "Content-Type: application/json" \
-d '{
"content": "title: Updated Test Network\nsites:\n - name: Test Site\n cidr: 10.0.0.0/24"
"name": "Updated Test Network",
"cidr": "10.0.1.0/24"
}' \
-b cookies.txt
# Download config
curl http://localhost:5000/api/configs/test-network.yaml/download \
-b cookies.txt -o test-network.yaml
# Delete config
curl -X DELETE http://localhost:5000/api/configs/test-network.yaml \
curl -X DELETE http://localhost:5000/api/configs/1 \
-b cookies.txt
```
### Scan Management
```bash
# Trigger a scan
# Trigger a scan (using config ID from database)
curl -X POST http://localhost:5000/api/scans \
-H "Content-Type: application/json" \
-d '{"config_id": "/app/configs/prod-network.yaml"}' \
-d '{"config_id": 1}' \
-b cookies.txt
# List all scans
@@ -634,12 +619,12 @@ curl -X DELETE http://localhost:5000/api/scans/123 \
curl http://localhost:5000/api/schedules \
-b cookies.txt
# Create schedule
# Create schedule (using config ID from database)
curl -X POST http://localhost:5000/api/schedules \
-H "Content-Type: application/json" \
-d '{
"name": "Daily Production Scan",
"config_id": "/app/configs/prod-network.yaml",
"config_id": 1,
"cron_expression": "0 2 * * *",
"enabled": true
}' \
@@ -875,24 +860,25 @@ docker compose -f docker-compose.yml logs web | grep -E "(ERROR|Exception|Traceb
docker compose -f docker-compose.yml exec web which masscan nmap
```
### Config Files Not Appearing in Web UI
### Configs Not Appearing in Web UI
**Problem**: Manually created configs don't show up in web interface
**Problem**: Created configs don't show up in web interface
```bash
# Check file permissions (must be readable by web container)
ls -la configs/
# Check database connectivity
docker compose -f docker-compose.yml logs web | grep -i "database"
# Fix permissions if needed
sudo chown -R 1000:1000 configs/
chmod 644 configs/*.yaml
# Verify database file exists and is readable
ls -lh data/sneakyscanner.db
# Verify YAML syntax is valid
docker compose -f docker-compose.yml exec web python3 -c \
"import yaml; yaml.safe_load(open('/app/configs/your-config.yaml'))"
# Check web logs for parsing errors
# Check for errors when creating configs
docker compose -f docker-compose.yml logs web | grep -i "config"
# Try accessing configs via API
curl http://localhost:5000/api/configs -b cookies.txt
# If database is corrupted, check integrity
docker compose -f docker-compose.yml exec web sqlite3 /app/data/sneakyscanner.db "PRAGMA integrity_check;"
```
### Health Check Failing
@@ -979,11 +965,11 @@ server {
# Ensure proper ownership of data directories
sudo chown -R $USER:$USER data/ output/ logs/
# Restrict database file permissions
# Restrict database file permissions (contains configurations and sensitive data)
chmod 600 data/sneakyscanner.db
# Configs should be read-only
chmod 444 configs/*.yaml
# Ensure database directory is writable
chmod 700 data/
```
---
@@ -1051,19 +1037,17 @@ mkdir -p "$BACKUP_DIR"
# Stop application for consistent backup
docker compose -f docker-compose.yml stop web
# Backup database
# Backup database (includes all configurations)
cp data/sneakyscanner.db "$BACKUP_DIR/"
# Backup outputs (last 30 days only)
find output/ -type f -mtime -30 -exec cp --parents {} "$BACKUP_DIR/" \;
# Backup configs
cp -r configs/ "$BACKUP_DIR/"
# Restart application
docker compose -f docker-compose.yml start web
echo "Backup complete: $BACKUP_DIR"
echo "Database backup includes all configurations, settings, and scan history"
```
Make executable and schedule with cron:
@@ -1083,15 +1067,18 @@ crontab -e
# Stop application
docker compose -f docker-compose.yml down
# Restore files
# Restore database (includes all configurations)
cp backups/YYYYMMDD_HHMMSS/sneakyscanner.db data/
cp -r backups/YYYYMMDD_HHMMSS/configs/* configs/
# Restore output files
cp -r backups/YYYYMMDD_HHMMSS/output/* output/
# Start application
docker compose -f docker-compose.yml up -d
```
**Note**: Restoring the database file will restore all configurations, settings, schedules, and scan history from the backup.
---
## Support and Further Reading
@@ -1105,13 +1092,13 @@ docker compose -f docker-compose.yml up -d
## What's New
### Phase 4 (2025-11-17) - Config Creator
- **CIDR-based Config Creator**: Web UI for generating scan configs from CIDR ranges
- **YAML Editor**: Built-in editor with syntax highlighting (CodeMirror)
- **Config Management UI**: List, view, edit, download, and delete configs via web interface
- **Config Upload**: Direct YAML file upload for advanced users
- **REST API**: 7 new config management endpoints
### Phase 4+ (2025-11-17) - Database-Backed Configuration System
- **Database-Backed Configs**: All configurations stored in SQLite database (no YAML files)
- **Web-Based Config Creator**: Form-based UI for creating scan configs from CIDR ranges
- **Config Management UI**: List, view, edit, and delete configs via web interface
- **REST API**: Full config management via RESTful API with database storage
- **Schedule Protection**: Prevents deleting configs used by active schedules
- **Automatic Backups**: Configurations included in database backups
### Phase 3 (2025-11-14) - Dashboard & Scheduling ✅
- **Dashboard**: Summary stats, recent scans, trend charts
@@ -1133,5 +1120,5 @@ docker compose -f docker-compose.yml up -d
---
**Last Updated**: 2025-11-17
**Version**: Phase 4 - Config Creator Complete
**Last Updated**: 2025-11-24
**Version**: Phase 4+ - Database-Backed Configuration System

docs/KNOWN_ISSUES.md Normal file
View File