first
36
.dockerignore
Normal file
@@ -0,0 +1,36 @@
# Node modules
frontend/node_modules
frontend/dist

# Python
backend/__pycache__
backend/**/__pycache__
backend/**/*.pyc
backend/**/*.pyo
backend/**/*.pyd
backend/.pytest_cache
backend/**/.pytest_cache

# Données et logs
data/
logs/
*.sqlite
*.db

# IDE
.vscode/
.idea/
*.swp
*.swo

# Git
.git/
.gitignore

# Documentation
*.md
!README.md

# Divers
.env
.DS_Store
37
.gitignore
vendored
Normal file
@@ -0,0 +1,37 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
backend/**/__pycache__/
.pytest_cache/

# Node
frontend/node_modules/
frontend/dist/
frontend/.vite/

# Environnement
.env
.venv
env/
venv/

# Données
data/
logs/
*.sqlite
*.db

# IDE
.vscode/
.idea/
*.swp
*.swo
.DS_Store

# Build
build/
dist/
*.egg-info/
122
CLAUDE.md
Normal file
@@ -0,0 +1,122 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

IPWatch is a network scanner web application that visualizes IP addresses, their states (online/offline), open ports, and historical data. The project consists of:

- **Backend**: FastAPI + SQLAlchemy + APScheduler for network scanning
- **Frontend**: Vue 3 + Vite + Tailwind with Monokai dark theme
- **Deployment**: Docker containerization with volumes for config and database

## Key Specification Files

Speak in French and comment in French.

The project has detailed specifications that MUST be followed when implementing features:

- [prompt-claude-code.md](prompt-claude-code.md) - Overall project objectives and deliverables
- [architecture-technique.md](architecture-technique.md) - Technical architecture (backend modules, frontend structure, Docker setup)
- [modele-donnees.md](modele-donnees.md) - SQLite database schema (ip and ip_history tables with required indexes)
- [workflow-scan.md](workflow-scan.md) - 10-step scan pipeline from YAML config to WebSocket push
- [consigne-parametrage.md](consigne-parametrage.md) - Complete YAML configuration structure with all sections (app, network, ip_classes, scan, ports, locations, hosts, history, ui, colors, network_advanced, filters, database)
- [consigne-design_webui.md](consigne-design_webui.md) - UI layout (3-column design), interaction patterns, visual states
- [guidelines-css.md](guidelines-css.md) - Monokai color palette, IP cell styling rules (solid border for online, dashed for offline, animated halo for ping)
- [tests-backend.md](tests-backend.md) - Required unit and integration tests
## Architecture Principles

### Backend Structure

- FastAPI application with separate modules for network operations (ping, ARP, port scanning)
- SQLAlchemy models matching the schema in [modele-donnees.md](modele-donnees.md)
- APScheduler for periodic network scans
- WebSocket endpoint for real-time push notifications
- REST APIs for: IP management, scan operations, configuration, historical data

### Frontend Structure

- Vue 3 with Composition API
- Pinia for global state management
- WebSocket client for real-time updates
- 3-column layout: left (IP details), center (IP grid + legend), right (new detections)
- Monokai dark theme with specific color codes from [guidelines-css.md](guidelines-css.md)

### Data Flow

1. YAML configuration loads network CIDR and scan parameters
2. Scheduled scan generates IP list, performs ping (parallel), ARP lookup, port scanning
3. Results classified and stored in SQLite
4. New/changed IPs trigger WebSocket push to frontend
5. UI updates grid with appropriate visual states
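The classification in steps 2-3 can be sketched as pure logic. This is a hedged illustration: `classify()` and `scan_snapshot()` are hypothetical names (the real scanner lives in `backend/app/services/network.py`), but the five resulting states mirror the color configuration used by the grid.

```python
import ipaddress

def classify(online: bool, known: bool, ever_seen: bool) -> str:
    """Map scan results onto the five visual states used by the grid."""
    if online:
        return "online_known" if known else "online_unknown"
    if ever_seen:
        return "offline_known" if known else "offline_unknown"
    return "free"  # never responded and not declared in ip_classes

def scan_snapshot(cidr: str, ping_results: dict, known: set, seen: set) -> dict:
    """Step 2: expand the CIDR into an IP list; step 3: classify each address."""
    snapshot = {}
    for host in ipaddress.ip_network(cidr).hosts():
        ip = str(host)
        snapshot[ip] = classify(ping_results.get(ip, False), ip in known, ip in seen)
    return snapshot

snap = scan_snapshot(
    "192.168.1.0/30",
    ping_results={"192.168.1.1": True},
    known={"192.168.1.1"},
    seen={"192.168.1.1", "192.168.1.2"},
)
# 192.168.1.1 -> online_known, 192.168.1.2 -> offline_unknown
```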
## Database Schema

### ip table (PRIMARY)

- `ip` (PK): IP address
- `name`, `known` (bool), `location`, `host`: metadata
- `first_seen`, `last_seen`: timestamps
- `last_status`: current online/offline state
- `mac`, `vendor`, `hostname`: network info
- `open_ports`: JSON array

### ip_history table

- `id` (PK)
- `ip` (FK to ip.ip)
- `timestamp`, `status`, `open_ports` (JSON)
- **Required index**: timestamp for efficient historical queries

### Important Indexes

- Index on `ip.last_status` for filtering
- Index on `ip_history.timestamp` for 24h history retrieval
## Visual Design Rules

### IP Cell States

- **Online + Known**: Green (#A6E22E) with solid border
- **Online + Unknown**: Cyan (#66D9EF) with solid border
- **Offline**: Dashed border + configurable transparency
- **Ping in progress**: Animated halo using CSS keyframes
- **Free IP**: Distinct color from occupied states

### Theme Colors (Monokai)

- Background: `#272822`
- Text: `#F8F8F2`
- Accents: `#A6E22E` (green), `#F92672` (pink), `#66D9EF` (cyan)
## Configuration System

The application is driven by a YAML configuration file ([consigne-parametrage.md](consigne-parametrage.md)) with these sections:

- `network`: CIDR, gateway, DNS
- `ip_classes`: Define known IPs with metadata
- `scan`: Intervals, parallelization settings
- `ports`: Port scan ranges
- `locations`, `hosts`: Categorical data
- `history`: Retention period
- `ui`: Display preferences, transparency
- `colors`: Custom color mapping
- `network_advanced`: ARP, timeout settings
- `filters`: Default filter states
- `database`: SQLite path
## Testing Requirements

When implementing backend features, ensure tests cover ([tests-backend.md](tests-backend.md)):

- Network module unit tests: `test_ping()`, `test_port_scan()`, `test_classification()`
- SQLAlchemy models: `test_sqlalchemy_models()`
- API endpoints: `test_api_get_ip()`, `test_api_update_ip()`
- Scheduler: `test_scheduler()`
- Integration: Full network scan simulation, WebSocket notification flow
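`test_ping()` is named above as a required unit test; a hedged sketch of how such a test can stub the system ping with `unittest.mock` so it runs without network access. The subprocess-based `ping()` helper is an assumption for illustration, not the project's actual implementation:

```python
import subprocess
from unittest import mock

def ping(ip: str, timeout: float = 1.0) -> bool:
    """Return True when one ICMP echo gets a reply (exit code 0). Illustrative helper."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(int(timeout)), ip],
        capture_output=True,
    )
    return result.returncode == 0

def test_ping_online():
    # Stub subprocess.run so no real ICMP traffic is sent.
    with mock.patch("subprocess.run") as run:
        run.return_value = mock.Mock(returncode=0)
        assert ping("192.168.1.1") is True

def test_ping_offline():
    with mock.patch("subprocess.run") as run:
        run.return_value = mock.Mock(returncode=1)
        assert ping("192.168.1.250") is False

test_ping_online()
test_ping_offline()
```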
## Docker Setup

The application should run as a single Docker service:

- Combined backend + frontend container
- Volume mount for `config.yaml`
- Volume mount for `db.sqlite`
- Exposed ports for web access and WebSocket

## Implementation Notes

- **Parallelization**: Ping operations must be parallelized for performance
- **Real-time updates**: WebSocket is critical for live UI updates during scans
- **MAC vendor lookup**: Use ARP data to populate vendor information
- **Port scanning**: Respect intervals defined in YAML to avoid network overload
- **Classification logic**: Follow the 10-step workflow in [workflow-scan.md](workflow-scan.md)
- **Responsive design**: Grid layout must be fluid with collapsible columns
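The parallelization note can be sketched with `asyncio`: a sweep bounded by a semaphore at `parallel_pings` concurrent probes. `probe()` is a stand-in for a real ICMP ping (assumption; it only simulates latency so the sketch runs offline):

```python
import asyncio

async def probe(ip: str) -> bool:
    """Stand-in for an ICMP round-trip; pretend only the gateway answers."""
    await asyncio.sleep(0.001)
    return ip.endswith(".1")

async def ping_sweep(ips, parallel_pings: int = 50) -> dict:
    """Ping every address with at most `parallel_pings` probes in flight."""
    sem = asyncio.Semaphore(parallel_pings)

    async def limited(ip):
        async with sem:
            return ip, await probe(ip)

    results = await asyncio.gather(*(limited(ip) for ip in ips))
    return dict(results)

ips = [f"192.168.1.{i}" for i in range(1, 255)]
results = asyncio.run(ping_sweep(ips))
# results["192.168.1.1"] is True; the 253 other hosts report offline
```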
BIN
Capture d’écran du 2025-12-06 04-55-12.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 164 KiB |
55
Dockerfile
Normal file
@@ -0,0 +1,55 @@
# Dockerfile multi-stage pour IPWatch
# Backend FastAPI + Frontend Vue 3

# Stage 1: Build frontend Vue
FROM node:20-alpine AS frontend-build

WORKDIR /frontend

# Copier package.json et installer dépendances
COPY frontend/package*.json ./
RUN npm install

# Copier le code source et builder
COPY frontend/ ./
RUN npm run build


# Stage 2: Image finale avec backend + frontend statique
FROM python:3.11-slim

# Variables d'environnement
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1

# Installer les outils réseau nécessaires
RUN apt-get update && apt-get install -y \
    iputils-ping \
    net-tools \
    tcpdump \
    && rm -rf /var/lib/apt/lists/*

# Créer le répertoire de travail
WORKDIR /app

# Copier et installer les dépendances Python
COPY backend/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copier le code backend
COPY backend/ ./backend/

# Copier le frontend buildé depuis le stage 1
COPY --from=frontend-build /frontend/dist ./frontend/dist

# Créer les dossiers pour volumes
RUN mkdir -p /app/data

# Copier config.yaml par défaut (sera écrasé par le volume)
COPY config.yaml /app/config.yaml

# Exposer le port
EXPOSE 8080

# Commande de démarrage
CMD ["uvicorn", "backend.app.main:app", "--host", "0.0.0.0", "--port", "8080"]
79
Makefile
Normal file
@@ -0,0 +1,79 @@
# Makefile pour IPWatch

.PHONY: help build up down logs restart clean test install-backend install-frontend dev

help: ## Afficher l'aide
	@echo "IPWatch - Commandes disponibles:"
	@echo ""
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'

# Docker
build: ## Construire l'image Docker
	docker-compose build

up: ## Démarrer les conteneurs
	docker-compose up -d
	@echo "IPWatch démarré sur http://localhost:8080"

down: ## Arrêter les conteneurs
	docker-compose down

logs: ## Afficher les logs
	docker-compose logs -f

restart: ## Redémarrer les conteneurs
	docker-compose restart

clean: ## Nettoyer conteneurs, images et volumes
	docker-compose down -v
	rm -rf data/*.sqlite logs/*

# Développement
install-backend: ## Installer dépendances backend
	cd backend && pip install -r requirements.txt

install-frontend: ## Installer dépendances frontend
	cd frontend && npm install

dev-backend: ## Lancer le backend en dev
	cd backend && python -m backend.app.main

dev-frontend: ## Lancer le frontend en dev
	cd frontend && npm run dev

dev: ## Lancer backend + frontend en dev (tmux requis)
	@echo "Lancement backend et frontend..."
	@tmux new-session -d -s ipwatch 'cd backend && python -m backend.app.main'
	@tmux split-window -h 'cd frontend && npm run dev'
	@tmux attach-session -t ipwatch

# Tests
test: ## Exécuter les tests backend
	cd backend && pytest -v

test-coverage: ## Tests avec couverture
	cd backend && pytest --cov=app --cov-report=html

# Utilitaires
init: ## Initialiser le projet (install + build)
	make install-backend
	make install-frontend
	make build

setup-config: ## Créer config.yaml depuis template (si absent)
	@if [ ! -f config.yaml ]; then \
		echo "Création de config.yaml..."; \
		cp config.yaml.example config.yaml 2>/dev/null || echo "config.yaml.example introuvable"; \
	else \
		echo "config.yaml existe déjà"; \
	fi

db-backup: ## Sauvegarder la base de données
	@mkdir -p backups
	@cp data/db.sqlite backups/db_$$(date +%Y%m%d_%H%M%S).sqlite
	@echo "Sauvegarde créée dans backups/"

db-reset: ## Réinitialiser la base de données
	@echo "⚠️  Suppression de la base de données..."
	rm -f data/db.sqlite
	@echo "Base de données supprimée. Elle sera recréée au prochain démarrage."
270
README.md
@@ -1,2 +1,270 @@
# LANMap
# IPWatch - Scanner Réseau Temps Réel

IPWatch est une application web de scan réseau qui visualise en temps réel l'état des adresses IP, leurs ports ouverts, et l'historique des détections sur votre réseau local.

## Fonctionnalités

- 🔍 **Scan réseau automatique** : Ping, ARP lookup, et scan de ports périodiques
- 📊 **Visualisation temps réel** : Interface web avec mise à jour WebSocket
- 🎨 **Thème Monokai** : Interface sombre avec codes couleurs intuitifs
- 📝 **Gestion des IP** : Nommage, classification (connue/inconnue), métadonnées
- 📈 **Historique 24h** : Suivi de l'évolution de l'état du réseau
- 🔔 **Détection automatique** : Notification des nouvelles IP sur le réseau
- 🐳 **Déploiement Docker** : Configuration simple avec docker-compose
## Technologies

### Backend

- **FastAPI** - API REST et WebSocket
- **SQLAlchemy** - ORM pour SQLite
- **APScheduler** - Tâches planifiées
- **Scapy** - Scan ARP et réseau

### Frontend

- **Vue 3** - Framework UI avec Composition API
- **Pinia** - State management
- **Tailwind CSS** - Styles avec palette Monokai
- **Vite** - Build tool

### Infrastructure

- **Docker** - Conteneurisation
- **SQLite** - Base de données
- **WebSocket** - Communication temps réel
## Installation

### Avec Docker (recommandé)

1. **Cloner le repository**
   ```bash
   git clone <repo-url>
   cd ipwatch
   ```

2. **Configurer le réseau**

   Éditer `config.yaml` et ajuster le CIDR de votre réseau :
   ```yaml
   network:
     cidr: "192.168.1.0/24"  # Adapter à votre réseau
   ```

3. **Lancer avec docker-compose**
   ```bash
   docker-compose up -d
   ```

4. **Accéder à l'interface**

   Ouvrir votre navigateur : `http://localhost:8080`

### Installation manuelle (développement)

#### Backend

```bash
cd backend
pip install -r requirements.txt
python -m backend.app.main
```

#### Frontend

```bash
cd frontend
npm install
npm run dev
```

L'API sera accessible sur `http://localhost:8080`.
Le frontend sur `http://localhost:3000`.
## Configuration

Le fichier `config.yaml` permet de configurer :

- **Réseau** : CIDR, gateway, DNS
- **IPs connues** : Liste des appareils avec noms et emplacements
- **Scan** : Intervalles ping/ports, parallélisation
- **Ports** : Ports à scanner
- **Historique** : Durée de rétention
- **Interface** : Transparence, couleurs
- **Base de données** : Chemin SQLite

Exemple :
```yaml
network:
  cidr: "192.168.1.0/24"

scan:
  ping_interval: 60        # Scan ping toutes les 60s
  port_scan_interval: 300  # Scan ports toutes les 5min
  parallel_pings: 50       # 50 pings simultanés max

ports:
  ranges:
    - "22"    # SSH
    - "80"    # HTTP
    - "443"   # HTTPS
    - "3389"  # RDP

ip_classes:
  "192.168.1.1":
    name: "Box Internet"
    location: "Entrée"
    host: "Routeur"
```
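Pour situer ce que la valeur `network.cidr` implique côté scan, un aperçu (stdlib uniquement) des adresses qu'un /24 génère :

```python
import ipaddress

cidr = "192.168.1.0/24"  # même valeur que dans config.yaml ci-dessus
# hosts() exclut l'adresse réseau et l'adresse de broadcast
hosts = [str(h) for h in ipaddress.ip_network(cidr).hosts()]
print(len(hosts))           # 254 adresses scannables
print(hosts[0], hosts[-1])  # 192.168.1.1 192.168.1.254
```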
## Interface utilisateur

L'interface est organisée en 3 colonnes :

### Colonne gauche - Détails IP

- Informations détaillées de l'IP sélectionnée
- Formulaire d'édition (nom, localisation, type d'hôte)
- Informations réseau (MAC, vendor, hostname, ports ouverts)

### Colonne centrale - Grille d'IP

- Vue d'ensemble de toutes les IP du réseau
- Codes couleurs selon l'état :
  - 🟢 **Vert** : En ligne + connue
  - 🔵 **Cyan** : En ligne + inconnue
  - 🔴 **Rose** : Hors ligne + connue (bordure pointillée)
  - 🟣 **Violet** : Hors ligne + inconnue (bordure pointillée)
  - ⚪ **Gris** : IP libre
- Filtres : En ligne, Hors ligne, Connues, Inconnues, Libres
- Légende interactive

### Colonne droite - Nouvelles détections

- Liste des IP récemment découvertes
- Tri par ordre chronologique
- Indicateur temps relatif
## API REST

### Endpoints IPs

- `GET /api/ips/` - Liste toutes les IPs (avec filtres optionnels)
- `GET /api/ips/{ip}` - Détails d'une IP
- `PUT /api/ips/{ip}` - Mettre à jour une IP
- `DELETE /api/ips/{ip}` - Supprimer une IP
- `GET /api/ips/{ip}/history` - Historique d'une IP
- `GET /api/ips/stats/summary` - Statistiques globales

### Endpoints Scan

- `POST /api/scan/start` - Lancer un scan immédiat
- `POST /api/scan/cleanup-history` - Nettoyer l'historique ancien

### WebSocket

- `WS /ws` - Connexion WebSocket pour notifications temps réel

Messages WebSocket :
- `scan_start` - Début de scan
- `scan_complete` - Fin de scan avec statistiques
- `ip_update` - Changement d'état d'une IP
- `new_ip` - Nouvelle IP détectée
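À titre d'illustration, un dispatcher minimal côté client pour ces quatre types de messages. Le format exact des payloads (`type` / `data`) est une hypothèse, pas la spécification du protocole :

```python
import json

def handle_message(raw: str, state: dict) -> str:
    """Applique un message WebSocket au state local et retourne son type."""
    msg = json.loads(raw)
    kind = msg["type"]
    if kind == "scan_start":
        state["scanning"] = True
    elif kind == "scan_complete":
        state["scanning"] = False
        state["stats"] = msg.get("data", {})
    elif kind == "ip_update":
        # Remplace l'entrée de l'IP concernée par son nouvel état
        state.setdefault("ips", {})[msg["data"]["ip"]] = msg["data"]
    elif kind == "new_ip":
        state.setdefault("new", []).append(msg["data"]["ip"])
    return kind

state = {}
handle_message('{"type": "scan_start"}', state)
handle_message('{"type": "new_ip", "data": {"ip": "192.168.1.42"}}', state)
```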
## Tests

Exécuter les tests backend :

```bash
cd backend
pytest
```

Tests disponibles :
- `test_network.py` - Tests modules réseau (ping, ARP, port scan)
- `test_models.py` - Tests modèles SQLAlchemy
- `test_api.py` - Tests endpoints API
- `test_scheduler.py` - Tests scheduler APScheduler

## Architecture

```
ipwatch/
├── backend/
│   ├── app/
│   │   ├── core/        # Configuration, database
│   │   ├── models/      # Modèles SQLAlchemy
│   │   ├── routers/     # Endpoints API
│   │   ├── services/    # Services réseau, scheduler, WebSocket
│   │   └── main.py      # Application FastAPI
│   └── requirements.txt
├── frontend/
│   ├── src/
│   │   ├── assets/      # CSS Monokai
│   │   ├── components/  # Composants Vue
│   │   ├── stores/      # Pinia stores
│   │   └── main.js
│   └── package.json
├── tests/               # Tests backend
├── config.yaml          # Configuration
├── docker-compose.yml
└── Dockerfile
```
## Workflow de scan

Le scan réseau suit ce workflow (10 étapes) :

1. Charger configuration YAML
2. Générer liste IP du CIDR
3. Ping (parallélisé)
4. ARP + MAC vendor lookup
5. Port scan selon intervalle
6. Classification état (online/offline)
7. Mise à jour SQLite
8. Détection nouvelles IP
9. Push WebSocket vers clients
10. Mise à jour UI temps réel
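L'étape 5 (port scan) peut se schématiser par un simple test de connexion TCP. Esquisse hypothétique : le vrai scanner s'appuie sur Scapy et vise les hôtes du LAN ; ici un listener local jetable rend l'exemple autonome :

```python
import socket

def scan_port(ip: str, port: int, timeout: float = 0.5) -> bool:
    """Teste un port TCP : connect_ex retourne 0 si le port accepte la connexion."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((ip, port)) == 0

# Un listener jetable fournit un port dont on sait qu'il est ouvert.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # port éphémère choisi par l'OS
listener.listen(1)
open_port = listener.getsockname()[1]

assert scan_port("127.0.0.1", open_port) is True
listener.close()
```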
## Sécurité

⚠️ **Attention** : IPWatch nécessite des privilèges réseau élevés (ping, ARP).

Le conteneur Docker utilise :
- `network_mode: host` - Accès au réseau local
- `privileged: true` - Privilèges pour scan réseau
- `cap_add: NET_ADMIN, NET_RAW` - Capacités réseau

**N'exposez pas cette application sur internet** - Usage réseau local uniquement.

## Volumes Docker

Trois volumes sont montés :
- `./config.yaml` - Configuration (lecture seule)
- `./data/` - Base de données SQLite
- `./logs/` - Logs applicatifs

## Dépannage

### Le scan ne détecte aucune IP

1. Vérifier le CIDR dans `config.yaml`
2. Vérifier que Docker a accès au réseau (`network_mode: host`)
3. Vérifier les logs : `docker logs ipwatch`

### WebSocket déconnecté

- Vérifier que le port 8080 est accessible
- Vérifier les logs du navigateur (F12 → Console)
- Le WebSocket se reconnecte automatiquement après 5s

### Erreur de permissions réseau

Le conteneur nécessite `privileged: true` pour :
- Envoi de paquets ICMP (ping)
- Scan ARP
- Capture de paquets réseau

## Licence

MIT

## Auteur

Développé avec Claude Code selon les spécifications IPWatch.
260
STRUCTURE.md
Normal file
@@ -0,0 +1,260 @@
# Structure du Projet IPWatch

## Vue d'ensemble

```
ipwatch/
├── backend/                      # Backend FastAPI
│   ├── app/
│   │   ├── core/                 # Configuration et database
│   │   │   ├── config.py         # Gestionnaire config YAML
│   │   │   └── database.py       # Setup SQLAlchemy
│   │   ├── models/               # Modèles SQLAlchemy
│   │   │   └── ip.py             # Tables IP et IPHistory
│   │   ├── routers/              # Endpoints API REST
│   │   │   ├── ips.py            # CRUD IPs + historique
│   │   │   ├── scan.py           # Contrôle scans
│   │   │   └── websocket.py      # Endpoint WebSocket
│   │   ├── services/             # Services métier
│   │   │   ├── network.py        # Scanner réseau (ping, ARP, ports)
│   │   │   ├── scheduler.py      # APScheduler pour tâches périodiques
│   │   │   └── websocket.py      # Gestionnaire WebSocket
│   │   └── main.py               # Application FastAPI principale
│   └── requirements.txt          # Dépendances Python
│
├── frontend/                     # Frontend Vue 3
│   ├── src/
│   │   ├── assets/
│   │   │   └── main.css          # Styles Monokai + animations
│   │   ├── components/
│   │   │   ├── AppHeader.vue     # Header avec stats et contrôles
│   │   │   ├── IPCell.vue        # Cellule IP dans la grille
│   │   │   ├── IPDetails.vue     # Détails IP (colonne gauche)
│   │   │   ├── IPGrid.vue        # Grille d'IP (colonne centrale)
│   │   │   └── NewDetections.vue # Nouvelles IP (colonne droite)
│   │   ├── stores/
│   │   │   └── ipStore.js        # Store Pinia + WebSocket client
│   │   ├── App.vue               # Layout 3 colonnes
│   │   └── main.js               # Point d'entrée
│   ├── package.json              # Dépendances Node
│   ├── vite.config.js            # Configuration Vite
│   ├── tailwind.config.js        # Configuration Tailwind (Monokai)
│   └── index.html                # HTML principal
│
├── tests/                        # Tests backend
│   ├── test_network.py           # Tests modules réseau
│   ├── test_models.py            # Tests modèles SQLAlchemy
│   ├── test_api.py               # Tests endpoints API
│   └── test_scheduler.py         # Tests APScheduler
│
├── config.yaml                   # Configuration principale
├── docker-compose.yml            # Orchestration Docker
├── Dockerfile                    # Image multi-stage
├── Makefile                      # Commandes utiles
├── start.sh                      # Script démarrage rapide
├── pytest.ini                    # Configuration pytest
├── .gitignore                    # Exclusions Git
├── .dockerignore                 # Exclusions Docker
├── README.md                     # Documentation
├── CLAUDE.md                     # Guide pour Claude Code
└── STRUCTURE.md                  # Ce fichier
```
## Flux de données

### 1. Scan réseau (backend)

```
APScheduler (scheduler.py)
    ↓ déclenche périodiquement
NetworkScanner (network.py)
    ↓ effectue scan complet
    ├─→ Ping parallélisé
    ├─→ ARP lookup + MAC vendor
    └─→ Port scan
    ↓ résultats
SQLAlchemy (models/ip.py)
    ↓ enregistre dans
SQLite (data/db.sqlite)
    ↓ notifie via
WebSocket Manager (services/websocket.py)
    ↓ broadcast vers
Clients WebSocket (frontend)
```

### 2. Interface utilisateur (frontend)

```
App.vue (layout 3 colonnes)
    ├─→ IPDetails.vue (gauche)
    ├─→ IPGrid.vue (centre)
    │       └─→ IPCell.vue (x254)
    └─→ NewDetections.vue (droite)
    ↓ tous utilisent
Pinia Store (ipStore.js)
    ↓ communique avec
    ├─→ API REST (/api/ips/*)
    └─→ WebSocket (/ws)
```

### 3. Workflow complet d'un scan

```
1. Scheduler déclenche scan
2. NetworkScanner génère liste IP (CIDR)
3. Ping parallélisé (50 simultanés)
4. ARP lookup pour MAC/vendor
5. Port scan (ports configurés)
6. Classification état (online/offline)
7. Mise à jour base de données
8. Détection nouvelles IP
9. Push WebSocket vers clients
10. Mise à jour UI temps réel
```
## Composants clés

### Backend

| Fichier | Responsabilité | Lignes |
|---------|---------------|--------|
| `services/network.py` | Scan réseau (ping, ARP, ports) | ~300 |
| `services/scheduler.py` | Tâches planifiées | ~100 |
| `services/websocket.py` | Gestionnaire WebSocket | ~150 |
| `routers/ips.py` | API CRUD IPs | ~200 |
| `routers/scan.py` | API contrôle scan | ~150 |
| `models/ip.py` | Modèles SQLAlchemy | ~100 |
| `core/config.py` | Gestion config YAML | ~150 |
| `main.py` | Application FastAPI | ~150 |

### Frontend

| Fichier | Responsabilité | Lignes |
|---------|---------------|--------|
| `stores/ipStore.js` | State management + WebSocket | ~250 |
| `components/IPGrid.vue` | Grille IP + filtres | ~100 |
| `components/IPDetails.vue` | Détails + édition IP | ~200 |
| `components/IPCell.vue` | Cellule IP individuelle | ~80 |
| `components/NewDetections.vue` | Liste nouvelles IP | ~120 |
| `assets/main.css` | Styles Monokai | ~150 |
## Points d'entrée

### Développement

**Backend** :
```bash
cd backend
python -m backend.app.main
# ou
make dev-backend
```

**Frontend** :
```bash
cd frontend
npm run dev
# ou
make dev-frontend
```

### Production (Docker)

```bash
docker-compose up -d
# ou
./start.sh
# ou
make up
```

## Configuration requise

### Backend
- Python 3.11+
- Privilèges réseau (ping, ARP)
- Accès au réseau local

### Frontend
- Node.js 20+
- npm

### Docker
- Docker 20+
- docker-compose 2+

## Ports utilisés

- **8080** : API backend + frontend buildé (production)
- **3000** : Frontend dev (développement)

## Volumes Docker

- `./config.yaml` → `/app/config.yaml` (ro)
- `./data/` → `/app/data/`
- `./logs/` → `/app/logs/`

## Base de données

**SQLite** : `data/db.sqlite`

Tables :
- `ip` : Table principale des IP (14 colonnes)
- `ip_history` : Historique des états (5 colonnes)

Index :
- `ip.last_status`
- `ip.known`
- `ip_history.timestamp`
- `ip_history.ip`

## Tests

Lancer les tests :
```bash
pytest
# ou
make test
```

Couverture :
```bash
pytest --cov=backend.app --cov-report=html
# ou
make test-coverage
```

## Commandes utiles

Voir toutes les commandes :
```bash
make help
```

Principales commandes :
- `make build` - Construire l'image
- `make up` - Démarrer
- `make down` - Arrêter
- `make logs` - Voir les logs
- `make test` - Tests
- `make clean` - Nettoyer
- `make db-backup` - Sauvegarder DB
- `make db-reset` - Réinitialiser DB

## Dépendances principales

### Backend (Python)
- fastapi 0.109.0
- uvicorn 0.27.0
- sqlalchemy 2.0.25
- pydantic 2.5.3
- apscheduler 3.10.4
- scapy 2.5.0
- pytest 7.4.4

### Frontend (JavaScript)
- vue 3.4.15
- pinia 2.1.7
- axios 1.6.5
- vite 5.0.11
- tailwindcss 3.4.1
17
architecture-technique.md
Normal file
@@ -0,0 +1,17 @@
# architecture-technique.md

## Backend

- FastAPI + SQLAlchemy + APScheduler
- Modules réseau : ping, arp, port scan
- WebSocket pour push temps réel
- APIs REST pour : IP, scan, paramètres, historique

## Frontend

- Vue 3 + Vite + Tailwind
- State global (Pinia)
- WebSocket client

## Docker

- service web (backend + frontend)
- volume config.yaml
- volume db.sqlite
1
backend/app/__init__.py
Normal file
@@ -0,0 +1 @@
# IPWatch Backend Application
1
backend/app/core/__init__.py
Normal file
@@ -0,0 +1 @@
# Core configuration modules
111
backend/app/core/config.py
Normal file
@@ -0,0 +1,111 @@
|
||||
"""
|
||||
Configuration management pour IPWatch
|
||||
Charge et valide le fichier config.yaml
|
||||
"""
|
||||
import yaml
|
||||
from pathlib import Path
|
||||
from typing import Dict, Any, List, Optional
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
|
||||
class AppConfig(BaseModel):
|
||||
"""Configuration de l'application"""
|
||||
name: str = "IPWatch"
|
||||
    version: str = "1.0.0"
    debug: bool = False


class NetworkConfig(BaseModel):
    """Network configuration"""
    cidr: str
    gateway: Optional[str] = None
    dns: Optional[List[str]] = None


class ScanConfig(BaseModel):
    """Scan configuration"""
    ping_interval: int = 60  # seconds
    port_scan_interval: int = 300  # seconds
    parallel_pings: int = 50
    timeout: float = 1.0


class PortsConfig(BaseModel):
    """Ports to scan"""
    ranges: List[str] = ["22", "80", "443", "3389", "8080"]


class HistoryConfig(BaseModel):
    """History retention configuration"""
    retention_hours: int = 24


class UIConfig(BaseModel):
    """UI configuration"""
    offline_transparency: float = 0.5
    show_mac: bool = True
    show_vendor: bool = True


class ColorsConfig(BaseModel):
    """Color configuration"""
    free: str = "#75715E"
    online_known: str = "#A6E22E"
    online_unknown: str = "#66D9EF"
    offline_known: str = "#F92672"
    offline_unknown: str = "#AE81FF"


class DatabaseConfig(BaseModel):
    """Database configuration"""
    path: str = "./data/db.sqlite"


class IPWatchConfig(BaseModel):
    """Complete IPWatch configuration"""
    app: AppConfig = Field(default_factory=AppConfig)
    network: NetworkConfig
    ip_classes: Dict[str, Any] = Field(default_factory=dict)
    scan: ScanConfig = Field(default_factory=ScanConfig)
    ports: PortsConfig = Field(default_factory=PortsConfig)
    locations: List[str] = Field(default_factory=list)
    hosts: List[str] = Field(default_factory=list)
    history: HistoryConfig = Field(default_factory=HistoryConfig)
    ui: UIConfig = Field(default_factory=UIConfig)
    colors: ColorsConfig = Field(default_factory=ColorsConfig)
    database: DatabaseConfig = Field(default_factory=DatabaseConfig)


class ConfigManager:
    """Singleton configuration manager"""
    _instance: Optional['ConfigManager'] = None
    _config: Optional[IPWatchConfig] = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def load_config(self, config_path: str = "./config.yaml") -> IPWatchConfig:
        """Load the configuration from the YAML file"""
        path = Path(config_path)

        if not path.exists():
            raise FileNotFoundError(f"Configuration file not found: {config_path}")

        with open(path, 'r', encoding='utf-8') as f:
            yaml_data = yaml.safe_load(f)

        self._config = IPWatchConfig(**yaml_data)
        return self._config

    @property
    def config(self) -> IPWatchConfig:
        """Return the current configuration"""
        if self._config is None:
            raise RuntimeError("Configuration not loaded. Call load_config() first.")
        return self._config


# Global instance
config_manager = ConfigManager()
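The `ConfigManager` above relies on overriding `__new__` so that every construction returns the same object. A minimal, dependency-free sketch of that pattern (the class name here is illustrative, not part of IPWatch):

```python
class Singleton:
    """Return the same instance for every construction via __new__."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


a = Singleton()
b = Singleton()
print(a is b)  # every call yields the one shared instance
```

Because `__new__` is class-level, all importers of the module-level `config_manager` instance share the same loaded configuration.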
47
backend/app/core/database.py
Normal file
@@ -0,0 +1,47 @@
"""
SQLAlchemy database configuration
"""
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker
from pathlib import Path

# Base for SQLAlchemy models
Base = declarative_base()

# Engine and session factory
engine = None
SessionLocal = None


def init_database(db_path: str = "./data/db.sqlite"):
    """Initialize the database connection"""
    global engine, SessionLocal

    # Create the data directory if needed
    Path(db_path).parent.mkdir(parents=True, exist_ok=True)

    # Create the SQLite engine
    database_url = f"sqlite:///{db_path}"
    engine = create_engine(
        database_url,
        connect_args={"check_same_thread": False},
        echo=False
    )

    # Create the session factory
    SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

    # Create the tables
    Base.metadata.create_all(bind=engine)

    return engine


def get_db():
    """Dependency that yields a DB session"""
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
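The `get_db` generator above follows the standard FastAPI dependency shape: yield a resource, then release it in `finally` even if the request handler raised. A stdlib-only sketch of the same pattern using `sqlite3` (the in-memory database is illustrative):

```python
import sqlite3


def get_conn():
    """Yield a connection and guarantee it is closed afterwards."""
    conn = sqlite3.connect(":memory:")
    try:
        yield conn
    finally:
        conn.close()


# Driving the generator the way FastAPI's Depends() does:
gen = get_conn()
conn = next(gen)          # setup: obtain the yielded resource
conn.execute("CREATE TABLE ip (addr TEXT)")
gen.close()               # teardown: runs the finally block, closing conn
```

`gen.close()` raises `GeneratorExit` inside the generator, so the `finally` block always runs and the connection cannot leak.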
187
backend/app/main.py
Normal file
@@ -0,0 +1,187 @@
"""
Main FastAPI application for IPWatch
Backend entry point
"""
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from fastapi.responses import FileResponse
from contextlib import asynccontextmanager
from pathlib import Path

from backend.app.core.config import config_manager
from backend.app.core.database import init_database, get_db
from backend.app.routers import ips_router, scan_router, websocket_router
from backend.app.services.scheduler import scan_scheduler
from backend.app.routers.scan import perform_scan


@asynccontextmanager
async def lifespan(app: FastAPI):
    """
    Application lifecycle manager
    Initializes and cleans up resources
    """
    # Startup
    print("=== Starting IPWatch ===")

    # 1. Load the configuration
    try:
        config = config_manager.load_config("./config.yaml")
        print(f"✓ Configuration loaded: {config.network.cidr}")
    except Exception as e:
        print(f"✗ Failed to load config: {e}")
        raise

    # 2. Initialize the database
    try:
        init_database(config.database.path)
        print(f"✓ Database initialized: {config.database.path}")
    except Exception as e:
        print(f"✗ Failed to initialize DB: {e}")
        raise

    # 3. Start the scheduler
    try:
        scan_scheduler.start()

        # Create a DB session for scheduled scans
        from backend.app.core.database import SessionLocal

        async def scheduled_scan():
            """Wrapper running a scheduled scan with its own DB session"""
            db = SessionLocal()
            try:
                await perform_scan(db)
            finally:
                db.close()

        # Configure the periodic tasks
        scan_scheduler.add_ping_scan_job(
            scheduled_scan,
            interval_seconds=config.scan.ping_interval
        )

        scan_scheduler.add_port_scan_job(
            scheduled_scan,
            interval_seconds=config.scan.port_scan_interval
        )

        # History cleanup task
        async def cleanup_history():
            """Delete history entries older than the retention window"""
            from backend.app.models.ip import IPHistory
            from datetime import datetime, timedelta

            db = SessionLocal()
            try:
                cutoff = datetime.utcnow() - timedelta(hours=config.history.retention_hours)
                deleted = db.query(IPHistory).filter(IPHistory.timestamp < cutoff).delete()
                db.commit()
                print(f"History cleanup: {deleted} entries deleted")
            finally:
                db.close()

        scan_scheduler.add_cleanup_job(cleanup_history, interval_hours=1)

        print("✓ Scheduler started")
    except Exception as e:
        print(f"✗ Failed to start scheduler: {e}")

    print("=== IPWatch ready ===\n")

    yield

    # Shutdown
    print("\n=== Stopping IPWatch ===")
    scan_scheduler.stop()
    print("✓ Scheduler stopped")


# Create the FastAPI application
app = FastAPI(
    title="IPWatch API",
    description="Backend API for IPWatch - real-time network scanner",
    version="1.0.0",
    lifespan=lifespan
)

# CORS configuration for the frontend
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Restrict in production
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Register the API routers
app.include_router(ips_router)
app.include_router(scan_router)
app.include_router(websocket_router)


@app.get("/health")
async def health_check():
    """Health check endpoint"""
    return {
        "status": "healthy",
        "scheduler": scan_scheduler.is_running
    }


# Serve the frontend static files
frontend_dist = Path(__file__).parent.parent.parent / "frontend" / "dist"

if frontend_dist.exists():
    # Mount the static assets
    app.mount("/assets", StaticFiles(directory=str(frontend_dist / "assets")), name="assets")

    # Root route serving index.html
    @app.get("/")
    async def serve_frontend():
        """Serve the Vue frontend"""
        index_file = frontend_dist / "index.html"
        if index_file.exists():
            return FileResponse(index_file)
        return {
            "name": "IPWatch API",
            "version": "1.0.0",
            "status": "running",
            "error": "Frontend not found"
        }

    # Catch-all for Vue Router (SPA)
    @app.get("/{full_path:path}")
    async def catch_all(full_path: str):
        """Catch-all route for Vue Router"""
        # Do not intercept API routes
        if full_path.startswith("api/") or full_path.startswith("ws"):
            return {"error": "Not found"}

        # Serve index.html for every other route
        index_file = frontend_dist / "index.html"
        if index_file.exists():
            return FileResponse(index_file)
        return {"error": "Frontend not found"}
else:
    @app.get("/")
    async def root():
        """Root endpoint (development mode without frontend)"""
        return {
            "name": "IPWatch API",
            "version": "1.0.0",
            "status": "running",
            "note": "Frontend not built - use dev mode"
        }


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(
        "backend.app.main:app",
        host="0.0.0.0",
        port=8080,
        reload=True
    )
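The `cleanup_history` task above keeps only entries newer than `retention_hours` by comparing each timestamp to a cutoff. The cutoff arithmetic in isolation (the datetimes below are illustrative):

```python
from datetime import datetime, timedelta


def select_expired(timestamps, now, retention_hours=24):
    """Return the timestamps strictly older than the retention window."""
    cutoff = now - timedelta(hours=retention_hours)
    return [ts for ts in timestamps if ts < cutoff]


now = datetime(2024, 1, 2, 12, 0)
entries = [datetime(2024, 1, 1, 11, 0),   # 25h old -> expired
           datetime(2024, 1, 2, 0, 0)]    # 12h old -> kept
print(select_expired(entries, now))
```

The SQL `.delete()` in the task applies exactly this predicate (`timestamp < cutoff`) server-side instead of in Python.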
6
backend/app/models/__init__.py
Normal file
@@ -0,0 +1,6 @@
"""
SQLAlchemy models for IPWatch
"""
from .ip import IP, IPHistory

__all__ = ["IP", "IPHistory"]
82
backend/app/models/ip.py
Normal file
@@ -0,0 +1,82 @@
"""
Data models for IP addresses and their history
Based on modele-donnees.md
"""
from sqlalchemy import Column, String, Boolean, DateTime, Integer, ForeignKey, Index, JSON
from sqlalchemy.orm import relationship
from datetime import datetime
from backend.app.core.database import Base


class IP(Base):
    """
    Main IP address table
    Stores the current state and metadata of each IP
    """
    __tablename__ = "ip"

    # Primary key
    ip = Column(String, primary_key=True, index=True)

    # Metadata
    name = Column(String, nullable=True)  # Name assigned to the IP
    known = Column(Boolean, default=False, index=True)  # Known or unknown IP
    location = Column(String, nullable=True)  # Location (e.g. "Office", "Server room")
    host = Column(String, nullable=True)  # Host type (e.g. "PC", "Printer")

    # Timestamps
    first_seen = Column(DateTime, default=datetime.utcnow)  # First detection
    last_seen = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)  # Last seen

    # Network state
    last_status = Column(String, index=True)  # "online", "offline", "unknown"

    # Network information
    mac = Column(String, nullable=True)  # MAC address
    vendor = Column(String, nullable=True)  # Manufacturer (MAC lookup)
    hostname = Column(String, nullable=True)  # Network hostname

    # Open ports (stored as JSON)
    open_ports = Column(JSON, default=list)  # List of open ports

    # Relationship with the history table
    history = relationship("IPHistory", back_populates="ip_ref", cascade="all, delete-orphan")

    def __repr__(self):
        return f"<IP {self.ip} - {self.last_status} - {self.name or 'unnamed'}>"


class IPHistory(Base):
    """
    IP state history table
    Stores the evolution over time (24h by default)
    """
    __tablename__ = "ip_history"

    # Auto-incremented primary key
    id = Column(Integer, primary_key=True, autoincrement=True)

    # Foreign key to the IP table
    ip = Column(String, ForeignKey("ip.ip", ondelete="CASCADE"), nullable=False, index=True)

    # Timestamp of the record
    timestamp = Column(DateTime, default=datetime.utcnow, index=True, nullable=False)

    # State at that time
    status = Column(String, nullable=False)  # "online", "offline"

    # Open ports at that time (JSON)
    open_ports = Column(JSON, default=list)

    # Inverse relationship to IP
    ip_ref = relationship("IP", back_populates="history")

    def __repr__(self):
        return f"<IPHistory {self.ip} - {self.timestamp} - {self.status}>"


# Recommended indexes (those declared with index=True already exist)
# Additional indexes if needed
Index('idx_ip_last_status', IP.last_status)
Index('idx_ip_history_timestamp', IPHistory.timestamp)
Index('idx_ip_history_ip', IPHistory.ip)
8
backend/app/routers/__init__.py
Normal file
@@ -0,0 +1,8 @@
"""
API routers for IPWatch
"""
from .ips import router as ips_router
from .scan import router as scan_router
from .websocket import router as websocket_router

__all__ = ["ips_router", "scan_router", "websocket_router"]
216
backend/app/routers/ips.py
Normal file
@@ -0,0 +1,216 @@
"""
API endpoints for IP management
"""
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from sqlalchemy import desc
from typing import List, Optional
from datetime import datetime, timedelta

from backend.app.core.database import get_db
from backend.app.models.ip import IP, IPHistory
from pydantic import BaseModel

router = APIRouter(prefix="/api/ips", tags=["IPs"])


# Pydantic schemas for validation
class IPUpdate(BaseModel):
    """Schema for updating an IP"""
    name: Optional[str] = None
    known: Optional[bool] = None
    location: Optional[str] = None
    host: Optional[str] = None


class IPResponse(BaseModel):
    """IP response schema"""
    ip: str
    name: Optional[str]
    known: bool
    location: Optional[str]
    host: Optional[str]
    first_seen: Optional[datetime]
    last_seen: Optional[datetime]
    last_status: Optional[str]
    mac: Optional[str]
    vendor: Optional[str]
    hostname: Optional[str]
    open_ports: List[int]

    class Config:
        from_attributes = True


class IPHistoryResponse(BaseModel):
    """History response schema"""
    id: int
    ip: str
    timestamp: datetime
    status: str
    open_ports: List[int]

    class Config:
        from_attributes = True


@router.get("/", response_model=List[IPResponse])
async def get_all_ips(
    status: Optional[str] = None,
    known: Optional[bool] = None,
    db: Session = Depends(get_db)
):
    """
    Return all IPs, with optional filters

    Args:
        status: Filter by status (online/offline)
        known: Filter by known/unknown IPs
        db: Database session

    Returns:
        List of IPs
    """
    query = db.query(IP)

    if status:
        query = query.filter(IP.last_status == status)

    if known is not None:
        query = query.filter(IP.known == known)

    ips = query.all()
    return ips


@router.get("/{ip_address}", response_model=IPResponse)
async def get_ip(ip_address: str, db: Session = Depends(get_db)):
    """
    Return the details of a specific IP

    Args:
        ip_address: IP address
        db: Database session

    Returns:
        IP details
    """
    ip = db.query(IP).filter(IP.ip == ip_address).first()

    if not ip:
        raise HTTPException(status_code=404, detail="IP not found")

    return ip


@router.put("/{ip_address}", response_model=IPResponse)
async def update_ip(
    ip_address: str,
    ip_update: IPUpdate,
    db: Session = Depends(get_db)
):
    """
    Update an IP's metadata

    Args:
        ip_address: IP address
        ip_update: Fields to update
        db: Database session

    Returns:
        Updated IP
    """
    ip = db.query(IP).filter(IP.ip == ip_address).first()

    if not ip:
        raise HTTPException(status_code=404, detail="IP not found")

    # Update only the fields that were provided
    update_data = ip_update.model_dump(exclude_unset=True)
    for field, value in update_data.items():
        setattr(ip, field, value)

    db.commit()
    db.refresh(ip)

    return ip


@router.delete("/{ip_address}")
async def delete_ip(ip_address: str, db: Session = Depends(get_db)):
    """
    Delete an IP (and its history)

    Args:
        ip_address: IP address
        db: Database session

    Returns:
        Confirmation message
    """
    ip = db.query(IP).filter(IP.ip == ip_address).first()

    if not ip:
        raise HTTPException(status_code=404, detail="IP not found")

    db.delete(ip)
    db.commit()

    return {"message": f"IP {ip_address} deleted"}


@router.get("/{ip_address}/history", response_model=List[IPHistoryResponse])
async def get_ip_history(
    ip_address: str,
    hours: int = 24,
    db: Session = Depends(get_db)
):
    """
    Return the history of an IP

    Args:
        ip_address: IP address
        hours: Number of hours of history (default: 24h)
        db: Database session

    Returns:
        List of history events
    """
    # Check that the IP exists
    ip = db.query(IP).filter(IP.ip == ip_address).first()
    if not ip:
        raise HTTPException(status_code=404, detail="IP not found")

    # Compute the cutoff date
    since = datetime.utcnow() - timedelta(hours=hours)

    # Fetch the history
    history = db.query(IPHistory).filter(
        IPHistory.ip == ip_address,
        IPHistory.timestamp >= since
    ).order_by(desc(IPHistory.timestamp)).all()

    return history


@router.get("/stats/summary")
async def get_stats(db: Session = Depends(get_db)):
    """
    Return global network statistics

    Returns:
        Statistics (total, online, offline, known, unknown)
    """
    total = db.query(IP).count()
    online = db.query(IP).filter(IP.last_status == "online").count()
    offline = db.query(IP).filter(IP.last_status == "offline").count()
    known = db.query(IP).filter(IP.known == True).count()
    unknown = db.query(IP).filter(IP.known == False).count()

    return {
        "total": total,
        "online": online,
        "offline": offline,
        "known": known,
        "unknown": unknown
    }
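`update_ip` applies only the fields the client actually sent (`exclude_unset=True`), so omitted fields stay untouched instead of being overwritten with `None`. The apply step reduces to a `setattr` loop; a dependency-free sketch with a plain object (the `Record` class and its fields are illustrative):

```python
class Record:
    """Stand-in for an ORM row with a few editable fields."""
    def __init__(self):
        self.name = "printer"
        self.known = False
        self.location = "Office"


def apply_partial_update(obj, update_data):
    """Set only the fields present in update_data; leave the rest alone."""
    for field, value in update_data.items():
        setattr(obj, field, value)
    return obj


rec = apply_partial_update(Record(), {"known": True})
print(rec.known, rec.name)  # known changed, name untouched
```

With a Pydantic model, `model_dump(exclude_unset=True)` produces exactly such a dict of only the client-supplied fields.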
201
backend/app/routers/scan.py
Normal file
@@ -0,0 +1,201 @@
"""
API endpoints for controlling network scans
"""
from fastapi import APIRouter, Depends, BackgroundTasks
from sqlalchemy.orm import Session
from datetime import datetime, timedelta
from typing import Dict, Any

from backend.app.core.database import get_db
from backend.app.core.config import config_manager
from backend.app.models.ip import IP, IPHistory
from backend.app.services.network import NetworkScanner
from backend.app.services.websocket import ws_manager

router = APIRouter(prefix="/api/scan", tags=["Scan"])


async def perform_scan(db: Session):
    """
    Run a full network scan
    Async function intended for background tasks

    Args:
        db: Database session
    """
    try:
        print(f"[{datetime.now()}] Starting network scan...")

        # Notify scan start
        try:
            await ws_manager.broadcast_scan_start()
        except Exception as e:
            print(f"Broadcast start error (ignored): {e}")

        # Fetch the config
        config = config_manager.config
        print(f"[{datetime.now()}] Config loaded: {config.network.cidr}")

        # Initialize the scanner
        scanner = NetworkScanner(
            cidr=config.network.cidr,
            timeout=config.scan.timeout
        )

        # Expand the port specs into a list of integers
        port_list = []
        for port_range in config.ports.ranges:
            if '-' in port_range:
                start, end = map(int, port_range.split('-'))
                port_list.extend(range(start, end + 1))
            else:
                port_list.append(int(port_range))

        print(f"[{datetime.now()}] Ports to scan: {port_list}")

        # Fetch the known IPs
        known_ips = config.ip_classes
        print(f"[{datetime.now()}] Known IPs: {len(known_ips)}")

        # Launch the scan
        print(f"[{datetime.now()}] Launching scan (parallelism: {config.scan.parallel_pings})...")
        scan_results = await scanner.full_scan(
            known_ips=known_ips,
            port_list=port_list,
            max_concurrent=config.scan.parallel_pings
        )
        print(f"[{datetime.now()}] Scan finished: {len(scan_results)} IPs found")

        # Update the database
        stats = {
            "total": 0,
            "online": 0,
            "offline": 0,
            "new": 0,
            "updated": 0
        }

        for ip_address, ip_data in scan_results.items():
            stats["total"] += 1

            if ip_data["last_status"] == "online":
                stats["online"] += 1
            else:
                stats["offline"] += 1

            # Check whether the IP already exists
            existing_ip = db.query(IP).filter(IP.ip == ip_address).first()

            if existing_ip:
                # Update the existing IP
                old_status = existing_ip.last_status

                existing_ip.last_status = ip_data["last_status"]
                if ip_data["last_seen"]:
                    existing_ip.last_seen = ip_data["last_seen"]
                existing_ip.mac = ip_data.get("mac") or existing_ip.mac
                existing_ip.vendor = ip_data.get("vendor") or existing_ip.vendor
                existing_ip.hostname = ip_data.get("hostname") or existing_ip.hostname
                existing_ip.open_ports = ip_data.get("open_ports", [])

                # If the state changed, notify via WebSocket
                if old_status != ip_data["last_status"]:
                    await ws_manager.broadcast_ip_update({
                        "ip": ip_address,
                        "old_status": old_status,
                        "new_status": ip_data["last_status"]
                    })

                stats["updated"] += 1

            else:
                # Create a new IP
                new_ip = IP(
                    ip=ip_address,
                    name=ip_data.get("name"),
                    known=ip_data.get("known", False),
                    location=ip_data.get("location"),
                    host=ip_data.get("host"),
                    first_seen=datetime.utcnow(),
                    last_seen=ip_data.get("last_seen") or datetime.utcnow(),
                    last_status=ip_data["last_status"],
                    mac=ip_data.get("mac"),
                    vendor=ip_data.get("vendor"),
                    hostname=ip_data.get("hostname"),
                    open_ports=ip_data.get("open_ports", [])
                )
                db.add(new_ip)

                # Notify about the new IP
                await ws_manager.broadcast_new_ip({
                    "ip": ip_address,
                    "status": ip_data["last_status"],
                    "known": ip_data.get("known", False)
                })

                stats["new"] += 1

            # Append to the history
            history_entry = IPHistory(
                ip=ip_address,
                timestamp=datetime.utcnow(),
                status=ip_data["last_status"],
                open_ports=ip_data.get("open_ports", [])
            )
            db.add(history_entry)

        # Commit the changes
        db.commit()

        # Notify scan completion with stats
        await ws_manager.broadcast_scan_complete(stats)

        print(f"[{datetime.now()}] Scan finished: {stats}")

    except Exception as e:
        print(f"Error during scan: {e}")
        db.rollback()


@router.post("/start")
async def start_scan(background_tasks: BackgroundTasks, db: Session = Depends(get_db)):
    """
    Trigger an immediate network scan

    Returns:
        Confirmation message
    """
    # Run the scan in the background
    background_tasks.add_task(perform_scan, db)

    return {
        "message": "Network scan started",
        "timestamp": datetime.utcnow()
    }


@router.post("/cleanup-history")
async def cleanup_history(hours: int = 24, db: Session = Depends(get_db)):
    """
    Delete history entries older than the given number of hours

    Args:
        hours: Number of hours to keep (default: 24h)
        db: Database session

    Returns:
        Number of deleted entries
    """
    cutoff_date = datetime.utcnow() - timedelta(hours=hours)

    deleted = db.query(IPHistory).filter(
        IPHistory.timestamp < cutoff_date
    ).delete()

    db.commit()

    return {
        "message": "History cleaned",
        "deleted_entries": deleted,
        "older_than_hours": hours
    }
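`perform_scan` expands the configured port strings (`"22"`, `"8000-8100"`) into a flat list of integers before scanning. That parsing step as a standalone function:

```python
def expand_port_ranges(ranges):
    """Expand specs like "22" or "8000-8002" into a flat list of ports."""
    ports = []
    for spec in ranges:
        if '-' in spec:
            start, end = map(int, spec.split('-'))
            ports.extend(range(start, end + 1))  # range end is inclusive here
        else:
            ports.append(int(spec))
    return ports


print(expand_port_ranges(["22", "80", "8000-8002"]))
# [22, 80, 8000, 8001, 8002]
```

Note that `range(start, end + 1)` makes the upper bound inclusive, matching how users read "8000-8002".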
35
backend/app/routers/websocket.py
Normal file
@@ -0,0 +1,35 @@
"""
WebSocket endpoint for real-time notifications
"""
from fastapi import APIRouter, WebSocket, WebSocketDisconnect
from backend.app.services.websocket import ws_manager

router = APIRouter(tags=["WebSocket"])


@router.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    """
    WebSocket endpoint for real-time notifications

    Args:
        websocket: WebSocket connection
    """
    await ws_manager.connect(websocket)

    try:
        # Receive loop (keep-alive)
        while True:
            # Receive client messages (heartbeat)
            data = await websocket.receive_text()

            # Client commands could be handled here if needed
            # For now, just echo for keep-alive
            if data == "ping":
                await ws_manager.send_personal_message("pong", websocket)

    except WebSocketDisconnect:
        ws_manager.disconnect(websocket)
    except Exception as e:
        print(f"WebSocket error: {e}")
        ws_manager.disconnect(websocket)
7
backend/app/services/__init__.py
Normal file
@@ -0,0 +1,7 @@
"""
Network services for IPWatch
"""
from .network import NetworkScanner
from .scheduler import ScanScheduler

__all__ = ["NetworkScanner", "ScanScheduler"]
295
backend/app/services/network.py
Normal file
@@ -0,0 +1,295 @@
"""
Network modules for IP scanning, ping, ARP and port scans
Implements the scan workflow described in workflow-scan.md
"""
import asyncio
import ipaddress
import platform
import subprocess
import socket
from typing import List, Dict, Optional, Tuple
from datetime import datetime
import re

# Scapy for ARP
try:
    from scapy.all import ARP, Ether, srp
    SCAPY_AVAILABLE = True
except ImportError:
    SCAPY_AVAILABLE = False


class NetworkScanner:
    """Main network scanner"""

    def __init__(self, cidr: str, timeout: float = 1.0):
        """
        Initialize the network scanner

        Args:
            cidr: CIDR network (e.g. "192.168.1.0/24")
            timeout: Timeout for pings and connections (seconds)
        """
        self.cidr = cidr
        self.timeout = timeout
        self.network = ipaddress.ip_network(cidr, strict=False)

    def generate_ip_list(self) -> List[str]:
        """
        Generate the full IP list from the CIDR

        Returns:
            List of IP addresses as strings
        """
        return [str(ip) for ip in self.network.hosts()]

    async def ping(self, ip: str) -> bool:
        """
        Ping an IP address (async)

        Args:
            ip: IP address to ping

        Returns:
            True if the IP responds, False otherwise
        """
        # OS detection for the ping command
        system = platform.system().lower()
        count_param = '-n' if system == 'windows' else '-c'
        timeout_param = '-w' if system == 'windows' else '-W'
        # Windows expects milliseconds, Unix ping expects whole seconds (at least 1)
        timeout_value = str(int(self.timeout * 1000)) if system == 'windows' else str(max(1, int(self.timeout)))

        command = ['ping', count_param, '1', timeout_param, timeout_value, ip]

        try:
            # Run the ping asynchronously
            process = await asyncio.create_subprocess_exec(
                *command,
                stdout=asyncio.subprocess.DEVNULL,
                stderr=asyncio.subprocess.DEVNULL
            )
            await asyncio.wait_for(process.wait(), timeout=self.timeout + 1)
            return process.returncode == 0
        except Exception:
            return False

    async def ping_parallel(self, ip_list: List[str], max_concurrent: int = 50) -> Dict[str, bool]:
        """
        Ping multiple IPs in parallel

        Args:
            ip_list: List of IPs to ping
            max_concurrent: Maximum number of simultaneous pings

        Returns:
            Dictionary {ip: online_status}
        """
        results = {}
        semaphore = asyncio.Semaphore(max_concurrent)

        async def ping_with_semaphore(ip: str):
            async with semaphore:
                results[ip] = await self.ping(ip)

        # Launch all pings in parallel, bounded by the semaphore
        await asyncio.gather(*[ping_with_semaphore(ip) for ip in ip_list])

        return results

    def get_arp_table(self) -> Dict[str, Tuple[str, str]]:
        """
        Read the system ARP table

        Returns:
            Dictionary {ip: (mac, vendor)}
        """
        arp_data = {}

        if SCAPY_AVAILABLE:
            try:
                # Use Scapy for the ARP scan
                answered, _ = srp(
                    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=self.cidr),
                    timeout=2,
                    verbose=False
                )

                for sent, received in answered:
                    ip = received.psrc
                    mac = received.hwsrc
                    vendor = self._get_mac_vendor(mac)
                    arp_data[ip] = (mac, vendor)
            except Exception as e:
                print(f"ARP scan error with Scapy: {e}")
        else:
            # Fallback: parse the system ARP table
            try:
                if platform.system().lower() == 'windows':
                    output = subprocess.check_output(['arp', '-a'], text=True)
                    pattern = r'(\d+\.\d+\.\d+\.\d+)\s+([0-9a-fA-F-:]+)'
                else:
                    output = subprocess.check_output(['arp', '-n'], text=True)
                    pattern = r'(\d+\.\d+\.\d+\.\d+)\s+\w+\s+([0-9a-fA-F:]+)'

                # Build the host set once instead of per match
                network_hosts = {str(h) for h in self.network.hosts()}
                matches = re.findall(pattern, output)
                for ip, mac in matches:
                    if ip in network_hosts:
                        vendor = self._get_mac_vendor(mac)
                        arp_data[ip] = (mac, vendor)
            except Exception as e:
                print(f"Error reading ARP table: {e}")

        return arp_data

    def _get_mac_vendor(self, mac: str) -> str:
        """
        Look up the manufacturer from the MAC address
        Simplified for now - can be extended with a real OUI DB

        Args:
            mac: MAC address

        Returns:
            Vendor name or "Unknown"
        """
        # TODO: implement a full OUI lookup
        # Normalize the MAC for prefix comparison (strip separators)
        normalized = mac.upper().replace(':', '').replace('-', '')

        # Mini DB of common vendors (OUI prefixes)
        vendors = {
            "000C29": "VMware",
            "005056": "VMware",
            "080027": "VirtualBox",
            "DCA632": "Raspberry Pi",
            "B827EB": "Raspberry Pi",
        }

        for prefix, vendor in vendors.items():
            if normalized.startswith(prefix):
                return vendor

        return "Unknown"

    async def scan_ports(self, ip: str, ports: List[int]) -> List[int]:
        """
        Scan TCP ports on an IP

        Args:
            ip: Target IP address
            ports: List of ports to scan

        Returns:
            List of open ports
        """
        async def check_port(port: int) -> Optional[int]:
            try:
                # TCP connection attempt
                reader, writer = await asyncio.wait_for(
                    asyncio.open_connection(ip, port),
                    timeout=self.timeout
                )
                writer.close()
                await writer.wait_closed()
                return port
            except (OSError, asyncio.TimeoutError):
                return None

        # Scan all ports in parallel
        results = await asyncio.gather(*[check_port(p) for p in ports])
        open_ports = [p for p in results if p is not None]

        return open_ports

    def get_hostname(self, ip: str) -> Optional[str]:
        """
        Reverse DNS lookup to get the hostname

        Args:
            ip: IP address

        Returns:
            Hostname or None
        """
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)
            return hostname
        except OSError:
            return None

    def classify_ip_status(self, is_online: bool, is_known: bool) -> str:
        """
        Classify the state of an IP

        Args:
            is_online: IP is online
            is_known: IP is known in the config

        Returns:
            State: "online", "offline"
        """
        # Note: is_known is currently unused here
        return "online" if is_online else "offline"

    async def full_scan(self, known_ips: Dict[str, Dict], port_list: List[int], max_concurrent: int = 50) -> Dict[str, Dict]:
        """
        Full network scan following workflow-scan.md

        Args:
            known_ips: Dictionary of known IPs from the config
            port_list: List of ports to scan
            max_concurrent: Max simultaneous pings

        Returns:
            Dictionary of scan results for each IP
        """
        results = {}

        # 1. Generate the IP list from the CIDR
|
||||
ip_list = self.generate_ip_list()
|
||||
|
||||
# 2. Ping parallélisé
|
||||
ping_results = await self.ping_parallel(ip_list, max_concurrent)
|
||||
|
||||
# 3. ARP + MAC vendor
|
||||
arp_table = self.get_arp_table()
|
||||
|
||||
# 4. Pour chaque IP
|
||||
for ip in ip_list:
|
||||
is_online = ping_results.get(ip, False)
|
||||
is_known = ip in known_ips
|
||||
|
||||
ip_data = {
|
||||
"ip": ip,
|
||||
"known": is_known,
|
||||
"last_status": self.classify_ip_status(is_online, is_known),
|
||||
"last_seen": datetime.utcnow() if is_online else None,
|
||||
"mac": None,
|
||||
"vendor": None,
|
||||
"hostname": None,
|
||||
"open_ports": [],
|
||||
}
|
||||
|
||||
# Ajouter infos connues
|
||||
if is_known:
|
||||
ip_data.update(known_ips[ip])
|
||||
|
||||
# Infos ARP
|
||||
if ip in arp_table:
|
||||
mac, vendor = arp_table[ip]
|
||||
ip_data["mac"] = mac
|
||||
ip_data["vendor"] = vendor
|
||||
|
||||
# Hostname
|
||||
if is_online:
|
||||
hostname = self.get_hostname(ip)
|
||||
if hostname:
|
||||
ip_data["hostname"] = hostname
|
||||
|
||||
# 5. Port scan (uniquement si online)
|
||||
if is_online and port_list:
|
||||
open_ports = await self.scan_ports(ip, port_list)
|
||||
ip_data["open_ports"] = open_ports
|
||||
|
||||
results[ip] = ip_data
|
||||
|
||||
return results
|
||||
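The vendor lookup above only matches if the probed MAC is normalized the same way as the dictionary keys (uppercase, separators stripped) - comparing a colon-separated MAC against a colon-free prefix never matches. A minimal standalone sketch of that normalization; the `lookup_vendor` helper and `OUI_VENDORS` table are illustrative, not part of IPWatch:

```python
# Illustrative sketch of OUI-prefix vendor matching.
OUI_VENDORS = {
    "00:0C:29": "VMware",
    "B8:27:EB": "Raspberry Pi",
}

def lookup_vendor(mac: str) -> str:
    # Normalize: uppercase, drop ':' and '-' separators before comparing
    normalized = mac.upper().replace(":", "").replace("-", "")
    for prefix, vendor in OUI_VENDORS.items():
        if normalized.startswith(prefix.replace(":", "")):
            return vendor
    return "Unknown"

print(lookup_vendor("b8:27:eb:12:34:56"))  # → Raspberry Pi
print(lookup_vendor("aa-bb-cc-dd-ee-ff"))  # → Unknown
```

The same normalization must be applied to both sides; matching raw colon-separated strings against stripped prefixes is the bug fixed in `_get_mac_vendor` above.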
103
backend/app/services/scheduler.py
Normal file
@@ -0,0 +1,103 @@
"""
APScheduler scheduler for periodic network scans
"""
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.triggers.interval import IntervalTrigger
from datetime import datetime
from typing import Callable


class ScanScheduler:
    """Manager for scheduled scan tasks"""

    def __init__(self):
        """Initialize the scheduler"""
        self.scheduler = AsyncIOScheduler()
        self.is_running = False

    def start(self):
        """Start the scheduler"""
        if not self.is_running:
            self.scheduler.start()
            self.is_running = True
            print(f"[{datetime.now()}] Scheduler started")

    def stop(self):
        """Stop the scheduler"""
        if self.is_running:
            self.scheduler.shutdown()
            self.is_running = False
            print(f"[{datetime.now()}] Scheduler stopped")

    def add_ping_scan_job(self, scan_function: Callable, interval_seconds: int = 60):
        """
        Add a periodic ping scan task.

        Args:
            scan_function: Async function to execute
            interval_seconds: Interval in seconds
        """
        self.scheduler.add_job(
            scan_function,
            trigger=IntervalTrigger(seconds=interval_seconds),
            id='ping_scan',
            name='Periodic ping scan',
            replace_existing=True
        )
        print(f"ping_scan task configured: every {interval_seconds}s")

    def add_port_scan_job(self, scan_function: Callable, interval_seconds: int = 300):
        """
        Add a periodic port scan task.

        Args:
            scan_function: Async function to execute
            interval_seconds: Interval in seconds
        """
        self.scheduler.add_job(
            scan_function,
            trigger=IntervalTrigger(seconds=interval_seconds),
            id='port_scan',
            name='Periodic port scan',
            replace_existing=True
        )
        print(f"port_scan task configured: every {interval_seconds}s")

    def add_cleanup_job(self, cleanup_function: Callable, interval_hours: int = 1):
        """
        Add a history cleanup task.

        Args:
            cleanup_function: Async cleanup function
            interval_hours: Interval in hours
        """
        self.scheduler.add_job(
            cleanup_function,
            trigger=IntervalTrigger(hours=interval_hours),
            id='history_cleanup',
            name='History cleanup',
            replace_existing=True
        )
        print(f"cleanup task configured: every {interval_hours}h")

    def remove_job(self, job_id: str):
        """
        Remove a scheduled task.

        Args:
            job_id: Task ID
        """
        try:
            self.scheduler.remove_job(job_id)
            print(f"Task {job_id} removed")
        except Exception as e:
            print(f"Error removing task {job_id}: {e}")

    def get_jobs(self):
        """Return the list of scheduled tasks"""
        return self.scheduler.get_jobs()


# Global scheduler instance
scan_scheduler = ScanScheduler()
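Each `IntervalTrigger` job above boils down to "await this coroutine every N seconds". A dependency-free sketch of that pattern with plain asyncio; the `run_every` helper, `fake_scan`, and tick counts are illustrative stand-ins, not IPWatch code:

```python
import asyncio

async def run_every(interval: float, coro_factory, ticks: int) -> int:
    """Await coro_factory() every `interval` seconds, `ticks` times."""
    count = 0
    for _ in range(ticks):
        await coro_factory()  # stand-in for a ping or port scan
        count += 1
        await asyncio.sleep(interval)
    return count

async def fake_scan():
    pass  # a real job would scan the network here

ran = asyncio.run(run_every(0.01, fake_scan, 3))
print(ran)  # → 3
```

APScheduler adds job identity, replacement, and shutdown handling on top of this loop, which is why the project uses it rather than hand-rolled sleeps.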
125
backend/app/services/websocket.py
Normal file
@@ -0,0 +1,125 @@
"""
WebSocket manager for real-time notifications
"""
from fastapi import WebSocket
from typing import List, Dict, Any
import json
from datetime import datetime


class WebSocketManager:
    """WebSocket connection manager"""

    def __init__(self):
        """Initialize the manager"""
        self.active_connections: List[WebSocket] = []

    async def connect(self, websocket: WebSocket):
        """
        Accept a new WebSocket connection.

        Args:
            websocket: WebSocket instance
        """
        await websocket.accept()
        self.active_connections.append(websocket)
        print(f"[{datetime.now()}] New WebSocket connection. Total: {len(self.active_connections)}")

    def disconnect(self, websocket: WebSocket):
        """
        Disconnect a WebSocket client.

        Args:
            websocket: WebSocket instance to disconnect
        """
        if websocket in self.active_connections:
            self.active_connections.remove(websocket)
            print(f"[{datetime.now()}] WebSocket disconnected. Total: {len(self.active_connections)}")

    async def send_personal_message(self, message: str, websocket: WebSocket):
        """
        Send a message to a specific client.

        Args:
            message: Message to send
            websocket: Target client
        """
        try:
            await websocket.send_text(message)
        except Exception as e:
            print(f"Error sending personal message: {e}")

    async def broadcast(self, message: Dict[str, Any]):
        """
        Broadcast a message to all connected clients.

        Args:
            message: Message dictionary (will be serialized to JSON)
        """
        # Add a timestamp
        message["timestamp"] = datetime.utcnow().isoformat()

        json_message = json.dumps(message)

        # Connections to drop (already disconnected)
        disconnected = []

        for connection in self.active_connections:
            try:
                await connection.send_text(json_message)
            except Exception as e:
                print(f"Broadcast error: {e}")
                disconnected.append(connection)

        # Clean up dead connections
        for conn in disconnected:
            self.disconnect(conn)

    async def broadcast_scan_start(self):
        """Notify that a scan has started"""
        await self.broadcast({
            "type": "scan_start",
            "message": "Network scan started"
        })

    async def broadcast_scan_complete(self, stats: Dict[str, int]):
        """
        Notify that a scan has finished, with statistics.

        Args:
            stats: Scan statistics (total, online, offline, etc.)
        """
        await self.broadcast({
            "type": "scan_complete",
            "message": "Network scan finished",
            "stats": stats
        })

    async def broadcast_ip_update(self, ip_data: Dict[str, Any]):
        """
        Notify that an IP's state changed.

        Args:
            ip_data: Updated IP data
        """
        await self.broadcast({
            "type": "ip_update",
            "data": ip_data
        })

    async def broadcast_new_ip(self, ip_data: Dict[str, Any]):
        """
        Notify that a new IP was detected.

        Args:
            ip_data: New IP data
        """
        await self.broadcast({
            "type": "new_ip",
            "data": ip_data,
            "message": f"New IP detected: {ip_data.get('ip')}"
        })


# Global WebSocket manager instance
ws_manager = WebSocketManager()
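The broadcast loop above collects failed sends into a separate list and prunes them only after iterating - removing from `active_connections` mid-loop would skip clients. A self-contained sketch of that pattern; the `FakeConn` class is an illustrative stand-in for a WebSocket:

```python
import asyncio

class FakeConn:
    """Illustrative stand-in for a WebSocket connection."""
    def __init__(self, alive: bool):
        self.alive = alive
        self.received = []

    async def send_text(self, msg: str):
        if not self.alive:
            raise ConnectionError("client gone")
        self.received.append(msg)

async def broadcast(connections: list, msg: str) -> list:
    dead = []
    for conn in connections:
        try:
            await conn.send_text(msg)
        except Exception:
            dead.append(conn)  # collect, don't remove while iterating
    # Prune only after the loop is done
    for conn in dead:
        connections.remove(conn)
    return connections

conns = [FakeConn(True), FakeConn(False), FakeConn(True)]
remaining = asyncio.run(broadcast(conns, "ping"))
print(len(remaining))  # → 2
```

After one broadcast, the dead connection is gone and both live clients received the message exactly once.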
16
backend/requirements.txt
Normal file
@@ -0,0 +1,16 @@
fastapi==0.109.0
uvicorn[standard]==0.27.0
sqlalchemy==2.0.25
pydantic==2.5.3
pydantic-settings==2.1.0
python-multipart==0.0.6
websockets==12.0
apscheduler==3.10.4
pyyaml==6.0.1
aiosqlite==0.19.0
python-nmap==0.7.1
scapy==2.5.0
pytest==7.4.4
pytest-asyncio==0.23.3
httpx==0.26.0
config.yaml
Normal file
89
config.yaml
Normal file
@@ -0,0 +1,89 @@
|
# IPWatch configuration
# Based on consigne-parametrage.md

app:
  name: "IPWatch"
  version: "1.0.0"
  debug: true

network:
  cidr: "10.0.0.0/22"
  gateway: "10.0.0.1"
  dns:
    - "8.8.8.8"
    - "8.8.4.4"

  # Subnets organized into sections
  subnets:
    - name: "static_vm"
      cidr: "10.0.0.0/24"
      start: "10.0.0.1"
      end: "10.0.0.255"
      description: "Static virtual machines"
    - name: "dhcp"
      cidr: "10.0.1.0/24"
      start: "10.0.1.1"
      end: "10.0.1.255"
      description: "DHCP"
    - name: "iot"
      cidr: "10.0.2.0/24"
      start: "10.0.2.1"
      end: "10.0.2.255"
      description: "IoT"

# Known IPs with metadata
ip_classes:
  "10.0.0.1":
    name: "Gateway"
    location: "Network"
    host: "Router"

scan:
  ping_interval: 600        # Ping scan interval (seconds)
  port_scan_interval: 1200  # Port scan interval (seconds)
  parallel_pings: 100       # Number of simultaneous pings
  timeout: 1.0              # Network timeout (seconds)

ports:
  ranges:
    - "22"    # SSH
    - "80"    # HTTP
    - "443"   # HTTPS
    - "3389"  # RDP
    - "8080"  # Alternative HTTP
    - "3306"  # MySQL
    - "5432"  # PostgreSQL

locations:
  - "Bureau"
  - "Salon"
  - "Comble"
  - "Bureau RdC"

# Location is inherited from the host; the config should be adapted accordingly
hosts:
  - "physique"
  - "elitedesk"
  - "m710Q"
  - "HP Proliant"
  - "pve MSI"
  - "HP Proxmox"

history:
  retention_hours: 24  # Keep 24h of history

ui:
  offline_transparency: 0.5  # Transparency of offline IPs
  show_mac: true
  show_vendor: true

colors:
  free: "#75715E"            # Free IP (Monokai gray)
  online_known: "#A6E22E"    # Online + known (green)
  online_unknown: "#66D9EF"  # Online + unknown (cyan)
  offline_known: "#F92672"   # Offline + known (pink/red)
  offline_unknown: "#AE81FF" # Offline + unknown (violet)

database:
  path: "./data/db.sqlite"
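The /22 supernet in `network.cidr` covers the three /24 sections declared under `subnets`. A quick stdlib check of that containment, purely illustrative:

```python
import ipaddress

supernet = ipaddress.ip_network("10.0.0.0/22")
sections = ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]

# Every configured section must sit inside the scanned supernet
for cidr in sections:
    assert ipaddress.ip_network(cidr).subnet_of(supernet)

# A /22 holds four /24s, so one /24 (10.0.3.0/24) remains unassigned
print(len(list(supernet.subnets(new_prefix=24))))  # → 4
```

Running a check like this at config-load time would catch a section whose CIDR drifts outside the scan range.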
27
consigne-design_webui.md
Normal file
@@ -0,0 +1,27 @@
# consigne-design_webui.md

## Theme
Monokai dark, strong contrasts, rounded borders.

## General layout
3 columns:
- left: IP details
- center: IP grid + legend + classes
- right: new detections

## IP states
Colors, solid vs. dashed border for online/offline, halo while a ping is in progress.

## Components
- Header
- Left panel
- IP grid
- Right panel
- Settings tab

## Interactions
- selecting an IP cell
- clicking a new IP
- checkbox filters
- ping animation
- offline transparency
21
consigne-parametrage.md
Normal file
@@ -0,0 +1,21 @@
# consigne-parametrage.md

This document describes all the rules of the YAML file.

## Sections
- app
- network
- ip_classes
- scan
- ports
- locations
- hosts
- history
- ui
- colors
- network_advanced
- filters
- database

## Complete example
(… full YAML spec as defined previously …)
40
docker-compose.yml
Normal file
@@ -0,0 +1,40 @@
services:
  ipwatch:
    build: .
    container_name: ipwatch
    restart: unless-stopped

    # Host networking for full access to the local network
    network_mode: host

    # Privileges for network scanning (ping, ARP)
    privileged: true
    cap_add:
      - NET_ADMIN
      - NET_RAW

    volumes:
      # Configuration volume
      - ./config.yaml:/app/config.yaml:ro

      # Database volume
      - ./data:/app/data

      # Log volume (optional)
      - ./logs:/app/logs

    environment:
      - TZ=Europe/Paris

    # Healthcheck
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8080/health')"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

# Named volumes, created if needed
volumes:
  ipwatch-data:
  ipwatch-logs:
13
frontend/index.html
Normal file
@@ -0,0 +1,13 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <link rel="icon" type="image/svg+xml" href="/vite.svg">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>IPWatch - Network Scanner</title>
</head>
<body>
  <div id="app"></div>
  <script type="module" src="/src/main.js"></script>
</body>
</html>
23
frontend/package.json
Normal file
@@ -0,0 +1,23 @@
{
  "name": "ipwatch-frontend",
  "version": "1.0.0",
  "private": true,
  "type": "module",
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "preview": "vite preview"
  },
  "dependencies": {
    "vue": "^3.4.15",
    "pinia": "^2.1.7",
    "axios": "^1.6.5"
  },
  "devDependencies": {
    "@vitejs/plugin-vue": "^5.0.3",
    "vite": "^5.0.11",
    "tailwindcss": "^3.4.1",
    "autoprefixer": "^10.4.17",
    "postcss": "^8.4.33"
  }
}
6
frontend/postcss.config.js
Normal file
@@ -0,0 +1,6 @@
export default {
  plugins: {
    tailwindcss: {},
    autoprefixer: {},
  },
}
48
frontend/src/App.vue
Normal file
@@ -0,0 +1,48 @@
<template>
  <div class="h-screen flex flex-col bg-monokai-bg">
    <!-- Header -->
    <AppHeader />

    <!-- 3-column layout per consigne-design_webui.md -->
    <div class="flex-1 flex overflow-hidden">
      <!-- Left column: IP details -->
      <div class="w-80 flex-shrink-0">
        <IPDetails />
      </div>

      <!-- Center column: IP grid organized as a tree -->
      <div class="flex-1">
        <IPGridTree />
      </div>

      <!-- Right column: new detections -->
      <div class="w-80 flex-shrink-0">
        <NewDetections />
      </div>
    </div>
  </div>
</template>

<script setup>
import { onMounted, onUnmounted } from 'vue'
import { useIPStore } from '@/stores/ipStore'
import AppHeader from '@/components/AppHeader.vue'
import IPDetails from '@/components/IPDetails.vue'
import IPGridTree from '@/components/IPGridTree.vue'
import NewDetections from '@/components/NewDetections.vue'

const ipStore = useIPStore()

onMounted(async () => {
  // Load initial data
  await ipStore.fetchIPs()

  // Connect the WebSocket
  ipStore.connectWebSocket()
})

onUnmounted(() => {
  // Disconnect the WebSocket
  ipStore.disconnectWebSocket()
})
</script>
147
frontend/src/assets/main.css
Normal file
@@ -0,0 +1,147 @@
/* Main IPWatch styles - Monokai theme */
@tailwind base;
@tailwind components;
@tailwind utilities;

/* Monokai CSS variables */
:root {
  --monokai-bg: #272822;
  --monokai-text: #F8F8F2;
  --monokai-comment: #75715E;
  --monokai-green: #A6E22E;
  --monokai-pink: #F92672;
  --monokai-cyan: #66D9EF;
  --monokai-purple: #AE81FF;
  --monokai-yellow: #E6DB74;
  --monokai-orange: #FD971F;
}

/* Base */
body {
  margin: 0;
  padding: 0;
  background-color: var(--monokai-bg);
  color: var(--monokai-text);
  font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
}

/* Ping halo animation */
@keyframes ping-pulse {
  0% {
    box-shadow: 0 0 0 0 rgba(102, 217, 239, 0.7);
  }
  50% {
    box-shadow: 0 0 20px 10px rgba(102, 217, 239, 0.3);
  }
  100% {
    box-shadow: 0 0 0 0 rgba(102, 217, 239, 0);
  }
}

.ping-animation {
  animation: ping-pulse 1.5s ease-in-out infinite;
}

/* Compact IP cells - minimal version */
.ip-cell-compact {
  @apply rounded cursor-pointer transition-all duration-200 relative;
  border: 2px solid;
  width: 50px;
  height: 50px;
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  font-size: 14px;
}

/* IP cells - states per guidelines-css.md */
.ip-cell {
  @apply rounded-lg p-3 cursor-pointer transition-all duration-200;
  border: 2px solid;
  min-height: 80px;
  display: flex;
  flex-direction: column;
  justify-content: space-between;
}

/* Free IP */
.ip-cell.free,
.ip-cell-compact.free {
  background-color: rgba(117, 113, 94, 0.2);
  border-color: var(--monokai-comment);
  color: var(--monokai-comment);
}

/* Online + known IP (green) */
.ip-cell.online-known,
.ip-cell-compact.online-known {
  background-color: rgba(166, 226, 46, 0.15);
  border-color: var(--monokai-green);
  border-style: solid;
  color: var(--monokai-text);
}

.ip-cell.online-known:hover,
.ip-cell-compact.online-known:hover {
  background-color: rgba(166, 226, 46, 0.25);
}

/* Online + unknown IP (cyan) */
.ip-cell.online-unknown,
.ip-cell-compact.online-unknown {
  background-color: rgba(102, 217, 239, 0.15);
  border-color: var(--monokai-cyan);
  border-style: solid;
  color: var(--monokai-text);
}

.ip-cell.online-unknown:hover,
.ip-cell-compact.online-unknown:hover {
  background-color: rgba(102, 217, 239, 0.25);
}

/* Offline + known IP (pink) */
.ip-cell.offline-known,
.ip-cell-compact.offline-known {
  background-color: rgba(249, 38, 114, 0.1);
  border-color: var(--monokai-pink);
  border-style: dashed;
  color: var(--monokai-text);
  opacity: 0.5;
}

/* Offline + unknown IP (violet) */
.ip-cell.offline-unknown,
.ip-cell-compact.offline-unknown {
  background-color: rgba(174, 129, 255, 0.1);
  border-color: var(--monokai-purple);
  border-style: dashed;
  color: var(--monokai-text);
  opacity: 0.5;
}

/* Selection */
.ip-cell.selected {
  box-shadow: 0 0 20px rgba(230, 219, 116, 0.5);
  border-color: var(--monokai-yellow);
}

/* Custom Monokai scrollbar */
::-webkit-scrollbar {
  width: 10px;
  height: 10px;
}

::-webkit-scrollbar-track {
  background: #1e1f1c;
}

::-webkit-scrollbar-thumb {
  background: var(--monokai-comment);
  border-radius: 5px;
}

::-webkit-scrollbar-thumb:hover {
  background: var(--monokai-cyan);
}
68
frontend/src/components/AppHeader.vue
Normal file
@@ -0,0 +1,68 @@
<template>
  <header class="bg-monokai-bg border-b-2 border-monokai-comment p-4">
    <div class="flex items-center justify-between">
      <!-- Logo and title -->
      <div class="flex items-center gap-4">
        <h1 class="text-3xl font-bold text-monokai-green">IPWatch</h1>
        <span class="text-monokai-comment">Network Scanner</span>
      </div>

      <!-- Stats and controls -->
      <div class="flex items-center gap-6">
        <!-- Statistics -->
        <div class="flex gap-4 text-sm">
          <div class="flex items-center gap-2">
            <span class="text-monokai-comment">Total:</span>
            <span class="text-monokai-text font-bold">{{ stats.total }}</span>
          </div>
          <div class="flex items-center gap-2">
            <span class="w-3 h-3 rounded-full bg-monokai-green"></span>
            <span class="text-monokai-text">{{ stats.online }}</span>
          </div>
          <div class="flex items-center gap-2">
            <span class="w-3 h-3 rounded-full bg-monokai-pink opacity-50"></span>
            <span class="text-monokai-text">{{ stats.offline }}</span>
          </div>
        </div>

        <!-- Scan button -->
        <button
          @click="triggerScan"
          :disabled="loading"
          class="px-4 py-2 rounded bg-monokai-cyan text-monokai-bg font-bold hover:bg-monokai-green transition-colors disabled:opacity-50"
        >
          {{ loading ? 'Scanning...' : 'Start Scan' }}
        </button>

        <!-- WebSocket indicator -->
        <div class="flex items-center gap-2">
          <div
            :class="[
              'w-2 h-2 rounded-full',
              wsConnected ? 'bg-monokai-green' : 'bg-monokai-pink'
            ]"
          ></div>
          <span class="text-sm text-monokai-comment">
            {{ wsConnected ? 'Connected' : 'Disconnected' }}
          </span>
        </div>
      </div>
    </div>
  </header>
</template>

<script setup>
import { storeToRefs } from 'pinia'
import { useIPStore } from '@/stores/ipStore'

const ipStore = useIPStore()
const { stats, loading, wsConnected } = storeToRefs(ipStore)

async function triggerScan() {
  try {
    await ipStore.startScan()
  } catch (err) {
    console.error('Error starting scan:', err)
  }
}
</script>
87
frontend/src/components/IPCell.vue
Normal file
@@ -0,0 +1,87 @@
<template>
  <div
    :class="[
      'ip-cell-compact',
      cellClass,
      { 'selected': isSelected },
      { 'ping-animation': isPinging }
    ]"
    @click="selectThisIP"
    :title="getTooltip"
  >
    <!-- Show only the last octet -->
    <div class="font-mono font-bold text-2xl">
      {{ lastOctet }}
    </div>

    <!-- Very short name if known -->
    <div v-if="ip.name" class="text-xs opacity-75 truncate mt-1">
      {{ ip.name }}
    </div>

    <!-- Open-ports indicator (small badge) -->
    <div v-if="ip.open_ports && ip.open_ports.length > 0"
         class="absolute top-1 right-1 w-2 h-2 rounded-full bg-monokai-cyan">
    </div>
  </div>
</template>

<script setup>
import { computed } from 'vue'
import { storeToRefs } from 'pinia'
import { useIPStore } from '@/stores/ipStore'

const props = defineProps({
  ip: {
    type: Object,
    required: true
  },
  isPinging: {
    type: Boolean,
    default: false
  }
})

const ipStore = useIPStore()
const { selectedIP } = storeToRefs(ipStore)

const isSelected = computed(() => {
  return selectedIP.value?.ip === props.ip.ip
})

// Extract the last octet of the IP
const lastOctet = computed(() => {
  const parts = props.ip.ip.split('.')
  return parts[parts.length - 1]
})

// Tooltip with full details
const getTooltip = computed(() => {
  let tooltip = `${props.ip.ip}`
  if (props.ip.name) tooltip += ` - ${props.ip.name}`
  if (props.ip.hostname) tooltip += `\nHostname: ${props.ip.hostname}`
  if (props.ip.mac) tooltip += `\nMAC: ${props.ip.mac}`
  if (props.ip.vendor) tooltip += ` (${props.ip.vendor})`
  if (props.ip.open_ports && props.ip.open_ports.length > 0) {
    tooltip += `\nPorts: ${props.ip.open_ports.join(', ')}`
  }
  return tooltip
})

const cellClass = computed(() => {
  // Determine the class from the state (guidelines-css.md)
  if (!props.ip.last_status) {
    return 'free'
  }

  if (props.ip.last_status === 'online') {
    return props.ip.known ? 'online-known' : 'online-unknown'
  } else {
    return props.ip.known ? 'offline-known' : 'offline-unknown'
  }
})

function selectThisIP() {
  ipStore.selectIP(props.ip)
}
</script>
207
frontend/src/components/IPDetails.vue
Normal file
@@ -0,0 +1,207 @@
<template>
  <div class="h-full flex flex-col bg-monokai-bg border-r border-monokai-comment">
    <!-- Header -->
    <div class="p-4 border-b border-monokai-comment">
      <h2 class="text-xl font-bold text-monokai-cyan">IP Details</h2>
    </div>

    <!-- Content -->
    <div class="flex-1 overflow-auto p-4">
      <div v-if="selectedIP" class="space-y-4">
        <!-- IP address -->
        <div>
          <div class="text-sm text-monokai-comment mb-1">IP Address</div>
          <div class="text-2xl font-mono font-bold text-monokai-green">
            {{ selectedIP.ip }}
          </div>
        </div>

        <!-- State -->
        <div>
          <div class="text-sm text-monokai-comment mb-1">State</div>
          <div class="flex items-center gap-2">
            <div
              :class="[
                'w-3 h-3 rounded-full',
                selectedIP.last_status === 'online' ? 'bg-monokai-green' : 'bg-monokai-pink'
              ]"
            ></div>
            <span class="text-monokai-text capitalize">
              {{ selectedIP.last_status || 'Unknown' }}
            </span>
          </div>
        </div>

        <!-- Edit form -->
        <div class="space-y-3 pt-4 border-t border-monokai-comment">
          <!-- Name -->
          <div>
            <label class="text-sm text-monokai-comment mb-1 block">Name</label>
            <input
              v-model="formData.name"
              type="text"
              class="w-full px-3 py-2 bg-monokai-bg border border-monokai-comment rounded text-monokai-text"
              placeholder="e.g. Main Server"
            />
          </div>

          <!-- Known -->
          <div>
            <label class="flex items-center gap-2 cursor-pointer">
              <input
                v-model="formData.known"
                type="checkbox"
                class="form-checkbox"
              />
              <span class="text-sm text-monokai-text">Known IP</span>
            </label>
          </div>

          <!-- Location -->
          <div>
            <label class="text-sm text-monokai-comment mb-1 block">Location</label>
            <input
              v-model="formData.location"
              type="text"
              class="w-full px-3 py-2 bg-monokai-bg border border-monokai-comment rounded text-monokai-text"
              placeholder="e.g. Office"
            />
          </div>

          <!-- Host type -->
          <div>
            <label class="text-sm text-monokai-comment mb-1 block">Host type</label>
            <input
              v-model="formData.host"
              type="text"
              class="w-full px-3 py-2 bg-monokai-bg border border-monokai-comment rounded text-monokai-text"
              placeholder="e.g. PC, Server, Printer"
            />
          </div>

          <!-- Buttons -->
          <div class="flex gap-2 pt-2">
            <button
              @click="saveChanges"
              class="px-4 py-2 bg-monokai-green text-monokai-bg rounded font-bold hover:bg-monokai-cyan transition-colors"
            >
              Save
            </button>
            <button
              @click="resetForm"
              class="px-4 py-2 bg-monokai-comment text-monokai-bg rounded hover:bg-monokai-text transition-colors"
            >
              Cancel
            </button>
          </div>
        </div>

        <!-- Network information -->
        <div class="pt-4 border-t border-monokai-comment space-y-2">
          <h3 class="text-monokai-cyan font-bold mb-2">Network information</h3>

          <div v-if="selectedIP.mac">
            <div class="text-sm text-monokai-comment">MAC</div>
            <div class="text-monokai-text font-mono">{{ selectedIP.mac }}</div>
          </div>

          <div v-if="selectedIP.vendor">
            <div class="text-sm text-monokai-comment">Vendor</div>
            <div class="text-monokai-text">{{ selectedIP.vendor }}</div>
          </div>

          <div v-if="selectedIP.hostname">
            <div class="text-sm text-monokai-comment">Hostname</div>
            <div class="text-monokai-text font-mono">{{ selectedIP.hostname }}</div>
          </div>

          <div v-if="selectedIP.open_ports && selectedIP.open_ports.length > 0">
            <div class="text-sm text-monokai-comment">Open ports</div>
            <div class="flex flex-wrap gap-1 mt-1">
              <span
                v-for="port in selectedIP.open_ports"
                :key="port"
                class="px-2 py-1 bg-monokai-cyan/20 text-monokai-cyan rounded text-xs font-mono"
              >
                {{ port }}
              </span>
            </div>
          </div>
        </div>

        <!-- Timestamps -->
        <div class="pt-4 border-t border-monokai-comment space-y-2 text-sm">
          <div v-if="selectedIP.first_seen">
            <span class="text-monokai-comment">First seen:</span>
            <span class="text-monokai-text ml-2">{{ formatDate(selectedIP.first_seen) }}</span>
          </div>
          <div v-if="selectedIP.last_seen">
            <span class="text-monokai-comment">Last seen:</span>
            <span class="text-monokai-text ml-2">{{ formatDate(selectedIP.last_seen) }}</span>
          </div>
        </div>
      </div>

      <!-- Placeholder when nothing is selected -->
      <div v-else class="text-center text-monokai-comment mt-10">
        <p>Select an IP to view its details</p>
      </div>
    </div>
  </div>
</template>

<script setup>
import { ref, watch } from 'vue'
import { storeToRefs } from 'pinia'
import { useIPStore } from '@/stores/ipStore'

const ipStore = useIPStore()
const { selectedIP } = storeToRefs(ipStore)

const formData = ref({
  name: '',
  known: false,
  location: '',
  host: ''
})

// Refresh the form when the selected IP changes
watch(selectedIP, (newIP) => {
  if (newIP) {
    formData.value = {
      name: newIP.name || '',
      known: newIP.known || false,
      location: newIP.location || '',
      host: newIP.host || ''
    }
  }
}, { immediate: true })

function resetForm() {
|
||||
if (selectedIP.value) {
|
||||
formData.value = {
|
||||
name: selectedIP.value.name || '',
|
||||
known: selectedIP.value.known || false,
|
||||
location: selectedIP.value.location || '',
|
||||
host: selectedIP.value.host || ''
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
async function saveChanges() {
|
||||
if (!selectedIP.value) return
|
||||
|
||||
try {
|
||||
await ipStore.updateIP(selectedIP.value.ip, formData.value)
|
||||
console.log('IP mise à jour')
|
||||
} catch (err) {
|
||||
console.error('Erreur mise à jour IP:', err)
|
||||
}
|
||||
}
|
||||
|
||||
function formatDate(dateString) {
|
||||
if (!dateString) return 'N/A'
|
||||
const date = new Date(dateString)
|
||||
return date.toLocaleString('fr-FR')
|
||||
}
|
||||
</script>
|
||||
79 frontend/src/components/IPGrid.vue Normal file
@@ -0,0 +1,79 @@
<template>
  <div class="flex flex-col h-full">
    <!-- Filters -->
    <div class="bg-monokai-bg border-b border-monokai-comment p-3 flex gap-4 flex-wrap">
      <label class="flex items-center gap-2 cursor-pointer">
        <input type="checkbox" v-model="filters.showOnline" class="form-checkbox" />
        <span class="text-sm text-monokai-text">En ligne</span>
      </label>
      <label class="flex items-center gap-2 cursor-pointer">
        <input type="checkbox" v-model="filters.showOffline" class="form-checkbox" />
        <span class="text-sm text-monokai-text">Hors ligne</span>
      </label>
      <label class="flex items-center gap-2 cursor-pointer">
        <input type="checkbox" v-model="filters.showKnown" class="form-checkbox" />
        <span class="text-sm text-monokai-text">Connues</span>
      </label>
      <label class="flex items-center gap-2 cursor-pointer">
        <input type="checkbox" v-model="filters.showUnknown" class="form-checkbox" />
        <span class="text-sm text-monokai-text">Inconnues</span>
      </label>
      <label class="flex items-center gap-2 cursor-pointer">
        <input type="checkbox" v-model="filters.showFree" class="form-checkbox" />
        <span class="text-sm text-monokai-text">Libres</span>
      </label>
    </div>

    <!-- IP grid -->
    <div class="flex-1 overflow-auto p-4">
      <div class="grid grid-cols-4 gap-3">
        <IPCell
          v-for="ip in filteredIPs"
          :key="ip.ip"
          :ip="ip"
        />
      </div>

      <!-- Empty state -->
      <div v-if="filteredIPs.length === 0" class="text-center text-monokai-comment mt-10">
        <p>Aucune IP à afficher</p>
        <p class="text-sm mt-2">Ajustez les filtres ou lancez un scan</p>
      </div>
    </div>

    <!-- Legend -->
    <div class="bg-monokai-bg border-t border-monokai-comment p-3">
      <div class="flex gap-6 text-xs">
        <div class="flex items-center gap-2">
          <div class="w-4 h-4 rounded border-2 border-monokai-green bg-monokai-green/15"></div>
          <span class="text-monokai-text">En ligne (connue)</span>
        </div>
        <div class="flex items-center gap-2">
          <div class="w-4 h-4 rounded border-2 border-monokai-cyan bg-monokai-cyan/15"></div>
          <span class="text-monokai-text">En ligne (inconnue)</span>
        </div>
        <div class="flex items-center gap-2">
          <div class="w-4 h-4 rounded border-2 border-dashed border-monokai-pink bg-monokai-pink/10 opacity-50"></div>
          <span class="text-monokai-text">Hors ligne (connue)</span>
        </div>
        <div class="flex items-center gap-2">
          <div class="w-4 h-4 rounded border-2 border-dashed border-monokai-purple bg-monokai-purple/10 opacity-50"></div>
          <span class="text-monokai-text">Hors ligne (inconnue)</span>
        </div>
        <div class="flex items-center gap-2">
          <div class="w-4 h-4 rounded border-2 border-monokai-comment bg-monokai-comment/20"></div>
          <span class="text-monokai-text">Libre</span>
        </div>
      </div>
    </div>
  </div>
</template>

<script setup>
import { storeToRefs } from 'pinia'
import { useIPStore } from '@/stores/ipStore'
import IPCell from './IPCell.vue'

const ipStore = useIPStore()
const { filteredIPs, filters } = storeToRefs(ipStore)
</script>
129 frontend/src/components/IPGridTree.vue Normal file
@@ -0,0 +1,129 @@
<template>
  <div class="flex flex-col h-full">
    <!-- Filters -->
    <div class="bg-monokai-bg border-b border-monokai-comment p-3 flex gap-4 flex-wrap">
      <label class="flex items-center gap-2 cursor-pointer">
        <input type="checkbox" v-model="filters.showOnline" class="form-checkbox" />
        <span class="text-sm text-monokai-text">En ligne</span>
      </label>
      <label class="flex items-center gap-2 cursor-pointer">
        <input type="checkbox" v-model="filters.showOffline" class="form-checkbox" />
        <span class="text-sm text-monokai-text">Hors ligne</span>
      </label>
      <label class="flex items-center gap-2 cursor-pointer">
        <input type="checkbox" v-model="filters.showKnown" class="form-checkbox" />
        <span class="text-sm text-monokai-text">Connues</span>
      </label>
      <label class="flex items-center gap-2 cursor-pointer">
        <input type="checkbox" v-model="filters.showUnknown" class="form-checkbox" />
        <span class="text-sm text-monokai-text">Inconnues</span>
      </label>
      <label class="flex items-center gap-2 cursor-pointer">
        <input type="checkbox" v-model="filters.showFree" class="form-checkbox" />
        <span class="text-sm text-monokai-text">Libres</span>
      </label>
    </div>

    <!-- Tree layout grouped by subnet -->
    <div class="flex-1 overflow-auto p-4">
      <div v-for="subnet in organizedSubnets" :key="subnet.name" class="mb-6">
        <!-- Subnet header (tree style) -->
        <div class="flex items-center gap-2 mb-3 text-monokai-cyan border-l-4 border-monokai-cyan pl-3">
          <span class="font-bold text-lg">{{ subnet.name }}</span>
          <span class="text-sm text-monokai-comment">{{ subnet.cidr }}</span>
          <span class="text-sm text-monokai-comment ml-auto">
            {{ subnet.ips.length }} IPs
            ({{ subnet.stats.online }} en ligne)
          </span>
        </div>

        <!-- Compact grid of the subnet's IPs -->
        <div class="flex flex-wrap gap-2 pl-8">
          <IPCell
            v-for="ip in subnet.ips"
            :key="ip.ip"
            :ip="ip"
          />
        </div>
      </div>

      <!-- Empty state -->
      <div v-if="organizedSubnets.length === 0" class="text-center text-monokai-comment mt-10">
        <p>Aucune IP à afficher</p>
        <p class="text-sm mt-2">Ajustez les filtres ou lancez un scan</p>
      </div>
    </div>

    <!-- Legend -->
    <div class="bg-monokai-bg border-t border-monokai-comment p-3">
      <div class="flex gap-6 text-xs">
        <div class="flex items-center gap-2">
          <div class="w-4 h-4 rounded border-2 border-monokai-green bg-monokai-green/15"></div>
          <span class="text-monokai-text">En ligne (connue)</span>
        </div>
        <div class="flex items-center gap-2">
          <div class="w-4 h-4 rounded border-2 border-monokai-cyan bg-monokai-cyan/15"></div>
          <span class="text-monokai-text">En ligne (inconnue)</span>
        </div>
        <div class="flex items-center gap-2">
          <div class="w-4 h-4 rounded border-2 border-dashed border-monokai-pink bg-monokai-pink/10 opacity-50"></div>
          <span class="text-monokai-text">Hors ligne (connue)</span>
        </div>
        <div class="flex items-center gap-2">
          <div class="w-4 h-4 rounded border-2 border-dashed border-monokai-purple bg-monokai-purple/10 opacity-50"></div>
          <span class="text-monokai-text">Hors ligne (inconnue)</span>
        </div>
        <div class="flex items-center gap-2">
          <div class="w-4 h-4 rounded border-2 border-monokai-comment bg-monokai-comment/20"></div>
          <span class="text-monokai-text">Libre</span>
        </div>
      </div>
    </div>
  </div>
</template>

<script setup>
import { computed } from 'vue'
import { storeToRefs } from 'pinia'
import { useIPStore } from '@/stores/ipStore'
import IPCell from './IPCell.vue'

const ipStore = useIPStore()
const { filteredIPs, filters } = storeToRefs(ipStore)

// Subnet definitions (should come from the config; hardcoded for now)
const subnets = [
  { name: 'static_vm', cidr: '10.0.0.0/24', description: 'Machines virtuelles statiques' },
  { name: 'dhcp', cidr: '10.0.1.0/24', description: 'DHCP' },
  { name: 'iot', cidr: '10.0.2.0/24', description: 'IoT' }
]

// Group the IPs by subnet
const organizedSubnets = computed(() => {
  return subnets.map(subnet => {
    // Extract the subnet prefix (e.g. "10.0.0" for 10.0.0.0/24)
    const [oct1, oct2, oct3] = subnet.cidr.split('/')[0].split('.')
    const prefix = `${oct1}.${oct2}.${oct3}`

    // Keep only the IPs belonging to this subnet
    const subnetIPs = filteredIPs.value.filter(ip => {
      return ip.ip.startsWith(prefix + '.')
    })

    // Compute the stats
    const stats = {
      total: subnetIPs.length,
      online: subnetIPs.filter(ip => ip.last_status === 'online').length,
      offline: subnetIPs.filter(ip => ip.last_status === 'offline').length
    }

    return {
      name: subnet.name,
      cidr: subnet.cidr,
      description: subnet.description,
      ips: subnetIPs,
      stats
    }
  }).filter(subnet => subnet.ips.length > 0) // Only show subnets that contain IPs
})
</script>
119 frontend/src/components/NewDetections.vue Normal file
@@ -0,0 +1,119 @@
<template>
  <div class="h-full flex flex-col bg-monokai-bg border-l border-monokai-comment">
    <!-- Header -->
    <div class="p-4 border-b border-monokai-comment">
      <h2 class="text-xl font-bold text-monokai-pink">Nouvelles Détections</h2>
    </div>

    <!-- List -->
    <div class="flex-1 overflow-auto p-4">
      <div v-if="newIPs.length > 0" class="space-y-3">
        <div
          v-for="ip in newIPs"
          :key="ip.ip"
          @click="selectIP(ip)"
          class="p-3 rounded border-2 border-monokai-pink bg-monokai-pink/10 cursor-pointer hover:bg-monokai-pink/20 transition-colors"
        >
          <!-- IP -->
          <div class="font-mono font-bold text-monokai-text">
            {{ ip.ip }}
          </div>

          <!-- Status -->
          <div class="text-sm mt-1">
            <span
              :class="[
                'px-2 py-1 rounded text-xs',
                ip.last_status === 'online'
                  ? 'bg-monokai-green/20 text-monokai-green'
                  : 'bg-monokai-comment/20 text-monokai-comment'
              ]"
            >
              {{ ip.last_status || 'Inconnu' }}
            </span>
            <span
              v-if="!ip.known"
              class="ml-2 px-2 py-1 rounded text-xs bg-monokai-purple/20 text-monokai-purple"
            >
              Inconnue
            </span>
          </div>

          <!-- MAC and vendor -->
          <div v-if="ip.mac" class="text-xs text-monokai-comment mt-2 font-mono">
            {{ ip.mac }}
            <span v-if="ip.vendor" class="ml-1">({{ ip.vendor }})</span>
          </div>

          <!-- Timestamp -->
          <div class="text-xs text-monokai-comment mt-2">
            {{ formatTime(ip.first_seen) }}
          </div>
        </div>
      </div>

      <!-- Placeholder -->
      <div v-else class="text-center text-monokai-comment mt-10">
        <p>Aucune nouvelle IP détectée</p>
        <p class="text-sm mt-2">Les nouvelles IPs apparaîtront ici</p>
      </div>
    </div>
  </div>
</template>

<script setup>
import { computed } from 'vue'
import { storeToRefs } from 'pinia'
import { useIPStore } from '@/stores/ipStore'

const ipStore = useIPStore()
const { ips } = storeToRefs(ipStore)

// IPs first detected within the last 24 hours
const newIPs = computed(() => {
  const oneDayAgo = new Date(Date.now() - 24 * 60 * 60 * 1000)

  return ips.value
    .filter(ip => {
      if (!ip.first_seen) return false
      const firstSeen = new Date(ip.first_seen)
      return firstSeen > oneDayAgo
    })
    .sort((a, b) => {
      const dateA = new Date(a.first_seen)
      const dateB = new Date(b.first_seen)
      return dateB - dateA // Most recent first
    })
    .slice(0, 20) // Cap the list at 20 entries
})

function selectIP(ip) {
  ipStore.selectIP(ip)
}

function formatTime(dateString) {
  if (!dateString) return ''
  const date = new Date(dateString)
  const now = new Date()
  const diff = now - date

  // Less than a minute
  if (diff < 60000) {
    return 'À l\'instant'
  }

  // Less than an hour
  if (diff < 3600000) {
    const minutes = Math.floor(diff / 60000)
    return `Il y a ${minutes} min`
  }

  // Less than 24 hours
  if (diff < 86400000) {
    const hours = Math.floor(diff / 3600000)
    return `Il y a ${hours}h`
  }

  return date.toLocaleString('fr-FR')
}
</script>
10 frontend/src/main.js Normal file
@@ -0,0 +1,10 @@
import { createApp } from 'vue'
import { createPinia } from 'pinia'
import App from './App.vue'
import './assets/main.css'

const app = createApp(App)
const pinia = createPinia()

app.use(pinia)
app.mount('#app')
230 frontend/src/stores/ipStore.js Normal file
@@ -0,0 +1,230 @@
/**
 * Pinia store for IP management
 */
import { defineStore } from 'pinia'
import { ref, computed } from 'vue'
import axios from 'axios'

export const useIPStore = defineStore('ip', () => {
  // State
  const ips = ref([])
  const selectedIP = ref(null)
  const loading = ref(false)
  const error = ref(null)
  const stats = ref({
    total: 0,
    online: 0,
    offline: 0,
    known: 0,
    unknown: 0
  })

  // Filters
  const filters = ref({
    showOnline: true,
    showOffline: true,
    showKnown: true,
    showUnknown: true,
    showFree: true
  })

  // WebSocket
  const ws = ref(null)
  const wsConnected = ref(false)
  let heartbeatTimer = null

  // Computed
  const filteredIPs = computed(() => {
    return ips.value.filter(ip => {
      // Filter by status
      if (ip.last_status === 'online' && !filters.value.showOnline) return false
      if (ip.last_status === 'offline' && !filters.value.showOffline) return false

      // Filter by known/unknown
      if (ip.known && !filters.value.showKnown) return false
      if (!ip.known && !filters.value.showUnknown) return false

      // Filter free IPs (no last_status)
      if (!ip.last_status && !filters.value.showFree) return false

      return true
    })
  })

  // Actions
  async function fetchIPs() {
    loading.value = true
    error.value = null

    try {
      const response = await axios.get('/api/ips/')
      ips.value = response.data
      await fetchStats()
    } catch (err) {
      error.value = err.message
      console.error('Erreur chargement IPs:', err)
    } finally {
      loading.value = false
    }
  }

  async function fetchStats() {
    try {
      const response = await axios.get('/api/ips/stats/summary')
      stats.value = response.data
    } catch (err) {
      console.error('Erreur chargement stats:', err)
    }
  }

  async function updateIP(ipAddress, data) {
    try {
      const response = await axios.put(`/api/ips/${ipAddress}`, data)

      // Update the copy held in the store
      const index = ips.value.findIndex(ip => ip.ip === ipAddress)
      if (index !== -1) {
        ips.value[index] = response.data
      }

      if (selectedIP.value?.ip === ipAddress) {
        selectedIP.value = response.data
      }

      return response.data
    } catch (err) {
      error.value = err.message
      throw err
    }
  }

  async function getIPHistory(ipAddress, hours = 24) {
    try {
      const response = await axios.get(`/api/ips/${ipAddress}/history?hours=${hours}`)
      return response.data
    } catch (err) {
      console.error('Erreur chargement historique:', err)
      throw err
    }
  }

  async function startScan() {
    try {
      await axios.post('/api/scan/start')
    } catch (err) {
      error.value = err.message
      throw err
    }
  }

  function selectIP(ip) {
    selectedIP.value = ip
  }

  function clearSelection() {
    selectedIP.value = null
  }

  // WebSocket
  function connectWebSocket() {
    const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:'
    const wsUrl = `${protocol}//${window.location.host}/ws`

    ws.value = new WebSocket(wsUrl)

    ws.value.onopen = () => {
      console.log('WebSocket connecté')
      wsConnected.value = true

      // Heartbeat every 30s; clear any previous timer so reconnects don't stack intervals
      clearInterval(heartbeatTimer)
      heartbeatTimer = setInterval(() => {
        if (ws.value?.readyState === WebSocket.OPEN) {
          ws.value.send('ping')
        }
      }, 30000)
    }

    ws.value.onmessage = (event) => {
      try {
        const message = JSON.parse(event.data)
        handleWebSocketMessage(message)
      } catch (err) {
        console.error('Erreur parsing WebSocket:', err)
      }
    }

    ws.value.onerror = (error) => {
      console.error('Erreur WebSocket:', error)
      wsConnected.value = false
    }

    ws.value.onclose = () => {
      console.log('WebSocket déconnecté')
      wsConnected.value = false

      // Reconnect after 5s
      setTimeout(connectWebSocket, 5000)
    }
  }

  function handleWebSocketMessage(message) {
    console.log('Message WebSocket:', message)

    switch (message.type) {
      case 'scan_start':
        // Scan-start notification
        break

      case 'scan_complete':
        // Refresh the data after a scan
        fetchIPs()
        stats.value = message.stats
        break

      case 'ip_update': {
        // Single-IP update (a block is required for a `const` declared inside a case)
        const updatedIP = ips.value.find(ip => ip.ip === message.data.ip)
        if (updatedIP) {
          Object.assign(updatedIP, message.data)
        }
        break
      }

      case 'new_ip':
        // New IP detected: reload everything to be safe
        fetchIPs()
        break
    }
  }

  function disconnectWebSocket() {
    if (ws.value) {
      ws.value.close()
      ws.value = null
      wsConnected.value = false
    }
  }

  return {
    // State
    ips,
    selectedIP,
    loading,
    error,
    stats,
    filters,
    wsConnected,

    // Computed
    filteredIPs,

    // Actions
    fetchIPs,
    fetchStats,
    updateIP,
    getIPHistory,
    startScan,
    selectIP,
    clearSelection,
    connectWebSocket,
    disconnectWebSocket
  }
})
26 frontend/tailwind.config.js Normal file
@@ -0,0 +1,26 @@
/** @type {import('tailwindcss').Config} */
export default {
  content: [
    "./index.html",
    "./src/**/*.{vue,js,ts,jsx,tsx}",
  ],
  theme: {
    extend: {
      colors: {
        // Monokai palette (guidelines-css.md)
        monokai: {
          bg: '#272822',
          text: '#F8F8F2',
          comment: '#75715E',
          green: '#A6E22E',
          pink: '#F92672',
          cyan: '#66D9EF',
          purple: '#AE81FF',
          yellow: '#E6DB74',
          orange: '#FD971F',
        },
      },
    },
  },
  plugins: [],
}
25 frontend/vite.config.js Normal file
@@ -0,0 +1,25 @@
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
import { fileURLToPath, URL } from 'node:url'

export default defineConfig({
  plugins: [vue()],
  resolve: {
    alias: {
      '@': fileURLToPath(new URL('./src', import.meta.url))
    }
  },
  server: {
    port: 3000,
    proxy: {
      '/api': {
        target: 'http://localhost:8080',
        changeOrigin: true
      },
      '/ws': {
        target: 'ws://localhost:8080',
        ws: true
      }
    }
  }
})
17 guidelines-css.md Normal file
@@ -0,0 +1,17 @@
# guidelines-css.md

## Monokai palette
- backgrounds: #272822
- text: #F8F8F2
- accents: #A6E22E, #F92672, #66D9EF

## IP cells
- Color depends on state
- Online border: solid
- Offline border: dashed
- Offline transparency configurable
- Animated ping halo (CSS keyframes)

## Responsive
- fluid grid
- collapsible columns
28 modele-donnees.md Normal file
@@ -0,0 +1,28 @@
# modele-donnees.md

## SQLite tables

### Table ip
- ip (PK)
- name
- known (bool)
- location
- host
- first_seen
- last_seen
- last_status
- mac
- vendor
- hostname
- open_ports (JSON)

### Table ip_history
- id
- ip (FK)
- timestamp
- status
- open_ports (JSON)

## Recommended indexes
- index on last_status
- index on ip_history.timestamp
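The schema above can be sketched directly in raw SQLite, including the two recommended indexes. This is a minimal illustration with assumed column types (the project itself defines these tables through SQLAlchemy models, and `known` is a boolean stored as 0/1):

```python
import json
import sqlite3

# In-memory sketch of the two tables described above
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ip (
    ip          TEXT PRIMARY KEY,
    name        TEXT,
    known       INTEGER DEFAULT 0,   -- bool stored as 0/1
    location    TEXT,
    host        TEXT,
    first_seen  TEXT,
    last_seen   TEXT,
    last_status TEXT,
    mac         TEXT,
    vendor      TEXT,
    hostname    TEXT,
    open_ports  TEXT                 -- JSON array serialized as text
);
CREATE TABLE ip_history (
    id         INTEGER PRIMARY KEY AUTOINCREMENT,
    ip         TEXT NOT NULL REFERENCES ip(ip),
    timestamp  TEXT NOT NULL,
    status     TEXT,
    open_ports TEXT
);
-- Recommended indexes
CREATE INDEX idx_ip_last_status       ON ip(last_status);
CREATE INDEX idx_ip_history_timestamp ON ip_history(timestamp);
""")

# open_ports round-trips through json.dumps/json.loads
conn.execute(
    "INSERT INTO ip (ip, known, last_status, open_ports) VALUES (?, ?, ?, ?)",
    ("10.0.0.5", 1, "online", json.dumps([22, 443])),
)
row = conn.execute("SELECT open_ports FROM ip WHERE ip = '10.0.0.5'").fetchone()
print(json.loads(row[0]))  # → [22, 443]
```

Storing `open_ports` as serialized JSON keeps the schema simple at the cost of not being able to index or query individual ports; that trade-off fits a small scanner where ports are only displayed per IP.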
24 prompt-claude-code.md Normal file
@@ -0,0 +1,24 @@
# prompt-claude-code.md

## Role
You are Claude Code. You generate a complete project with a FastAPI backend, a Vue 3 frontend, and Docker, based on the specifications provided in:
- consigne-parametrage.md
- consigne-design_webui.md
- modele-donnees.md
- architecture-technique.md
- workflow-scan.md
- guidelines-css.md
- tests-backend.md

## Goal
Build the IPWatch application: a network-scanner WebUI for visualizing free, known, and unknown IPs, network states, open ports, 24-hour history, and a YAML configuration.

## Deliverables
1. Complete project structure
2. FastAPI backend code
3. SQLAlchemy models
4. Scan tasks (ping, ARP, ports)
5. WebSockets + REST API
6. Vue 3 + Tailwind frontend
7. Dockerfile + docker-compose
8. Backend tests
20 pytest.ini Normal file
@@ -0,0 +1,20 @@
[pytest]
# pytest configuration for IPWatch
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*

# Default options
addopts =
    -v
    --tb=short
    --strict-markers

# Markers
markers =
    asyncio: marks asynchronous tests
    integration: marks integration tests
    unit: marks unit tests

asyncio_mode = auto
69 start.sh Normal file
@@ -0,0 +1,69 @@
#!/bin/bash
# IPWatch quick-start script

set -e

echo "========================================="
echo " IPWatch - Scanner Réseau Temps Réel"
echo "========================================="
echo ""

# Check that Docker is installed
if ! command -v docker &> /dev/null; then
    echo "❌ Docker n'est pas installé"
    echo "Installez Docker depuis: https://docs.docker.com/get-docker/"
    exit 1
fi

# Check that the Docker Compose plugin is available
# (`command -v docker compose` would only test `docker`, so probe the plugin directly)
if ! docker compose version &> /dev/null; then
    echo "❌ Docker Compose n'est pas installé"
    echo "Installez Docker Compose depuis: https://docs.docker.com/compose/install/"
    exit 1
fi

# Create the required directories
echo "📁 Création des dossiers..."
mkdir -p data logs

# Check the config
if [ ! -f config.yaml ]; then
    echo "⚠️  config.yaml non trouvé"
    echo "Veuillez créer un fichier config.yaml avec votre configuration réseau"
    exit 1
fi

# Build the image
echo ""
echo "🔨 Construction de l'image Docker..."
docker compose build

# Start
echo ""
echo "🚀 Démarrage d'IPWatch..."
docker compose up -d

# Wait for the service to be ready
echo ""
echo "⏳ Attente du démarrage du service..."
sleep 5

# Check the state (use `docker compose` consistently, not the legacy `docker-compose`)
if docker compose ps | grep -q "Up"; then
    echo ""
    echo "✅ IPWatch est démarré avec succès!"
    echo ""
    echo "📊 Accédez à l'interface web:"
    echo "   👉 http://localhost:8080"
    echo ""
    echo "📝 Commandes utiles:"
    echo "   - Logs: docker compose logs -f"
    echo "   - Arrêter: docker compose down"
    echo "   - Redémarrer: docker compose restart"
    echo ""
else
    echo ""
    echo "❌ Erreur lors du démarrage"
    echo "Consultez les logs: docker compose logs"
    exit 1
fi
14 tests-backend.md Normal file
@@ -0,0 +1,14 @@
# tests-backend.md

## Unit tests
- test_ping()
- test_port_scan()
- test_classification()
- test_sqlalchemy_models()
- test_api_get_ip()
- test_api_update_ip()
- test_scheduler()

## Integration tests
- full scan of a simulated network
- WebSocket notifications
1 tests/__init__.py Normal file
@@ -0,0 +1 @@
# IPWatch tests
123 tests/test_api.py Normal file
@@ -0,0 +1,123 @@
"""
Tests for the API endpoints
"""
import pytest
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from backend.app.main import app
from backend.app.core.database import Base, get_db
from backend.app.models.ip import IP


# Test database setup
@pytest.fixture
def test_db():
    """Test database fixture"""
    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)
    TestingSessionLocal = sessionmaker(bind=engine)
    return TestingSessionLocal


@pytest.fixture
def client(test_db):
    """Test client fixture"""
    def override_get_db():
        db = test_db()
        try:
            yield db
        finally:
            db.close()

    app.dependency_overrides[get_db] = override_get_db
    return TestClient(app)


class TestAPIEndpoints:
    """Tests for the API endpoints"""

    def test_root_endpoint(self, client):
        """Test the root endpoint"""
        response = client.get("/")
        assert response.status_code == 200
        data = response.json()
        assert "name" in data
        assert data["name"] == "IPWatch API"

    def test_health_check(self, client):
        """Test the health check"""
        response = client.get("/health")
        assert response.status_code == 200
        data = response.json()
        assert "status" in data
        assert data["status"] == "healthy"

    def test_get_all_ips_empty(self, client):
        """Test fetching IPs (empty)"""
        response = client.get("/api/ips/")
        assert response.status_code == 200
        data = response.json()
        assert isinstance(data, list)
        assert len(data) == 0

    def test_get_stats_empty(self, client):
        """Test stats with an empty DB"""
        response = client.get("/api/ips/stats/summary")
        assert response.status_code == 200
        data = response.json()
        assert data["total"] == 0
        assert data["online"] == 0
        assert data["offline"] == 0

    def test_get_ip_not_found(self, client):
        """Test fetching a nonexistent IP"""
        response = client.get("/api/ips/192.168.1.100")
        assert response.status_code == 404

    def test_update_ip(self, client, test_db):
        """Test updating an IP"""
        # Create an IP first
        db = test_db()
        ip = IP(
            ip="192.168.1.100",
            name="Test",
            known=False,
            last_status="online"
        )
        db.add(ip)
        db.commit()
        db.close()

        # Update it through the API
        update_data = {
            "name": "Updated Name",
            "known": True,
            "location": "Bureau"
        }

        response = client.put("/api/ips/192.168.1.100", json=update_data)
        assert response.status_code == 200

        data = response.json()
        assert data["name"] == "Updated Name"
        assert data["known"] is True
        assert data["location"] == "Bureau"

    def test_delete_ip(self, client, test_db):
        """Test deleting an IP"""
        # Create an IP
        db = test_db()
        ip = IP(ip="192.168.1.101", last_status="online")
        db.add(ip)
        db.commit()
        db.close()

        # Delete it through the API
        response = client.delete("/api/ips/192.168.1.101")
        assert response.status_code == 200

        # Verify the deletion
        response = client.get("/api/ips/192.168.1.101")
        assert response.status_code == 404
134
tests/test_models.py
Normal file
@@ -0,0 +1,134 @@
"""
Tests for the SQLAlchemy models
"""
import pytest
from datetime import datetime
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from backend.app.core.database import Base
from backend.app.models.ip import IP, IPHistory


class TestSQLAlchemyModels:
    """Tests for the data models"""

    @pytest.fixture
    def db_session(self):
        """Fixture: in-memory DB session"""
        # Create an in-memory SQLite database
        engine = create_engine("sqlite:///:memory:")
        Base.metadata.create_all(engine)

        Session = sessionmaker(bind=engine)
        session = Session()

        yield session

        session.close()

    def test_create_ip(self, db_session):
        """Test creating an IP"""
        ip = IP(
            ip="192.168.1.100",
            name="Test Server",
            known=True,
            location="Bureau",
            host="Serveur",
            last_status="online",
            mac="00:11:22:33:44:55",
            vendor="Dell",
            open_ports=[22, 80, 443]
        )

        db_session.add(ip)
        db_session.commit()

        # Verify the record was created
        retrieved = db_session.query(IP).filter(IP.ip == "192.168.1.100").first()
        assert retrieved is not None
        assert retrieved.name == "Test Server"
        assert retrieved.known is True
        assert retrieved.last_status == "online"
        assert len(retrieved.open_ports) == 3

    def test_create_ip_history(self, db_session):
        """Test creating an IP history entry"""
        # Create an IP first
        ip = IP(
            ip="192.168.1.101",
            last_status="online"
        )
        db_session.add(ip)
        db_session.commit()

        # Create a history entry
        history = IPHistory(
            ip="192.168.1.101",
            timestamp=datetime.utcnow(),
            status="online",
            open_ports=[80, 443]
        )

        db_session.add(history)
        db_session.commit()

        # Verify
        retrieved = db_session.query(IPHistory).filter(
            IPHistory.ip == "192.168.1.101"
        ).first()

        assert retrieved is not None
        assert retrieved.status == "online"
        assert len(retrieved.open_ports) == 2

    def test_ip_history_relationship(self, db_session):
        """Test the IP <-> IPHistory relationship"""
        # Create an IP
        ip = IP(
            ip="192.168.1.102",
            last_status="online"
        )
        db_session.add(ip)
        db_session.commit()

        # Create several history entries
        for i in range(5):
            history = IPHistory(
                ip="192.168.1.102",
                status="online" if i % 2 == 0 else "offline",
                open_ports=[]
            )
            db_session.add(history)

        db_session.commit()

        # Verify the relationship
        ip = db_session.query(IP).filter(IP.ip == "192.168.1.102").first()
        assert len(ip.history) == 5

    def test_cascade_delete(self, db_session):
        """Test cascading delete"""
        # Create IP + history
        ip = IP(ip="192.168.1.103", last_status="online")
        db_session.add(ip)
        db_session.commit()

        history = IPHistory(
            ip="192.168.1.103",
            status="online",
            open_ports=[]
        )
        db_session.add(history)
        db_session.commit()

        # Delete the IP
        db_session.delete(ip)
        db_session.commit()

        # Verify the history entries were deleted as well
        history_count = db_session.query(IPHistory).filter(
            IPHistory.ip == "192.168.1.103"
        ).count()

        assert history_count == 0
98
tests/test_network.py
Normal file
@@ -0,0 +1,98 @@
"""
Unit tests for the network modules
Based on tests-backend.md
"""
import pytest
import asyncio
from backend.app.services.network import NetworkScanner


class TestNetworkScanner:
    """Tests for the network scanner"""

    @pytest.fixture
    def scanner(self):
        """Fixture: scanner with a test network"""
        return NetworkScanner(cidr="192.168.1.0/24", timeout=1.0)

    def test_generate_ip_list(self, scanner):
        """Test generating the IP list from a CIDR"""
        ip_list = scanner.generate_ip_list()

        # Check the number of IPs (254 for a /24)
        assert len(ip_list) == 254

        # Check the format
        assert "192.168.1.1" in ip_list
        assert "192.168.1.254" in ip_list
        assert "192.168.1.0" not in ip_list  # Network address excluded
        assert "192.168.1.255" not in ip_list  # Broadcast excluded

    @pytest.mark.asyncio
    async def test_ping(self, scanner):
        """Test the ping function"""
        # Ping localhost (should succeed)
        result = await scanner.ping("127.0.0.1")
        assert result is True

        # Ping an unlikely IP (should fail quickly)
        result = await scanner.ping("192.0.2.1")
        assert result is False

    @pytest.mark.asyncio
    async def test_ping_parallel(self, scanner):
        """Test parallelized ping"""
        ip_list = ["127.0.0.1", "192.0.2.1", "192.0.2.2"]

        results = await scanner.ping_parallel(ip_list, max_concurrent=10)

        # Check that all results are present
        assert len(results) == 3
        assert "127.0.0.1" in results
        assert results["127.0.0.1"] is True

    def test_classification(self, scanner):
        """Test IP status classification"""
        # Online + known IP
        status = scanner.classify_ip_status(is_online=True, is_known=True)
        assert status == "online"

        # Offline + known IP
        status = scanner.classify_ip_status(is_online=False, is_known=True)
        assert status == "offline"

        # Online + unknown IP
        status = scanner.classify_ip_status(is_online=True, is_known=False)
        assert status == "online"

        # Offline + unknown IP
        status = scanner.classify_ip_status(is_online=False, is_known=False)
        assert status == "offline"

    @pytest.mark.asyncio
    async def test_port_scan(self, scanner):
        """Test port scanning"""
        # Scan common ports on localhost
        ports = [22, 80, 443, 9999]  # 9999 is probably closed

        open_ports = await scanner.scan_ports("127.0.0.1", ports)

        # At minimum, check that the function returns a list
        assert isinstance(open_ports, list)

        # Every returned port must be in the requested list
        for port in open_ports:
            assert port in ports

    def test_get_mac_vendor(self, scanner):
        """Test MAC vendor lookup"""
        # Test with known MAC prefixes
        vendor = scanner._get_mac_vendor("00:0C:29:XX:XX:XX")
        assert vendor == "VMware"

        vendor = scanner._get_mac_vendor("B8:27:EB:XX:XX:XX")
        assert vendor == "Raspberry Pi"

        # Unknown MAC
        vendor = scanner._get_mac_vendor("AA:BB:CC:DD:EE:FF")
        assert vendor == "Unknown"
76
tests/test_scheduler.py
Normal file
@@ -0,0 +1,76 @@
"""
Tests for the APScheduler-based scheduler
"""
import pytest
import asyncio
from backend.app.services.scheduler import ScanScheduler


class TestScheduler:
    """Tests for the scheduler"""

    @pytest.fixture
    def scheduler(self):
        """Fixture: scheduler"""
        sched = ScanScheduler()
        yield sched
        if sched.is_running:
            sched.stop()

    def test_scheduler_start_stop(self, scheduler):
        """Test starting and stopping the scheduler"""
        assert scheduler.is_running is False

        scheduler.start()
        assert scheduler.is_running is True

        scheduler.stop()
        assert scheduler.is_running is False

    def test_add_ping_scan_job(self, scheduler):
        """Test adding a ping scan job"""
        scheduler.start()

        async def dummy_scan():
            pass

        scheduler.add_ping_scan_job(dummy_scan, interval_seconds=60)

        jobs = scheduler.get_jobs()
        job_ids = [job.id for job in jobs]

        assert 'ping_scan' in job_ids

    def test_add_port_scan_job(self, scheduler):
        """Test adding a port scan job"""
        scheduler.start()

        async def dummy_scan():
            pass

        scheduler.add_port_scan_job(dummy_scan, interval_seconds=300)

        jobs = scheduler.get_jobs()
        job_ids = [job.id for job in jobs]

        assert 'port_scan' in job_ids

    def test_remove_job(self, scheduler):
        """Test removing a job"""
        scheduler.start()

        async def dummy_scan():
            pass

        scheduler.add_ping_scan_job(dummy_scan, interval_seconds=60)

        # Verify the job is present
        jobs = scheduler.get_jobs()
        assert len(jobs) == 1

        # Remove it
        scheduler.remove_job('ping_scan')

        # Verify it is gone
        jobs = scheduler.get_jobs()
        assert len(jobs) == 0
14
workflow-scan.md
Normal file
@@ -0,0 +1,14 @@
# workflow-scan.md

## Pipeline

1. Load the YAML configuration
2. Generate the IP list from the CIDR
3. Ping (parallelized)
4. ARP + MAC vendor lookup
5. Port scan at the configured interval
6. Status classification
7. SQLite update
8. New-IP detection
9. WebSocket push
10. UI update