first commit

This commit is contained in:
Ladebeze66 2025-04-02 11:43:26 +02:00
parent e2e2b25ab4
commit d5256dd322
125 changed files with 31215 additions and 69 deletions

File diff suppressed because one or more lines are too long

236
README.md
View File

@ -1,107 +1,199 @@
# Support Ticket Analysis System

This support ticket analysis system processes ticket data to extract the relevant information, analyze the attached images, and generate analysis reports.

## Architecture

The system is structured in a modular way, with distinct processing steps that can be run independently:

1. **Data extraction** (`extract_ticket.py`) - Cleans and prepares the raw ticket data
2. **Image filtering** (`filter_images.py`) - Identifies the relevant images among the attachments
3. **Image analysis** (`analyze_image_contexte.py`) - Analyzes the relevant images against the ticket context
4. **Ticket analysis** (`analyze_ticket.py`) - Analyzes the ticket content to extract the key information
5. **Questions and answers** (`extract_question_reponse.py`) - Extracts the question/answer pairs from the ticket

These steps can be run individually, or as a complete sequence through the main script (`processus_complet.py`); the sketch below shows the pattern.
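`processus_complet.py` drives each step as a subprocess of the current Python interpreter and stops at the first failing step. Below is a minimal sketch of that pattern, simplified from `executer_commande()` in the actual script (which also logs stdout/stderr to `processus_complet.log`); `run_step` is an illustrative name, not a function from the repository:

```python
import subprocess
import sys

def run_step(args: list[str], description: str) -> bool:
    """Run one pipeline step as a child process; report success or failure."""
    result = subprocess.run([sys.executable] + args, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"Step failed: {description}\n{result.stderr}")
        return False
    return True

# Chain the first two steps for ticket T0167 (paths follow the folder layout shown later).
if run_step(["scripts/extract_ticket.py", "output/ticket_T0167",
             "--output-dir", "output_processed/ticket_T0167"], "data extraction"):
    run_step(["scripts/filter_images.py",
              "--dossier-ticket", "output_processed/ticket_T0167"], "image filtering")
```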
## Prerequisites

- Python 3.9+
- The required libraries (listed in `requirements.txt`)
- An API key for the language models used (configured in `config.json`)

## Installation

```bash
# Clone the repository
git clone <REPO_URL>
cd <FOLDER_NAME>

# Install the dependencies
pip install -r requirements.txt

# Configure the API key
cp config.json.example config.json
# Edit config.json to add your API key
```

## Configuration

Create a `config.json` file at the root of the project with the following content:

```json
{
    "llm": {
        "api_key": "your-api-key-here",
        "api_base": "https://api.mistral.ai/v1",
        "organization": "your-organization"
    }
}
```
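Every script loads this file from the project root; `charger_config()` in `scripts/analyze_image_contexte.py` follows the pattern sketched below (simplified: the real function also logs a warning when the file is missing, and `load_config` is an illustrative name):

```python
import json
import os

def load_config(project_root: str = ".") -> dict:
    """Load config.json, falling back to an empty API key when it is absent."""
    config_path = os.path.join(project_root, "config.json")
    if not os.path.exists(config_path):
        return {"llm": {"api_key": None}}
    with open(config_path, "r", encoding="utf-8") as f:
        return json.load(f)

api_key = load_config().get("llm", {}).get("api_key")
```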
## Usage

### Complete process

To run the whole analysis process on a ticket:

```bash
python scripts/processus_complet.py --ticket T0167
```

Available options:

- `--ticket` or `-t`: Code of the ticket to analyze (required)
- `--source` or `-s`: Source folder containing the raw tickets (default: `output/`)
- `--output` or `-o`: Output folder for the results (default: `output_processed/`)
- `--verbose` or `-v`: Print more information

### Individual steps

You can also run a single, specific step:

```bash
python scripts/processus_complet.py --ticket T0167 --etapes extraction
```

Available steps:

- `extraction`: Extraction and cleaning of the ticket data
- `filtrage`: Filtering of the relevant images
- `analyse_images`: Analysis of the relevant images
- `analyse_ticket`: Analysis of the ticket content
- `questions_reponses`: Extraction of the questions and answers
- `tout`: Runs all of the steps (default)
### Individual scripts

For finer control, you can also run the individual scripts directly:

#### 1. Data extraction

```bash
python scripts/extract_ticket.py output/ticket_T0167 --output-dir output_processed/ticket_T0167
```

#### 2. Image filtering

```bash
python scripts/filter_images.py --dossier-ticket output_processed/ticket_T0167
```

#### 3. Image analysis

```bash
python scripts/analyze_image_contexte.py --image path/to/image.jpg --ticket-info output_processed/ticket_T0167/ticket_info.json
```

#### 4. Ticket analysis

```bash
python scripts/analyze_ticket.py --messages output_processed/ticket_T0167/messages.json --images-rapport output_processed/ticket_T0167/filter_report.json
```

#### 5. Questions and answers

```bash
python scripts/extract_question_reponse.py --messages output_processed/ticket_T0167/messages.json
```
## Folder structure

```
.
├── config.json                 # Configuration (API keys, etc.)
├── main.py                     # Original main script (kept for compatibility)
├── post_process.py             # Original post-processing (kept for compatibility)
├── requirements.txt            # Project dependencies
├── scripts/                    # Modular scripts
│   ├── analyze_image_contexte.py   # Image analysis with context
│   ├── analyze_ticket.py           # Ticket analysis
│   ├── extract_question_reponse.py # Question/answer extraction
│   ├── extract_ticket.py           # Data extraction and cleaning
│   ├── filter_images.py            # Image filtering
│   └── processus_complet.py        # Orchestration of the complete process
├── output/                     # Raw ticket data
│   └── ticket_TXXXX/           # Folder for one raw ticket
├── output_processed/           # Processed data and results
│   └── ticket_TXXXX/           # Folder for one processed ticket
│       ├── messages.json           # Cleaned messages
│       ├── ticket_info.json        # Ticket information
│       ├── attachments/            # Attachments
│       ├── filter_report.json      # Image filtering report
│       ├── images_analyses/        # Image analyses
│       ├── questions_reponses.md   # Extracted questions and answers
│       └── rapport/                # Analysis reports
├── agents/                     # Analysis agents (kept for compatibility)
├── llm/                        # Interfaces to the language models
└── utils/                      # Shared utilities
```
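Every path in this tree is derived from the ticket code; `processus_complet.py` builds the two per-ticket directories like this (excerpted from the script):

```python
import os

ticket_code = "T0167"
ticket_dir_source = os.path.join("output", f"ticket_{ticket_code}")             # raw data
ticket_dir_sortie = os.path.join("output_processed", f"ticket_{ticket_code}")   # results
```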
## Troubleshooting

### Common problems

1. **Messages not processed correctly**:
   - Run `extract_ticket.py` with the `--verbose` option to see the processing details
   - Check that the `messages.json` file is correctly formatted

2. **Images not detected**:
   - Make sure the images are in the `attachments/` folder
   - Check the supported image formats (.jpg, .png, .gif, etc.)

3. **LLM errors**:
   - Check that your API key is valid and correctly set in `config.json`
   - Make sure you have a stable internet connection
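4. **Import errors on the local `llm` module**:
   - The step scripts import the local `llm` package and exit with `Module LLM non trouvé...` when the project root is not importable (this is the error recorded in `filter_images.log` in this commit)
   - Run the scripts from the project root with the root on `PYTHONPATH`, for example `PYTHONPATH=. python scripts/filter_images.py ...`

   As a sketch, a script could also make itself self-locating before the import; this bootstrap is hypothetical and not part of the shipped scripts:

   ```python
   import os
   import sys

   # Hypothetical bootstrap: put the project root (the parent of scripts/) on
   # sys.path so that `from llm import Pixtral` resolves when a script is run
   # directly from anywhere.
   PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
   if PROJECT_ROOT not in sys.path:
       sys.path.insert(0, PROJECT_ROOT)

   from llm import Pixtral  # resolves once the project root is on sys.path
   ```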
### Logs

Each script writes a log file in the working directory:

- `extract_ticket.log`
- `filter_images.log`
- `analyze_image.log`
- `analyze_ticket.log`
- `extract_qr.log`
- `processus_complet.log`

Check these files for details about any errors encountered.
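For reference, every script sets up its logging the same way; this is the configuration used in `scripts/processus_complet.py` (each script substitutes its own log file name and logger name):

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler("processus_complet.log"),  # per-script log file
        logging.StreamHandler()                        # mirrored to the console
    ]
)
logger = logging.getLogger("processus_complet")
```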
## Examples

### Example 1: Analyze a complete ticket

```bash
python scripts/processus_complet.py --ticket T0167 --verbose
```

### Example 2: Extract only the questions and answers

```bash
python scripts/extract_question_reponse.py --messages output/ticket_T0167/messages.json --output output/ticket_T0167/questions_reponses.md
```

### Example 3: Re-analyze a ticket after changes

```bash
# First, clean the data
python scripts/extract_ticket.py output/ticket_T0167 --output-dir output_processed/ticket_T0167

# Then extract the questions and answers
python scripts/extract_question_reponse.py --messages output_processed/ticket_T0167/messages.json
```

4
extract_ticket.log Normal file
View File

@ -0,0 +1,4 @@
2025-04-02 11:39:56,293 - extract_ticket - INFO - Prétraitement du ticket: output/ticket_T0167 -> output_processed/ticket_T0167
2025-04-02 11:39:56,296 - extract_ticket - INFO - Ticket info prétraité et sauvegardé: output_processed/ticket_T0167/ticket_info.json
2025-04-02 11:39:56,297 - extract_ticket - INFO - Messages prétraités et sauvegardés: output_processed/ticket_T0167/messages.json (2 messages)
2025-04-02 11:39:56,297 - extract_ticket - INFO - Rapport de prétraitement sauvegardé: output_processed/ticket_T0167/pretraitement_rapport.json

1
filter_images.log Normal file
View File

@ -0,0 +1 @@
2025-04-02 11:41:31,283 - filter_images - ERROR - Module LLM non trouvé. Veuillez vous assurer que le répertoire parent est dans PYTHONPATH.

View File

@ -0,0 +1,14 @@
{
"folders": [
{
"path": "."
},
{
"path": "../odoo_toolkit"
},
{
"path": "../llm-ticket2"
}
],
"settings": {}
}

output_processed/ticket_T0167/messages.json
View File

@ -0,0 +1,38 @@
[
{
"id": "ticket_info",
"name": "Pb d'affaire/chantier/partie dans un programme d'essai",
"code": "T0167",
"description": "Je viens vers toi car Mr NOVO ma fait remonter un léger beug sur le numéro déchantillon B2020-0001 (Voir PJ). En effet, il narrive pas à mettre le nom de la partie dans la partie ( en rouge sur la PJ). Il faudrait mettre « joint de chaussée côté giberville » comme stipulé dans le numéro daffaire -> 20017 SETR -> LIAISON RD403 RD402 DESSERTE PORTUAIRE VIADUC -> JOINT DE CHAUSSEE COTE GIBERVILLE. Jai essayé de modifié la partie mais je ny arrive pas.",
"date_create": "2020-04-27 06:21:36",
"role": "system",
"type": "contexte",
"body": "TICKET T0167: Pb d'affaire/chantier/partie dans un programme d'essai.\n\nDESCRIPTION: Je viens vers toi car Mr NOVO ma fait remonter un léger beug sur le numéro déchantillon B2020-0001 (Voir PJ). En effet, il narrive pas à mettre le nom de la partie dans la partie ( en rouge sur la PJ). Il faudrait mettre « joint de chaussée côté giberville » comme stipulé dans le numéro daffaire -> 20017 SETR -> LIAISON RD403 RD402 DESSERTE PORTUAIRE VIADUC -> JOINT DE CHAUSSEE COTE GIBERVILLE. Jai essayé de modifié la partie mais je ny arrive pas."
},
{
"id": "ticket_info",
"author_id": [
0,
""
],
"role": "Client",
"type": "Question",
"date": "",
"email_from": "",
"subject": "",
"body": "TICKET T0167: Pb d'affaire/chantier/partie dans un programme d'essai. DESCRIPTION: Je viens vers toi car Mr NOVO ma fait remonter un léger beug sur le numéro déchantillon B2020-0001 (Voir PJ). En effet, il narrive pas à mettre le nom de la partie dans la partie ( en rouge sur la PJ). Il faudrait mettre « joint de chaussée côté giberville » comme stipulé dans le numéro daffaire -> 20017 SETR -> LIAISON RD403 RD402 DESSERTE PORTUAIRE VIADUC -> JOINT DE CHAUSSEE COTE GIBERVILLE. Jai essayé de modifié la partie mais je ny arrive pas."
},
{
"id": "11333",
"author_id": [
10288,
"CBAO S.A.R.L., Youness BENDEQ"
],
"role": "Support",
"type": "Réponse",
"date": "2020-04-27 06:20:22",
"email_from": "Youness BENDEQ <youness.bendeq@cbao.fr>",
"subject": "Pb d'affaire/chantier/partie dans un programme d'essai",
"body": "-------- Message transféré -------- Sujet : De retour ! Date : Mon, 20 Apr 2020 14:52:05 +0000 De : LENEVEU Guillaume Pour : Youness BENDEQ Bonjour Youness, Jespère que tu vas bien ainsi que toute léquipe BRG-LAB. Je viens vers toi car Mr NOVO ma fait remonter un léger beug sur le numéro déchantillon B2020-0001 (Voir PJ). En effet, il narrive pas à mettre le nom de la partie dans la partie ( en rouge sur la PJ). Il faudrait mettre « joint de chaussée côté giberville » comme stipulé dans le numéro daffaire -> 20017 SETR -> LIAISON RD403 RD402 DESSERTE PORTUAIRE VIADUC -> JOINT DE CHAUSSEE COTE GIBERVILLE. Jai essayé de modifié la partie mais je ny arrive pas. Merci de ta réponse. Bonne fin de journée."
}
]

output_processed/ticket_T0167/pretraitement_rapport.json
View File

@ -0,0 +1,8 @@
{
"ticket_id": "ticket_T0167",
"fichiers_generes": [
"ticket_info.json",
"messages.json"
],
"erreurs": []
}

output_processed/ticket_T0167/ticket_info.json
View File

@ -0,0 +1,145 @@
{
"id": 179,
"active": true,
"name": "Pb d'affaire/chantier/partie dans un programme d'essai",
"description": "Je viens vers toi car Mr NOVO ma fait remonter un léger beug sur le numéro déchantillon B2020-0001 (Voir PJ). En effet, il narrive pas à mettre le nom de la partie dans la partie ( en rouge sur la PJ). Il faudrait mettre « joint de chaussée côté giberville » comme stipulé dans le numéro daffaire -> 20017 SETR -> LIAISON RD403 RD402 DESSERTE PORTUAIRE VIADUC -> JOINT DE CHAUSSEE COTE GIBERVILLE. Jai essayé de modifié la partie mais je ny arrive pas.",
"sequence": 22,
"stage_id": [
8,
"Clôturé"
],
"kanban_state": "normal",
"create_date": "2020-04-27 06:21:36",
"write_date": "2024-10-03 13:10:50",
"date_start": "2020-04-20 14:52:00",
"date_end": false,
"date_assign": "2020-04-27 07:15:48",
"date_deadline": false,
"date_last_stage_update": "2020-04-27 07:24:40",
"project_id": [
3,
"Demandes"
],
"notes": false,
"planned_hours": 0.0,
"user_id": [
9,
"Youness BENDEQ"
],
"partner_id": [
8504,
"CONSEIL DEPARTEMENTAL DU CALVADOS (14), Guillaume LENEVEU"
],
"company_id": [
1,
"CBAO S.A.R.L."
],
"color": 0,
"displayed_image_id": false,
"parent_id": false,
"child_ids": [],
"email_from": "guillaume.leneveu@calvados.fr",
"email_cc": "",
"working_hours_open": 0.0,
"working_hours_close": 0.0,
"working_days_open": 0.0,
"working_days_close": 0.0,
"website_message_ids": [
11333
],
"remaining_hours": -0.5,
"effective_hours": 0.5,
"total_hours_spent": 0.5,
"progress": 0.0,
"subtask_effective_hours": 0.0,
"timesheet_ids": [
51
],
"priority": "0",
"code": "T0167",
"milestone_id": false,
"sale_line_id": false,
"sale_order_id": false,
"billable_type": "no",
"activity_ids": [],
"message_follower_ids": [
10972
],
"message_ids": [
11346,
11345,
11344,
11343,
11342,
11335,
11334,
11333,
11332
],
"message_main_attachment_id": [
32380,
"image001.png"
],
"failed_message_ids": [],
"rating_ids": [],
"rating_last_value": 0.0,
"access_token": "cd4fbf5c-27d3-48ed-8c9b-c07f20c3e2d4",
"create_uid": [
1,
"OdooBot"
],
"write_uid": [
1,
"OdooBot"
],
"x_CBAO_windows_maj_ID": false,
"x_CBAO_version_signalement": false,
"x_CBAO_version_correction": false,
"x_CBAO_DateCorrection": false,
"x_CBAO_Scoring_Facilite": 0,
"x_CBAO_Scoring_Importance": 0,
"x_CBAO_Scoring_Urgence": 0,
"x_CBAO_Scoring_Incidence": 0,
"x_CBAO_Scoring_Resultat": 0,
"x_CBAO_InformationsSup": false,
"kanban_state_label": "En cours",
"subtask_planned_hours": 0.0,
"manager_id": [
22,
"Fabien LAFAY"
],
"user_email": "youness@cbao.fr",
"attachment_ids": [],
"legend_blocked": "Bloqué",
"legend_done": "Prêt pour la prochaine étape",
"legend_normal": "En cours",
"subtask_project_id": [
3,
"Demandes"
],
"subtask_count": 0,
"analytic_account_active": true,
"allow_timesheets": true,
"use_milestones": false,
"show_time_control": "start",
"is_project_map_empty": true,
"activity_state": false,
"activity_user_id": false,
"activity_type_id": false,
"activity_date_deadline": false,
"activity_summary": false,
"message_is_follower": false,
"message_unread": false,
"message_unread_counter": 0,
"message_needaction": false,
"message_needaction_counter": 0,
"message_has_error": false,
"message_has_error_counter": 0,
"message_attachment_count": 2,
"rating_last_feedback": false,
"rating_count": 0,
"access_url": "/my/task/179",
"access_warning": "",
"display_name": "[T0167] Pb d'affaire/chantier/partie dans un programme d'essai",
"__last_update": "2024-10-03 13:10:50"
}

scripts/analyze_image_contexte.py
View File

@ -0,0 +1,384 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Script d'analyse d'image avec contexte pour les tickets de support.
Extrait des informations pertinentes d'une image en fonction du contexte du ticket.
"""
import os
import sys
import json
import argparse
import logging
from typing import Dict, Any, Optional
# Configuration du logger
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler("analyze_image.log"),
logging.StreamHandler()
]
)
logger = logging.getLogger("analyze_image")
try:
from llm import Pixtral
except ImportError:
logger.error("Module LLM non trouvé. Veuillez vous assurer que le répertoire parent est dans PYTHONPATH.")
sys.exit(1)
class ImageAnalyzer:
"""
Analyseur d'image qui extrait des informations pertinentes en fonction du contexte.
"""
def __init__(self, api_key: Optional[str] = None):
"""
Initialise l'analyseur d'image.
Args:
api_key: Clé API pour le modèle de vision
"""
self.llm = Pixtral(api_key=api_key)
# Configurer le modèle de vision
try:
self.llm.model = "pixtral-12b-2409"
self.llm.temperature = 0.3
self.llm.max_tokens = 1024
except Exception as e:
logger.warning(f"Impossible de configurer le modèle: {e}")
self.historique = []
def ajouter_historique(self, action: str, entree: str, resultat: str) -> None:
"""
Ajoute une entrée à l'historique des actions.
Args:
action: Type d'action effectuée
entree: Entrée de l'action
resultat: Résultat de l'action
"""
self.historique.append({
"action": action,
"entree": entree,
"resultat": resultat
})
def analyser_image(self, image_path: str, contexte: Optional[str] = None) -> Dict[str, Any]:
"""
Analyse une image en fonction du contexte donné.
Args:
image_path: Chemin vers l'image à analyser
contexte: Contexte du ticket pour aider à l'analyse
Returns:
Résultat de l'analyse de l'image
"""
if not os.path.exists(image_path):
logger.error(f"Image introuvable: {image_path}")
return {
"success": False,
"erreur": "Image introuvable",
"path": image_path
}
# Vérifier que le fichier est une image
_, extension = os.path.splitext(image_path)
if extension.lower() not in ['.jpg', '.jpeg', '.png', '.gif', '.bmp', '.webp']:
logger.error(f"Format de fichier non supporté: {extension}")
return {
"success": False,
"erreur": f"Format de fichier non supporté: {extension}",
"path": image_path
}
# Préparer le prompt pour l'analyse
prompt_base = """
Tu es un expert en analyse technique d'interfaces utilisateur et de captures d'écran.
Analyse cette image en détail et extrait les informations suivantes:
1. Type d'image: capture d'écran, photo, schéma, etc.
2. Interface visible: nom du logiciel, type d'interface, fonctionnalités visibles
3. Éléments importants: boutons, menus, messages d'erreur, données visibles
4. Problème potentiel: erreurs, anomalies, incohérences visibles
5. Contexte technique: environnement logiciel, version potentielle, plateforme
Pour les captures d'écran, identifie précisément:
- Le nom exact de la fenêtre/dialogue
- Les champs/formulaires visibles
- Les valeurs/données affichées
- Les messages d'erreur ou d'avertissement
- Les boutons/actions disponibles
Réponds de manière structurée en format Markdown avec des sections claires.
Sois précis et factuel, en te concentrant sur les éléments techniques visibles.
"""
# Ajouter le contexte si disponible
if contexte:
prompt_base += f"""
CONTEXTE DU TICKET:
{contexte}
En tenant compte du contexte ci-dessus, explique également:
- En quoi cette image est pertinente pour le problème décrit
- Quels éléments de l'image correspondent au problème mentionné
- Comment cette image peut aider à résoudre le problème
"""
try:
# Appeler le modèle de vision
try:
resultat = self.llm.analyze_image(image_path, prompt_base)
self.ajouter_historique("analyze_image", os.path.basename(image_path), "Analyse effectuée")
except Exception as e:
logger.error(f"Erreur lors de l'appel au modèle de vision: {str(e)}")
return {
"success": False,
"erreur": f"Erreur lors de l'appel au modèle de vision: {str(e)}",
"path": image_path
}
# Extraire le contenu de la réponse
contenu = resultat.get("content", "")
if not contenu:
logger.error("Réponse vide du modèle de vision")
return {
"success": False,
"erreur": "Réponse vide du modèle de vision",
"path": image_path
}
# Créer le résultat final
resultat_analyse = {
"success": True,
"path": image_path,
"analyse": contenu,
"contexte_fourni": bool(contexte)
}
# Essayer d'extraire des informations structurées à partir de l'analyse
try:
# Rechercher le type d'image
import re
type_match = re.search(r"Type d['']image\s*:\s*([^\n\.]+)", contenu, re.IGNORECASE)
if type_match:
resultat_analyse["type_image"] = type_match.group(1).strip()
# Rechercher l'interface
interface_match = re.search(r'Interface\s*:\s*([^\n\.]+)', contenu, re.IGNORECASE)
interface_match2 = re.search(r'Interface visible\s*:\s*([^\n\.]+)', contenu, re.IGNORECASE)
if interface_match:
resultat_analyse["interface"] = interface_match.group(1).strip()
elif interface_match2:
resultat_analyse["interface"] = interface_match2.group(1).strip()
# Rechercher le problème
probleme_match = re.search(r'Problème\s*:\s*([^\n\.]+)', contenu, re.IGNORECASE)
probleme_match2 = re.search(r'Problème potentiel\s*:\s*([^\n\.]+)', contenu, re.IGNORECASE)
if probleme_match:
resultat_analyse["probleme"] = probleme_match.group(1).strip()
elif probleme_match2:
resultat_analyse["probleme"] = probleme_match2.group(1).strip()
except Exception as e:
logger.warning(f"Impossible d'extraire des informations structurées: {str(e)}")
return resultat_analyse
except Exception as e:
logger.error(f"Erreur lors de l'analyse de l'image {image_path}: {str(e)}")
return {
"success": False,
"erreur": str(e),
"path": image_path
}
def generer_rapport_markdown(self, analyse: Dict[str, Any]) -> str:
"""
Génère un rapport Markdown à partir de l'analyse d'image.
Args:
analyse: Résultat de l'analyse d'image
Returns:
Rapport au format Markdown
"""
if not analyse.get("success", False):
return f"# Échec de l'analyse d'image\n\nErreur: {analyse.get('erreur', 'Inconnue')}\n\nImage: {analyse.get('path', 'Inconnue')}"
# En-tête du rapport
image_path = analyse.get("path", "Inconnue")
image_name = os.path.basename(image_path)
rapport = f"# Analyse de l'image: {image_name}\n\n"
# Ajouter l'analyse brute
rapport += analyse.get("analyse", "Aucune analyse disponible")
# Ajouter des métadonnées
rapport += "\n\n## Métadonnées\n\n"
rapport += f"- **Chemin de l'image**: `{image_path}`\n"
rapport += f"- **Contexte fourni**: {'Oui' if analyse.get('contexte_fourni', False) else 'Non'}\n"
if "type_image" in analyse:
rapport += f"- **Type d'image détecté**: {analyse['type_image']}\n"
if "interface" in analyse:
rapport += f"- **Interface identifiée**: {analyse['interface']}\n"
if "probleme" in analyse:
rapport += f"- **Problème détecté**: {analyse['probleme']}\n"
# Ajouter les paramètres du modèle
rapport += "\n## Paramètres du modèle\n\n"
rapport += f"- **Modèle**: {getattr(self.llm, 'model', 'pixtral-12b-2409')}\n"
rapport += f"- **Température**: {getattr(self.llm, 'temperature', 0.3)}\n"
return rapport
def charger_config():
"""
Charge la configuration depuis config.json.
Returns:
Configuration chargée
"""
config_path = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "config.json")
if not os.path.exists(config_path):
logger.warning(f"Fichier de configuration non trouvé: {config_path}")
return {"llm": {"api_key": None}}
try:
with open(config_path, 'r', encoding='utf-8') as f:
config = json.load(f)
return config
except Exception as e:
logger.error(f"Erreur lors du chargement de la configuration: {str(e)}")
return {"llm": {"api_key": None}}
def main():
"""
Point d'entrée du script.
"""
parser = argparse.ArgumentParser(description="Analyse une image en fonction du contexte du ticket.")
parser.add_argument("--image", "-i", required=True, help="Chemin vers l'image à analyser")
parser.add_argument("--contexte", "-c", help="Chemin vers un fichier contenant le contexte du ticket")
parser.add_argument("--ticket-info", "-t", help="Chemin vers un fichier ticket_info.json pour extraire le contexte")
parser.add_argument("--output", "-o", help="Chemin du fichier de sortie pour le rapport Markdown (par défaut: <image>_analyse.md)")
parser.add_argument("--format", "-f", choices=["json", "md", "both"], default="both",
help="Format de sortie (json, md, both)")
parser.add_argument("--verbose", "-v", action="store_true", help="Afficher plus d'informations")
args = parser.parse_args()
# Configurer le niveau de log
if args.verbose:
logging.getLogger().setLevel(logging.DEBUG)
# Vérifier que l'image existe
if not os.path.exists(args.image):
logger.error(f"Image non trouvée: {args.image}")
sys.exit(1)
# Charger le contexte si disponible
contexte = None
if args.contexte and os.path.exists(args.contexte):
try:
with open(args.contexte, 'r', encoding='utf-8') as f:
contexte = f.read()
logger.info(f"Contexte chargé depuis {args.contexte}")
except Exception as e:
logger.warning(f"Impossible de charger le contexte depuis {args.contexte}: {str(e)}")
# Extraire le contexte depuis ticket_info.json si disponible
if not contexte and args.ticket_info and os.path.exists(args.ticket_info):
try:
with open(args.ticket_info, 'r', encoding='utf-8') as f:
ticket_info = json.load(f)
if isinstance(ticket_info, dict):
contexte = f"""
TICKET: {ticket_info.get('code', 'Inconnu')} - {ticket_info.get('name', 'Sans titre')}
DESCRIPTION:
{ticket_info.get('description', 'Aucune description')}
"""
logger.info(f"Contexte extrait depuis {args.ticket_info}")
except Exception as e:
logger.warning(f"Impossible de charger le contexte depuis {args.ticket_info}: {str(e)}")
# Déterminer les chemins de sortie
if not args.output:
output_base = os.path.splitext(args.image)[0]
output_md = f"{output_base}_analyse.md"
output_json = f"{output_base}_analyse.json"
else:
output_base = os.path.splitext(args.output)[0]
output_md = f"{output_base}.md"
output_json = f"{output_base}.json"
# Charger la configuration
config = charger_config()
api_key = config.get("llm", {}).get("api_key")
# Initialiser l'analyseur d'image
analyzer = ImageAnalyzer(api_key=api_key)
try:
# Analyser l'image
resultat = analyzer.analyser_image(args.image, contexte)
if not resultat.get("success", False):
logger.error(f"Échec de l'analyse: {resultat.get('erreur', 'Erreur inconnue')}")
sys.exit(1)
# Générer le rapport Markdown
rapport_md = analyzer.generer_rapport_markdown(resultat)
# Sauvegarder les résultats selon le format demandé
if args.format in ["json", "both"]:
with open(output_json, 'w', encoding='utf-8') as f:
json.dump(resultat, f, indent=2, ensure_ascii=False)
logger.info(f"Résultat JSON sauvegardé: {output_json}")
if args.format in ["md", "both"]:
with open(output_md, 'w', encoding='utf-8') as f:
f.write(rapport_md)
logger.info(f"Rapport Markdown sauvegardé: {output_md}")
# Afficher un résumé
print("\nRésumé de l'analyse:")
print(f"Image: {os.path.basename(args.image)}")
if "type_image" in resultat:
print(f"Type d'image: {resultat['type_image']}")
if "interface" in resultat:
print(f"Interface: {resultat['interface']}")
if "probleme" in resultat:
print(f"Problème: {resultat['probleme']}")
if args.format in ["json", "both"]:
print(f"Résultat JSON: {output_json}")
if args.format in ["md", "both"]:
print(f"Rapport Markdown: {output_md}")
except Exception as e:
logger.error(f"Erreur lors de l'analyse: {str(e)}")
import traceback
logger.debug(f"Détails: {traceback.format_exc()}")
sys.exit(1)
if __name__ == "__main__":
main()

scripts/processus_complet.py
View File

@ -0,0 +1,383 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Script principal d'orchestration du processus d'analyse de tickets.
Ce script permet d'exécuter toutes les étapes du traitement ou des étapes individuelles.
"""
import os
import sys
import json
import argparse
import subprocess
import logging
from typing import Dict, List, Any, Optional
import shutil
# Configuration du logger
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler("processus_complet.log"),
logging.StreamHandler()
]
)
logger = logging.getLogger("processus_complet")
def executer_commande(commande: List[str], description: str) -> bool:
"""
Exécute une commande système et gère les erreurs.
Args:
commande: Liste des éléments de la commande à exécuter
description: Description de la commande pour le journal
Returns:
True si la commande s'est exécutée avec succès, False sinon
"""
try:
logger.info(f"Exécution: {description}")
logger.debug(f"Commande: {' '.join(commande)}")
resultat = subprocess.run(commande, check=True, capture_output=True, text=True)
logger.info(f"Succès: {description}")
logger.debug(f"Sortie: {resultat.stdout}")
return True
except subprocess.CalledProcessError as e:
logger.error(f"Échec: {description}")
logger.error(f"Code de sortie: {e.returncode}")
logger.error(f"Erreur: {e.stderr}")
return False
except Exception as e:
logger.error(f"Erreur lors de l'exécution de la commande: {str(e)}")
return False
def etape_extraction(ticket_dir: str, output_dir: str) -> bool:
"""
Exécute l'étape d'extraction des données du ticket.
Args:
ticket_dir: Répertoire contenant les données brutes du ticket
output_dir: Répertoire sauvegarder les données extraites
Returns:
True si l'extraction a réussi, False sinon
"""
script_path = os.path.join("scripts", "extract_ticket.py")
if not os.path.exists(script_path):
logger.error(f"Script d'extraction non trouvé: {script_path}")
return False
commande = [
sys.executable,
script_path,
ticket_dir,
"--output-dir", output_dir,
"--verbose"
]
return executer_commande(commande, "Extraction des données du ticket")
def etape_filtrage_images(ticket_dir: str) -> bool:
"""
Exécute l'étape de filtrage des images pertinentes.
Args:
ticket_dir: Répertoire contenant les données du ticket
Returns:
True si le filtrage a réussi, False sinon
"""
script_path = os.path.join("scripts", "filter_images.py")
if not os.path.exists(script_path):
logger.error(f"Script de filtrage d'images non trouvé: {script_path}")
return False
commande = [
sys.executable,
script_path,
"--dossier-ticket", ticket_dir,
"--output", os.path.join(ticket_dir, "filter_report.json"),
"--verbose"
]
return executer_commande(commande, "Filtrage des images pertinentes")
def etape_analyse_images(ticket_dir: str, rapport_filtrage: str) -> bool:
"""
Exécute l'étape d'analyse des images pertinentes.
Args:
ticket_dir: Répertoire contenant les données du ticket
rapport_filtrage: Chemin vers le rapport de filtrage d'images
Returns:
True si l'analyse a réussi, False sinon
"""
script_path = os.path.join("scripts", "analyze_image_contexte.py")
ticket_info_path = os.path.join(ticket_dir, "ticket_info.json")
if not os.path.exists(script_path):
logger.error(f"Script d'analyse d'images non trouvé: {script_path}")
return False
# Charger le rapport de filtrage
try:
with open(rapport_filtrage, 'r', encoding='utf-8') as f:
filtre_data = json.load(f)
images_pertinentes = filtre_data.get("images_pertinentes", [])
if not images_pertinentes:
logger.info("Aucune image pertinente à analyser")
return True
except Exception as e:
logger.error(f"Erreur lors du chargement du rapport de filtrage: {str(e)}")
return False
# Créer le répertoire pour les rapports d'analyse d'images
images_analyses_dir = os.path.join(ticket_dir, "images_analyses")
os.makedirs(images_analyses_dir, exist_ok=True)
# Analyser chaque image pertinente
succes = True
for image_path in images_pertinentes:
image_name = os.path.basename(image_path)
output_base = os.path.join(images_analyses_dir, image_name)
commande = [
sys.executable,
script_path,
"--image", image_path,
"--ticket-info", ticket_info_path,
"--output", output_base + "_analyse",
"--verbose"
]
if not executer_commande(commande, f"Analyse de l'image {image_name}"):
succes = False
return succes
def etape_analyse_ticket(ticket_dir: str, rapport_filtrage: str) -> bool:
"""
Exécute l'étape d'analyse du contenu du ticket.
Args:
ticket_dir: Répertoire contenant les données du ticket
rapport_filtrage: Chemin vers le rapport de filtrage d'images
Returns:
True si l'analyse a réussi, False sinon
"""
script_path = os.path.join("scripts", "analyze_ticket.py")
messages_path = os.path.join(ticket_dir, "messages.json")
if not os.path.exists(script_path):
logger.error(f"Script d'analyse de ticket non trouvé: {script_path}")
return False
commande = [
sys.executable,
script_path,
"--messages", messages_path,
"--images-rapport", rapport_filtrage,
"--output", ticket_dir,
"--verbose"
]
return executer_commande(commande, "Analyse du contenu du ticket")
def etape_questions_reponses(ticket_dir: str) -> bool:
"""
Exécute l'étape d'extraction des questions et réponses.
Args:
ticket_dir: Répertoire contenant les données du ticket
Returns:
True si l'extraction a réussi, False sinon
"""
script_path = os.path.join("scripts", "extract_question_reponse.py")
messages_path = os.path.join(ticket_dir, "messages.json")
output_path = os.path.join(ticket_dir, "questions_reponses.md")
if not os.path.exists(script_path):
logger.error(f"Script d'extraction des questions-réponses non trouvé: {script_path}")
return False
commande = [
sys.executable,
script_path,
"--messages", messages_path,
"--output", output_path,
"--verbose"
]
return executer_commande(commande, "Extraction des questions et réponses")
def processus_complet(ticket_code: str, dossier_source: str = None, dossier_sortie: str = None) -> bool:
"""
Exécute le processus complet d'analyse d'un ticket.
Args:
ticket_code: Code du ticket à analyser
dossier_source: Dossier contenant les tickets bruts (par défaut: output/)
dossier_sortie: Dossier sauvegarder les résultats (par défaut: output_processed/)
Returns:
True si le processus s'est exécuté avec succès, False sinon
"""
# Définir les dossiers par défaut si non spécifiés
if dossier_source is None:
dossier_source = "output"
if dossier_sortie is None:
dossier_sortie = "output_processed"
# Construire les chemins
ticket_dir_source = os.path.join(dossier_source, f"ticket_{ticket_code}")
ticket_dir_sortie = os.path.join(dossier_sortie, f"ticket_{ticket_code}")
# Vérifier que le dossier source existe
if not os.path.exists(ticket_dir_source):
logger.error(f"Dossier source non trouvé: {ticket_dir_source}")
return False
# Créer le dossier de sortie s'il n'existe pas
os.makedirs(ticket_dir_sortie, exist_ok=True)
# 1. Extraction des données
if not etape_extraction(ticket_dir_source, ticket_dir_sortie):
logger.error("Échec de l'étape d'extraction")
return False
# 2. Filtrage des images
if not etape_filtrage_images(ticket_dir_sortie):
logger.error("Échec de l'étape de filtrage des images")
return False
# 3. Analyse des images pertinentes
rapport_filtrage = os.path.join(ticket_dir_sortie, "filter_report.json")
if not etape_analyse_images(ticket_dir_sortie, rapport_filtrage):
logger.error("Échec de l'étape d'analyse des images")
return False
# 4. Analyse du contenu du ticket
if not etape_analyse_ticket(ticket_dir_sortie, rapport_filtrage):
logger.error("Échec de l'étape d'analyse du ticket")
return False
# 5. Extraction des questions et réponses
if not etape_questions_reponses(ticket_dir_sortie):
logger.error("Échec de l'étape d'extraction des questions et réponses")
return False
logger.info(f"Processus complet terminé avec succès pour le ticket {ticket_code}")
logger.info(f"Résultats disponibles dans: {ticket_dir_sortie}")
return True
def main():
"""
Point d'entrée du script.
"""
parser = argparse.ArgumentParser(description="Exécute le processus d'analyse de tickets de support.")
parser.add_argument("--ticket", "-t", required=True, help="Code du ticket à analyser (ex: T0167)")
parser.add_argument("--source", "-s", help="Dossier source contenant les tickets bruts (par défaut: output/)")
parser.add_argument("--output", "-o", help="Dossier de sortie pour les résultats (par défaut: output_processed/)")
parser.add_argument("--etapes", "-e", choices=["extraction", "filtrage", "analyse_images", "analyse_ticket", "questions_reponses", "tout"],
default="tout", help="Étapes à exécuter")
parser.add_argument("--verbose", "-v", action="store_true", help="Afficher plus d'informations")
args = parser.parse_args()
# Configurer le niveau de log
if args.verbose:
logging.getLogger().setLevel(logging.DEBUG)
# Récupérer le code du ticket
ticket_code = args.ticket
if ticket_code.startswith("ticket_"):
ticket_code = ticket_code[7:]
# Définir les dossiers source et sortie
dossier_source = args.source or "output"
dossier_sortie = args.output or "output_processed"
# Construire les chemins
ticket_dir_source = os.path.join(dossier_source, f"ticket_{ticket_code}")
ticket_dir_sortie = os.path.join(dossier_sortie, f"ticket_{ticket_code}")
# Vérifier que le dossier source existe
if not os.path.exists(ticket_dir_source):
logger.error(f"Dossier source non trouvé: {ticket_dir_source}")
sys.exit(1)
# Exécuter les étapes demandées
if args.etapes == "tout":
if processus_complet(ticket_code, dossier_source, dossier_sortie):
print(f"Processus complet terminé avec succès pour le ticket {ticket_code}")
print(f"Résultats disponibles dans: {ticket_dir_sortie}")
else:
print(f"Échec du processus pour le ticket {ticket_code}")
sys.exit(1)
else:
# Créer le dossier de sortie s'il n'existe pas
os.makedirs(ticket_dir_sortie, exist_ok=True)
# Exécuter l'étape spécifique
if args.etapes == "extraction":
if etape_extraction(ticket_dir_source, ticket_dir_sortie):
print("Étape d'extraction terminée avec succès")
else:
print("Échec de l'étape d'extraction")
sys.exit(1)
elif args.etapes == "filtrage":
if etape_filtrage_images(ticket_dir_sortie):
print("Étape de filtrage des images terminée avec succès")
else:
print("Échec de l'étape de filtrage des images")
sys.exit(1)
elif args.etapes == "analyse_images":
rapport_filtrage = os.path.join(ticket_dir_sortie, "filter_report.json")
if not os.path.exists(rapport_filtrage):
logger.error(f"Rapport de filtrage non trouvé: {rapport_filtrage}")
print("Veuillez d'abord exécuter l'étape de filtrage des images")
sys.exit(1)
if etape_analyse_images(ticket_dir_sortie, rapport_filtrage):
print("Étape d'analyse des images terminée avec succès")
else:
print("Échec de l'étape d'analyse des images")
sys.exit(1)
elif args.etapes == "analyse_ticket":
rapport_filtrage = os.path.join(ticket_dir_sortie, "filter_report.json")
if not os.path.exists(rapport_filtrage):
logger.error(f"Rapport de filtrage non trouvé: {rapport_filtrage}")
print("Veuillez d'abord exécuter l'étape de filtrage des images")
sys.exit(1)
if etape_analyse_ticket(ticket_dir_sortie, rapport_filtrage):
print("Étape d'analyse du ticket terminée avec succès")
else:
print("Échec de l'étape d'analyse du ticket")
sys.exit(1)
elif args.etapes == "questions_reponses":
if etape_questions_reponses(ticket_dir_sortie):
print("Étape d'extraction des questions et réponses terminée avec succès")
else:
print("Échec de l'étape d'extraction des questions et réponses")
sys.exit(1)
if __name__ == "__main__":
main()

View File

@ -0,0 +1 @@
This is a dummy package designed to prevent namesquatting on PyPI. You should install `beautifulsoup4 <https://pypi.python.org/pypi/beautifulsoup4>`_ instead.

View File

@ -0,0 +1,123 @@
Metadata-Version: 2.4
Name: beautifulsoup4
Version: 4.13.3
Summary: Screen-scraping library
Project-URL: Download, https://www.crummy.com/software/BeautifulSoup/bs4/download/
Project-URL: Homepage, https://www.crummy.com/software/BeautifulSoup/bs4/
Author-email: Leonard Richardson <leonardr@segfault.org>
License: MIT License
License-File: AUTHORS
License-File: LICENSE
Keywords: HTML,XML,parse,soup
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Text Processing :: Markup :: HTML
Classifier: Topic :: Text Processing :: Markup :: SGML
Classifier: Topic :: Text Processing :: Markup :: XML
Requires-Python: >=3.7.0
Requires-Dist: soupsieve>1.2
Requires-Dist: typing-extensions>=4.0.0
Provides-Extra: cchardet
Requires-Dist: cchardet; extra == 'cchardet'
Provides-Extra: chardet
Requires-Dist: chardet; extra == 'chardet'
Provides-Extra: charset-normalizer
Requires-Dist: charset-normalizer; extra == 'charset-normalizer'
Provides-Extra: html5lib
Requires-Dist: html5lib; extra == 'html5lib'
Provides-Extra: lxml
Requires-Dist: lxml; extra == 'lxml'
Description-Content-Type: text/markdown
Beautiful Soup is a library that makes it easy to scrape information
from web pages. It sits atop an HTML or XML parser, providing Pythonic
idioms for iterating, searching, and modifying the parse tree.
# Quick start
```
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup("<p>Some<b>bad<i>HTML")
>>> print(soup.prettify())
<html>
<body>
<p>
Some
<b>
bad
<i>
HTML
</i>
</b>
</p>
</body>
</html>
>>> soup.find(string="bad")
'bad'
>>> soup.i
<i>HTML</i>
#
>>> soup = BeautifulSoup("<tag1>Some<tag2/>bad<tag3>XML", "xml")
#
>>> print(soup.prettify())
<?xml version="1.0" encoding="utf-8"?>
<tag1>
Some
<tag2/>
bad
<tag3>
XML
</tag3>
</tag1>
```
To go beyond the basics, [comprehensive documentation is available](https://www.crummy.com/software/BeautifulSoup/bs4/doc/).
# Links
* [Homepage](https://www.crummy.com/software/BeautifulSoup/bs4/)
* [Documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)
* [Discussion group](https://groups.google.com/group/beautifulsoup/)
* [Development](https://code.launchpad.net/beautifulsoup/)
* [Bug tracker](https://bugs.launchpad.net/beautifulsoup/)
* [Complete changelog](https://git.launchpad.net/beautifulsoup/tree/CHANGELOG)
# Note on Python 2 sunsetting
Beautiful Soup's support for Python 2 was discontinued on December 31,
2020: one year after the sunset date for Python 2 itself. From this
point onward, new Beautiful Soup development will exclusively target
Python 3. The final release of Beautiful Soup 4 to support Python 2
was 4.9.3.
# Supporting the project
If you use Beautiful Soup as part of your professional work, please consider a
[Tidelift subscription](https://tidelift.com/subscription/pkg/pypi-beautifulsoup4?utm_source=pypi-beautifulsoup4&utm_medium=referral&utm_campaign=readme).
This will support many of the free software projects your organization
depends on, not just Beautiful Soup.
If you use Beautiful Soup for personal projects, the best way to say
thank you is to read
[Tool Safety](https://www.crummy.com/software/BeautifulSoup/zine/), a zine I
wrote about what Beautiful Soup has taught me about software
development.
# Building the documentation
The bs4/doc/ directory contains full documentation in Sphinx
format. Run `make html` in that directory to create HTML
documentation.
# Running the unit tests
Beautiful Soup supports unit test discovery using Pytest:
```
$ pytest
```

View File

@ -0,0 +1,89 @@
beautifulsoup4-4.13.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
beautifulsoup4-4.13.3.dist-info/METADATA,sha256=o692i819qmuScSS6UxoBFAi2xPSl8bk2V6TuQ3zBofs,3809
beautifulsoup4-4.13.3.dist-info/RECORD,,
beautifulsoup4-4.13.3.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
beautifulsoup4-4.13.3.dist-info/licenses/AUTHORS,sha256=6-a5uw17L-xMAg7-R3iVPGKH_OwwacpjRkuOVPjAeyw,2198
beautifulsoup4-4.13.3.dist-info/licenses/LICENSE,sha256=VbTY1LHlvIbRDvrJG3TIe8t3UmsPW57a-LnNKtxzl7I,1441
bs4/__init__.py,sha256=-jvrE9GBtzsOF3wIrIOALQTqu99mf9_gEhNFJMCQLeg,44212
bs4/__pycache__/__init__.cpython-312.pyc,,
bs4/__pycache__/_deprecation.cpython-312.pyc,,
bs4/__pycache__/_typing.cpython-312.pyc,,
bs4/__pycache__/_warnings.cpython-312.pyc,,
bs4/__pycache__/css.cpython-312.pyc,,
bs4/__pycache__/dammit.cpython-312.pyc,,
bs4/__pycache__/diagnose.cpython-312.pyc,,
bs4/__pycache__/element.cpython-312.pyc,,
bs4/__pycache__/exceptions.cpython-312.pyc,,
bs4/__pycache__/filter.cpython-312.pyc,,
bs4/__pycache__/formatter.cpython-312.pyc,,
bs4/_deprecation.py,sha256=ucZjfBAUF1B0f5ldNIIhlkHsYjHtvwELWlE3_pAR6Vs,2394
bs4/_typing.py,sha256=3FgPPPrdsTa-kvn1R36o1k_2SfilcUWm4M9i7G4qFl8,7118
bs4/_warnings.py,sha256=ZuOETgcnEbZgw2N0nnNXn6wvtrn2ut7AF0d98bvkMFc,4711
bs4/builder/__init__.py,sha256=TYAKmGFuVfTsI53reHijcZKETnPuvse57KZ6LsZsJRo,31130
bs4/builder/__pycache__/__init__.cpython-312.pyc,,
bs4/builder/__pycache__/_html5lib.cpython-312.pyc,,
bs4/builder/__pycache__/_htmlparser.cpython-312.pyc,,
bs4/builder/__pycache__/_lxml.cpython-312.pyc,,
bs4/builder/_html5lib.py,sha256=3MXq29SYg9XoS9gu2hgTDU02IQkv8kIBx3rW1QWY3fg,22846
bs4/builder/_htmlparser.py,sha256=cu9PFkxkqVIIe9nU3fVy-JJhINEhY8cGbsuCwZCnQCA,17872
bs4/builder/_lxml.py,sha256=XRzCA4WzvIUjJk9_U4kWzMBvGokr_UaIvoGUmtLtTYI,18538
bs4/css.py,sha256=XGQq7HQUDyYEbDorFMGIGek7QGPiFuZYnvNEQ59GyxM,12685
bs4/dammit.py,sha256=oHd1elJ44kMobBGSQRuG7Wln6M-BLz1unOuUscaL9h0,51472
bs4/diagnose.py,sha256=zy7_GPQHsTtNf8s10WWIRcC5xH5_8LKs295Aa7iFUyI,7832
bs4/element.py,sha256=8CXiRqz2DZJyga2igCVGaXdP7urNEDvDnsRid3SNNw4,109331
bs4/exceptions.py,sha256=Q9FOadNe8QRvzDMaKSXe2Wtl8JK_oAZW7mbFZBVP_GE,951
bs4/filter.py,sha256=2_ydSe978oLVmVyNLBi09Cc1VJEXYVjuO6K4ALq6XFk,28819
bs4/formatter.py,sha256=5O4gBxTTi5TLU6TdqsgYI9Io0Gc_6-oCAWpfHI3Thn0,10464
bs4/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
bs4/tests/__init__.py,sha256=Heh-lB8w8mzpaWcgs7MRwkBnDcf1YxAvqvePmsej1Pc,52268
bs4/tests/__pycache__/__init__.cpython-312.pyc,,
bs4/tests/__pycache__/test_builder.cpython-312.pyc,,
bs4/tests/__pycache__/test_builder_registry.cpython-312.pyc,,
bs4/tests/__pycache__/test_css.cpython-312.pyc,,
bs4/tests/__pycache__/test_dammit.cpython-312.pyc,,
bs4/tests/__pycache__/test_element.cpython-312.pyc,,
bs4/tests/__pycache__/test_filter.cpython-312.pyc,,
bs4/tests/__pycache__/test_formatter.cpython-312.pyc,,
bs4/tests/__pycache__/test_fuzz.cpython-312.pyc,,
bs4/tests/__pycache__/test_html5lib.cpython-312.pyc,,
bs4/tests/__pycache__/test_htmlparser.cpython-312.pyc,,
bs4/tests/__pycache__/test_lxml.cpython-312.pyc,,
bs4/tests/__pycache__/test_navigablestring.cpython-312.pyc,,
bs4/tests/__pycache__/test_pageelement.cpython-312.pyc,,
bs4/tests/__pycache__/test_soup.cpython-312.pyc,,
bs4/tests/__pycache__/test_tag.cpython-312.pyc,,
bs4/tests/__pycache__/test_tree.cpython-312.pyc,,
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4670634698080256.testcase,sha256=yUdXkbpNK7LVOQ0LBHMoqZ1rWaBfSXWytoO_xdSm7Ho,15
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4818336571064320.testcase,sha256=Uv_dx4a43TSfoNkjU-jHW2nSXkqHFg4XdAw7SWVObUk,23
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4999465949331456.testcase,sha256=OEyVA0Ej4FxswOElrUNt0In4s4YhrmtaxE_NHGZvGtg,30
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5000587759190016.testcase,sha256=G4vpNBOz-RwMpi6ewEgNEa13zX0sXhmL7VHOyIcdKVQ,15347
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5167584867909632.testcase,sha256=3d8z65o4p7Rur-RmCHoOjzqaYQ8EAtjmiBYTHNyAdl4,19469
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5270998950477824.testcase,sha256=NfGIlit1k40Ip3mlnBkYOkIDJX6gHtjlErwl7gsBjAQ,12
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5375146639360000.testcase,sha256=xy4i1U0nhFHcnyc5pRKS6JRMvuoCNUur-Scor6UxIGw,4317
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5492400320282624.testcase,sha256=Q-UTYpQBUsWoMgIUspUlzveSI-41s4ABC3jajRb-K0o,11502
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5703933063462912.testcase,sha256=2bq3S8KxZgk8EajLReHD8m4_0Lj_nrkyJAxB_z_U0D0,5
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5843991618256896.testcase,sha256=MZDu31LPLfgu6jP9IZkrlwNes3f_sL8WFP5BChkUKdY,35
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5984173902397440.testcase,sha256=w58r-s6besG5JwPXpnz37W2YTj9-_qxFbk6hiEnKeIQ,51495
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6124268085182464.testcase,sha256=q8rkdMECEXKcqVhOf5zWHkSBTQeOPt0JiLg2TZiPCuk,10380
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6241471367348224.testcase,sha256=QfzoOxKwNuqG-4xIrea6MOQLXhfAAOQJ0r9u-J6kSNs,19
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6306874195312640.testcase,sha256=MJ2pHFuuCQUiQz1Kor2sof7LWeRERQ6QK43YNqQHg9o,47
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6450958476902400.testcase,sha256=EItOpSdeD4ewK-qgJ9vtxennwn_huguzXgctrUT7fqE,3546
bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6600557255327744.testcase,sha256=a2aJTG4FceGSJXsjtxoS8S4jk_8rZsS3aznLkeO2_dY,124
bs4/tests/fuzz/crash-0d306a50c8ed8bcd0785b67000fcd5dea1d33f08.testcase,sha256=jRFRtCKlP3-3EDLc_iVRTcE6JNymv0rYcVM6qRaPrxI,2607
bs4/tests/fuzz/crash-ffbdfa8a2b26f13537b68d3794b0478a4090ee4a.testcase,sha256=7NsdCiXWAhNkmoW1pvF7rbZExyLAQIWtDtSHXIsH6YU,103
bs4/tests/test_builder.py,sha256=BBMBirb4mb-fVdJj4dxQCxrdcjQeulKSKBFrPFVpVOk,1095
bs4/tests/test_builder_registry.py,sha256=tpJ5Niva_cF49SdzIb1gMo0W4Tiodr8BYSOE3O6P_g8,5064
bs4/tests/test_css.py,sha256=T_HCMzpe6hTr8d2YFXm0DScr8gT8d6h0MYlhZfo6A4U,18625
bs4/tests/test_dammit.py,sha256=TQCVe6kKVYSuYjwTtIvIaOYYmWYPMnR_3PK45kimLg4,17840
bs4/tests/test_element.py,sha256=u7FbTtKE6pYJetD1PgS3fCU1-QQXfB7GaLwfI3s4ROY,4373
bs4/tests/test_filter.py,sha256=Sie2l-vepWTAqlXJJpG0Qp4HD8HHSi2TC1XymCxws70,27032
bs4/tests/test_formatter.py,sha256=a6TaeNOVeg_ZYseiP7atmFyYJkQJqlk-jlVxMlyJC2o,6943
bs4/tests/test_fuzz.py,sha256=zyaoWgCt8hnRkXecBYM9x91fI_Ao9eQUcsBi76ooJ08,7123
bs4/tests/test_html5lib.py,sha256=ljMOAds__k9zhfT4jVnxxhZkLEggaT7wqDexzDNwus4,9206
bs4/tests/test_htmlparser.py,sha256=iDHEI69GcisNP48BeHdLAWlqPGhrBwxftnUM8_3nsR4,6662
bs4/tests/test_lxml.py,sha256=4fZIsNVbm2zdRQFNNwD-lqwf_QtUtiU4QbtLXISQZBw,7453
bs4/tests/test_navigablestring.py,sha256=ntfnbp8-sRAOoCCVbm4cCXatS7kmCOaIRFDj-v5-l0s,5096
bs4/tests/test_pageelement.py,sha256=lAw-sVP3zJX0VdHXXN1Ia3tci5dgK10Gac5o9G46IIk,16195
bs4/tests/test_soup.py,sha256=I-mhNheo2-PTvfJToDI43EO4RmGlpKJsYOS19YoQ7-8,22669
bs4/tests/test_tag.py,sha256=ue32hxQs_a1cMuzyu7MNjK42t0IOGMA6POPLIArMOts,9690
bs4/tests/test_tree.py,sha256=vgUa6x8AJFEvHQ7RQu0973wrsLCRdRpdtq4oZAa_ANA,54839

View File

@ -0,0 +1,4 @@
Wheel-Version: 1.0
Generator: hatchling 1.27.0
Root-Is-Purelib: true
Tag: py3-none-any

View File

@ -0,0 +1,49 @@
Behold, mortal, the origins of Beautiful Soup...
================================================
Leonard Richardson is the primary maintainer.
Aaron DeVore, Isaac Muse and Chris Papademetrious have made
significant contributions to the code base.
Mark Pilgrim provided the encoding detection code that forms the base
of UnicodeDammit.
Thomas Kluyver and Ezio Melotti finished the work of getting Beautiful
Soup 4 working under Python 3.
Simon Willison wrote soupselect, which was used to make Beautiful Soup
support CSS selectors. Isaac Muse wrote SoupSieve, which made it
possible to _remove_ the CSS selector code from Beautiful Soup.
Sam Ruby helped with a lot of edge cases.
Jonathan Ellis was awarded the prestigious Beau Potage D'Or for his
work in solving the nestable tags conundrum.
An incomplete list of people have contributed patches to Beautiful
Soup:
Istvan Albert, Andrew Lin, Anthony Baxter, Oliver Beattie, Andrew
Boyko, Tony Chang, Francisco Canas, "Delong", Zephyr Fang, Fuzzy,
Roman Gaufman, Yoni Gilad, Richie Hindle, Toshihiro Kamiya, Peteris
Krumins, Kent Johnson, Marek Kapolka, Andreas Kostyrka, Roel Kramer,
Ben Last, Robert Leftwich, Stefaan Lippens, "liquider", Staffan
Malmgren, Ksenia Marasanova, JP Moins, Adam Monsen, John Nagle, "Jon",
Ed Oskiewicz, Martijn Peters, Greg Phillips, Giles Radford, Stefano
Revera, Arthur Rudolph, Marko Samastur, James Salter, Jouni Seppänen,
Alexander Schmolck, Tim Shirley, Geoffrey Sneddon, Ville Skyttä,
"Vikas", Jens Svalgaard, Andy Theyers, Eric Weiser, Glyn Webster, John
Wiseman, Paul Wright, Danny Yoo
An incomplete list of people who made suggestions or found bugs or
found ways to break Beautiful Soup:
Hanno Böck, Matteo Bertini, Chris Curvey, Simon Cusack, Bruce Eckel,
Matt Ernst, Michael Foord, Tom Harris, Bill de hOra, Donald Howes,
Matt Patterson, Scott Roberts, Steve Strassmann, Mike Williams,
warchild at redho dot com, Sami Kuisma, Carlos Rocha, Bob Hutchison,
Joren Mc, Michal Migurski, John Kleven, Tim Heaney, Tripp Lilley, Ed
Summers, Dennis Sutch, Chris Smith, Aaron Swartz, Stuart
Turner, Greg Edwards, Kevin J Kalupson, Nikos Kouremenos, Artur de
Sousa Rocha, Yichun Wei, Per Vognsen

View File

@ -0,0 +1,31 @@
Beautiful Soup is made available under the MIT license:
Copyright (c) Leonard Richardson
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Beautiful Soup incorporates code from the html5lib library, which is
also made available under the MIT license. Copyright (c) James Graham
and other contributors
Beautiful Soup has an optional dependency on the soupsieve library,
which is also made available under the MIT license. Copyright (c)
Isaac Muse

View File

@ -0,0 +1 @@
pip

View File

@ -0,0 +1,10 @@
Metadata-Version: 2.1
Name: bs4
Version: 0.0.2
Summary: Dummy package for Beautiful Soup (beautifulsoup4)
Author-email: Leonard Richardson <leonardr@segfault.org>
License: MIT License
Requires-Dist: beautifulsoup4
Description-Content-Type: text/x-rst
This is a dummy package designed to prevent namesquatting on PyPI. You should install `beautifulsoup4 <https://pypi.python.org/pypi/beautifulsoup4>`_ instead.

View File

@ -0,0 +1,6 @@
README.rst,sha256=KMs4D-t40JC-oge8vGS3O5gueksurGqAIFxPtHZAMXQ,159
bs4-0.0.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
bs4-0.0.2.dist-info/METADATA,sha256=GEwOSFCOYLu11XQR3O2dMO7ZTpKFZpGoIUG0gkFVgA8,411
bs4-0.0.2.dist-info/RECORD,,
bs4-0.0.2.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
bs4-0.0.2.dist-info/WHEEL,sha256=VYAwk8D_V6zmIA2XKK-k7Fem_KAtVk3hugaRru3yjGc,105

View File

@ -0,0 +1,5 @@
Wheel-Version: 1.0
Generator: hatchling 1.21.0
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any

File diff suppressed because it is too large

bs4/_deprecation.py
View File

@ -0,0 +1,80 @@
"""Helper functions for deprecation.
This interface is itself unstable and may change without warning. Do
not use these functions yourself, even as a joke. The underscores are
there for a reason. No support will be given.
In particular, most of this will go away without warning once
Beautiful Soup drops support for Python 3.11, since Python 3.12
defines a `@typing.deprecated()
decorator. <https://peps.python.org/pep-0702/>`_
"""
import functools
import warnings
from typing import (
Any,
Callable,
)
def _deprecated_alias(old_name: str, new_name: str, version: str):
"""Alias one attribute name to another for backward compatibility
:meta private:
"""
@property
def alias(self) -> Any:
":meta private:"
warnings.warn(
f"Access to deprecated property {old_name}. (Replaced by {new_name}) -- Deprecated since version {version}.",
DeprecationWarning,
stacklevel=2,
)
return getattr(self, new_name)
@alias.setter
def alias(self, value: str) -> None:
":meta private:"
warnings.warn(
f"Write to deprecated property {old_name}. (Replaced by {new_name}) -- Deprecated since version {version}.",
DeprecationWarning,
stacklevel=2,
)
return setattr(self, new_name, value)
return alias
def _deprecated_function_alias(
old_name: str, new_name: str, version: str
) -> Callable[[Any], Any]:
def alias(self, *args: Any, **kwargs: Any) -> Any:
":meta private:"
warnings.warn(
f"Call to deprecated method {old_name}. (Replaced by {new_name}) -- Deprecated since version {version}.",
DeprecationWarning,
stacklevel=2,
)
return getattr(self, new_name)(*args, **kwargs)
return alias
def _deprecated(replaced_by: str, version: str) -> Callable:
def deprecate(func: Callable) -> Callable:
@functools.wraps(func)
def with_warning(*args: Any, **kwargs: Any) -> Any:
":meta private:"
warnings.warn(
f"Call to deprecated method {func.__name__}. (Replaced by {replaced_by}) -- Deprecated since version {version}.",
DeprecationWarning,
stacklevel=2,
)
return func(*args, **kwargs)
return with_warning
return deprecate
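
The helpers above are private ("do not use these functions yourself"), but a minimal sketch shows the mechanics, assuming the module ships as `bs4/_deprecation.py`; the `Widget` class and its attribute names are hypothetical, not part of bs4:

```python
import warnings

from bs4._deprecation import _deprecated_alias  # private helper defined above

class Widget:
    """Hypothetical class; 'fooBar' and 'foo_bar' are made-up names."""

    def __init__(self) -> None:
        self.foo_bar = 1

    # Reads and writes to the old name are forwarded to the new one,
    # each emitting a DeprecationWarning.
    fooBar = _deprecated_alias("fooBar", "foo_bar", "4.13.0")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert Widget().fooBar == 1  # warns, then returns foo_bar
    assert issubclass(caught[-1].category, DeprecationWarning)
```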

@@ -0,0 +1,196 @@
# Custom type aliases used throughout Beautiful Soup to improve readability.
# Notes on improvements to the type system in newer versions of Python
# that can be used once Beautiful Soup drops support for older
# versions:
#
# * ClassVar can be put on class variables now.
# * In 3.10, x|y is an accepted shorthand for Union[x,y].
# * In 3.10, TypeAlias gains capabilities that can be used to
# improve the tree matching types (I don't remember what, exactly).
# * In 3.9 it's possible to specialize the re.Match type,
# e.g. re.Match[str]. In 3.8 there's a typing.re namespace for this,
# but it's removed in 3.12, so to support the widest possible set of
# versions I'm not using it.
from typing_extensions import (
runtime_checkable,
Protocol,
TypeAlias,
)
from typing import (
Any,
Callable,
Dict,
IO,
Iterable,
Mapping,
Optional,
Pattern,
TYPE_CHECKING,
Union,
)
if TYPE_CHECKING:
from bs4.element import (
AttributeValueList,
NamespacedAttribute,
NavigableString,
PageElement,
ResultSet,
Tag,
)
@runtime_checkable
class _RegularExpressionProtocol(Protocol):
"""A protocol object which can accept either Python's built-in
`re.Pattern` objects, or the similar ``Regex`` objects defined by the
third-party ``regex`` package.
"""
def search(
self, string: str, pos: int = ..., endpos: int = ...
) -> Optional[Any]: ...
@property
def pattern(self) -> str: ...
# Aliases for markup in various stages of processing.
#
#: The rawest form of markup: either a string, bytestring, or an open filehandle.
_IncomingMarkup: TypeAlias = Union[str, bytes, IO[str], IO[bytes]]
#: Markup that is in memory but has (potentially) yet to be converted
#: to Unicode.
_RawMarkup: TypeAlias = Union[str, bytes]
# Aliases for character encodings
#
#: A data encoding.
_Encoding: TypeAlias = str
#: One or more data encodings.
_Encodings: TypeAlias = Iterable[_Encoding]
# Aliases for XML namespaces
#
#: The prefix for an XML namespace.
_NamespacePrefix: TypeAlias = str
#: The URL of an XML namespace
_NamespaceURL: TypeAlias = str
#: A mapping of prefixes to namespace URLs.
_NamespaceMapping: TypeAlias = Dict[_NamespacePrefix, _NamespaceURL]
#: A mapping of namespace URLs to prefixes
_InvertedNamespaceMapping: TypeAlias = Dict[_NamespaceURL, _NamespacePrefix]
# Aliases for the attribute values associated with HTML/XML tags.
#
#: The value associated with an HTML or XML attribute. This is the
#: relatively unprocessed value Beautiful Soup expects to come from a
#: `TreeBuilder`.
_RawAttributeValue: TypeAlias = str
#: A dictionary of names to `_RawAttributeValue` objects. This is how
#: Beautiful Soup expects a `TreeBuilder` to represent a tag's
#: attribute values.
_RawAttributeValues: TypeAlias = (
"Mapping[Union[str, NamespacedAttribute], _RawAttributeValue]"
)
#: An attribute value in its final form, as stored in the
#: `Tag` class, after it has been processed and (in some cases)
#: split into a list of strings.
_AttributeValue: TypeAlias = Union[str, "AttributeValueList"]
#: A dictionary of names to :py:data:`_AttributeValue` objects. This is what
#: a tag's attributes look like after processing.
_AttributeValues: TypeAlias = Dict[str, _AttributeValue]
#: The methods that deal with turning :py:data:`_RawAttributeValue` into
#: :py:data:`_AttributeValue` may be called several times, even after the values
#: are already processed (e.g. when cloning a tag), so they need to
#: be able to accommodate both possibilities.
_RawOrProcessedAttributeValues: TypeAlias = Union[_RawAttributeValues, _AttributeValues]
#: A number of tree manipulation methods can take either a `PageElement` or a
#: normal Python string (which will be converted to a `NavigableString`).
_InsertableElement: TypeAlias = Union["PageElement", str]
# Aliases to represent the many possibilities for matching bits of a
# parse tree.
#
# This is very complicated because we're applying a formal type system
# to some very DWIM code. The types we end up with will be the types
# of the arguments to the SoupStrainer constructor and (more
# familiarly to Beautiful Soup users) the find* methods.
#: A function that takes a PageElement and returns a yes-or-no answer.
_PageElementMatchFunction: TypeAlias = Callable[["PageElement"], bool]
#: A function that takes the raw parsed ingredients of a markup tag
#: and returns a yes-or-no answer.
# Not necessary at the moment.
# _AllowTagCreationFunction:TypeAlias = Callable[[Optional[str], str, Optional[_RawAttributeValues]], bool]
#: A function that takes the raw parsed ingredients of a markup string node
#: and returns a yes-or-no answer.
# Not necessary at the moment.
# _AllowStringCreationFunction:TypeAlias = Callable[[Optional[str]], bool]
#: A function that takes a `Tag` and returns a yes-or-no answer.
#: A `TagNameMatchRule` expects this kind of function, if you're
#: going to pass it a function.
_TagMatchFunction: TypeAlias = Callable[["Tag"], bool]
#: A function that takes a single string and returns a yes-or-no
#: answer. An `AttributeValueMatchRule` expects this kind of function, if
#: you're going to pass it a function. So does a `StringMatchRule`.
_StringMatchFunction: TypeAlias = Callable[[str], bool]
#: Either a tag name, an attribute value or a string can be matched
#: against a string, bytestring, regular expression, or a boolean.
_BaseStrainable: TypeAlias = Union[str, bytes, Pattern[str], bool]
#: A tag can be matched either with the `_BaseStrainable` options, or
#: using a function that takes the `Tag` as its sole argument.
_BaseStrainableElement: TypeAlias = Union[_BaseStrainable, _TagMatchFunction]
#: A tag's attribute value can be matched either with the
#: `_BaseStrainable` options, or using a function that takes that
#: value as its sole argument.
_BaseStrainableAttribute: TypeAlias = Union[_BaseStrainable, _StringMatchFunction]
#: A tag can be matched using either a single criterion or a list of
#: criteria.
_StrainableElement: TypeAlias = Union[
_BaseStrainableElement, Iterable[_BaseStrainableElement]
]
#: An attribute value can be matched using either a single criterion
#: or a list of criteria.
_StrainableAttribute: TypeAlias = Union[
_BaseStrainableAttribute, Iterable[_BaseStrainableAttribute]
]
#: A string can be matched using the same techniques as
#: an attribute value.
_StrainableString: TypeAlias = _StrainableAttribute
#: A dictionary may be used to match against multiple attribute values at once.
_StrainableAttributes: TypeAlias = Dict[str, _StrainableAttribute]
#: Many Beautiful Soup methods return a PageElement or a ResultSet of
#: PageElements. A PageElement is either a Tag or a NavigableString.
#: These convenience aliases make it easier for IDE users to see which methods
#: are available on the objects they're dealing with.
_OneElement: TypeAlias = Union["PageElement", "Tag", "NavigableString"]
_AtMostOneElement: TypeAlias = Optional[_OneElement]
_QueryResults: TypeAlias = "ResultSet[_OneElement]"
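
These aliases describe values the public `find_all` API already accepts; a quick sketch of each "strainable" form, using throwaway markup:

```python
import re
from bs4 import BeautifulSoup

soup = BeautifulSoup('<a href="x">one</a><b>two</b>', "html.parser")

soup.find_all("a")                           # str
soup.find_all(re.compile("^a$"))             # Pattern[str]
soup.find_all(True)                          # bool: match every tag
soup.find_all(lambda tag: tag.name == "a")   # _TagMatchFunction
soup.find_all(["a", "b"])                    # an iterable of criteria
soup.find_all("a", href=lambda v: v == "x")  # _StringMatchFunction on an attribute
```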

@@ -0,0 +1,98 @@
"""Define some custom warnings."""
class GuessedAtParserWarning(UserWarning):
"""The warning issued when BeautifulSoup has to guess what parser to
use -- probably because no parser was specified in the constructor.
"""
MESSAGE: str = """No parser was explicitly specified, so I'm using the best available %(markup_type)s parser for this system ("%(parser)s"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line %(line_number)s of the file %(filename)s. To get rid of this warning, pass the additional argument 'features="%(parser)s"' to the BeautifulSoup constructor.
"""
class UnusualUsageWarning(UserWarning):
"""A superclass for warnings issued when Beautiful Soup sees
something that is typically the result of a mistake in the calling
code, but might be intentional on the part of the user. If it is
in fact intentional, you can filter the individual warning class
to get rid of the warning. If you don't like Beautiful Soup
second-guessing what you are doing, you can filter the
UnusualUsageWarning class itself and get rid of these entirely.
"""
class MarkupResemblesLocatorWarning(UnusualUsageWarning):
"""The warning issued when BeautifulSoup is given 'markup' that
actually looks like a resource locator -- a URL or a path to a file
on disk.
"""
#: :meta private:
GENERIC_MESSAGE: str = """
However, if you want to parse some data that happens to look like a %(what)s, then nothing has gone wrong: you are using Beautiful Soup correctly, and this warning is spurious and can be filtered. To make this warning go away, run this code before calling the BeautifulSoup constructor:
from bs4 import MarkupResemblesLocatorWarning
import warnings
warnings.filterwarnings("ignore", category=MarkupResemblesLocatorWarning)
"""
URL_MESSAGE: str = (
"""The input passed in on this line looks more like a URL than HTML or XML.
If you meant to use Beautiful Soup to parse the web page found at a certain URL, then something has gone wrong. You should use a Python package like 'requests' to fetch the content behind the URL. Once you have the content as a string, you can feed that string into Beautiful Soup."""
+ GENERIC_MESSAGE
)
FILENAME_MESSAGE: str = (
"""The input passed in on this line looks more like a filename than HTML or XML.
If you meant to use Beautiful Soup to parse the contents of a file on disk, then something has gone wrong. You should open the file first, using code like this:
filehandle = open(your filename)
You can then feed the open filehandle into Beautiful Soup instead of using the filename."""
+ GENERIC_MESSAGE
)
class AttributeResemblesVariableWarning(UnusualUsageWarning, SyntaxWarning):
"""The warning issued when Beautiful Soup suspects a provided
attribute name may actually be the misspelled name of a Beautiful
Soup variable. Generally speaking, this is only used in cases like
"_class" where it's very unlikely the user would be referencing an
XML attribute with that name.
"""
MESSAGE: str = """%(original)r is an unusual attribute name and is a common misspelling for %(autocorrect)r.
If you meant %(autocorrect)r, change your code to use it, and this warning will go away.
If you really did mean to check the %(original)r attribute, this warning is spurious and can be filtered. To make it go away, run this code before creating your BeautifulSoup object:
from bs4 import AttributeResemblesVariableWarning
import warnings
warnings.filterwarnings("ignore", category=AttributeResemblesVariableWarning)
"""
class XMLParsedAsHTMLWarning(UnusualUsageWarning):
"""The warning issued when an HTML parser is used to parse
XML that is not (as far as we can tell) XHTML.
"""
MESSAGE: str = """It looks like you're using an HTML parser to parse an XML document.
Assuming this really is an XML document, what you're doing might work, but you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the Python package 'lxml' installed, and pass the keyword argument `features="xml"` into the BeautifulSoup constructor.
If you want or need to use an HTML parser on this document, you can make this warning go away by filtering it. To do that, run this code before calling the BeautifulSoup constructor:
from bs4 import XMLParsedAsHTMLWarning
import warnings
warnings.filterwarnings("ignore", category=XMLParsedAsHTMLWarning)
"""

@@ -0,0 +1,848 @@
from __future__ import annotations
# Use of this source code is governed by the MIT license.
__license__ = "MIT"
from collections import defaultdict
import re
from types import ModuleType
from typing import (
Any,
cast,
Dict,
Iterable,
List,
Optional,
Pattern,
Set,
Tuple,
Type,
TYPE_CHECKING,
)
import warnings
import sys
from bs4.element import (
AttributeDict,
AttributeValueList,
CharsetMetaAttributeValue,
ContentMetaAttributeValue,
RubyParenthesisString,
RubyTextString,
Stylesheet,
Script,
TemplateString,
nonwhitespace_re,
)
# Exceptions were moved to their own module in 4.13. Import here for
# backwards compatibility.
from bs4.exceptions import ParserRejectedMarkup
from bs4._typing import (
_AttributeValues,
_RawAttributeValue,
)
from bs4._warnings import XMLParsedAsHTMLWarning
if TYPE_CHECKING:
from bs4 import BeautifulSoup
from bs4.element import (
NavigableString,
Tag,
)
from bs4._typing import (
_AttributeValue,
_Encoding,
_Encodings,
_RawOrProcessedAttributeValues,
_RawMarkup,
)
# Some useful features for a TreeBuilder to have.
FAST = "fast"
PERMISSIVE = "permissive"
STRICT = "strict"
XML = "xml"
HTML = "html"
HTML_5 = "html5"
__all__ = [
"TreeBuilderRegistry",
"TreeBuilder",
"HTMLTreeBuilder",
"DetectsXMLParsedAsHTML",
"ParserRejectedMarkup", # backwards compatibility only as of 4.13.0
]
class TreeBuilderRegistry(object):
"""A way of looking up TreeBuilder subclasses by their name or by desired
features.
"""
builders_for_feature: Dict[str, List[Type[TreeBuilder]]]
builders: List[Type[TreeBuilder]]
def __init__(self) -> None:
self.builders_for_feature = defaultdict(list)
self.builders = []
def register(self, treebuilder_class: type[TreeBuilder]) -> None:
"""Register a treebuilder based on its advertised features.
:param treebuilder_class: A subclass of `TreeBuilder`. Its
`TreeBuilder.features` attribute should list its features.
"""
for feature in treebuilder_class.features:
self.builders_for_feature[feature].insert(0, treebuilder_class)
self.builders.insert(0, treebuilder_class)
def lookup(self, *features: str) -> Optional[Type[TreeBuilder]]:
"""Look up a TreeBuilder subclass with the desired features.
:param features: A list of features to look for. If none are
provided, the most recently registered TreeBuilder subclass
will be used.
:return: A TreeBuilder subclass, or None if there's no
registered subclass with all the requested features.
"""
if len(self.builders) == 0:
# There are no builders at all.
return None
if len(features) == 0:
# They didn't ask for any features. Give them the most
# recently registered builder.
return self.builders[0]
# Go down the list of features in order, and eliminate any builders
# that don't match every feature.
feature_list = list(features)
feature_list.reverse()
candidates = None
candidate_set = None
while len(feature_list) > 0:
feature = feature_list.pop()
we_have_the_feature = self.builders_for_feature.get(feature, [])
if len(we_have_the_feature) > 0:
if candidates is None:
candidates = we_have_the_feature
candidate_set = set(candidates)
else:
# Eliminate any candidates that don't have this feature.
candidate_set = candidate_set.intersection(set(we_have_the_feature))
# The only valid candidates are the ones in candidate_set.
# Go through the original list of candidates and pick the first one
# that's in candidate_set.
if candidate_set is None or candidates is None:
return None
for candidate in candidates:
if candidate in candidate_set:
return candidate
return None
#: The `BeautifulSoup` constructor will take a list of features
#: and use it to look up `TreeBuilder` classes in this registry.
builder_registry: TreeBuilderRegistry = TreeBuilderRegistry()
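
A short sketch of how the registry resolves features; exact results depend on which parsers are installed:

```python
from bs4.builder import builder_registry

# The stdlib builder is the only one advertising the 'html.parser'
# feature, so this lookup always succeeds.
cls = builder_registry.lookup("html.parser")
print(cls.NAME)  # html.parser

# With no features, the most recently registered builder wins; lxml
# registers last precisely so it takes precedence when installed.
print(builder_registry.lookup())

# A feature no builder advertises yields None.
print(builder_registry.lookup("no-such-feature"))  # None
```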
class TreeBuilder(object):
"""Turn a textual document into a Beautiful Soup object tree.
This is an abstract superclass which smooths out the behavior of
different parser libraries into a single, unified interface.
:param multi_valued_attributes: If this is set to None, the
TreeBuilder will not turn any values for attributes like
'class' into lists. Setting this to a dictionary will
customize this behavior; look at :py:attr:`bs4.builder.HTMLTreeBuilder.DEFAULT_CDATA_LIST_ATTRIBUTES`
for an example.
Internally, these are called "CDATA list attributes", but that
probably doesn't make sense to an end-user, so the argument name
is ``multi_valued_attributes``.
:param preserve_whitespace_tags: A set of tags to treat
the way <pre> tags are treated in HTML. Tags in this set
are immune from pretty-printing; their contents will always be
output as-is.
:param string_containers: A dictionary mapping tag names to
the classes that should be instantiated to contain the textual
contents of those tags. The default is to use NavigableString
for every tag, no matter what the name. You can override the
default by changing :py:attr:`DEFAULT_STRING_CONTAINERS`.
:param store_line_numbers: If the parser keeps track of the line
numbers and positions of the original markup, that information
will, by default, be stored in each corresponding
:py:class:`bs4.element.Tag` object. You can turn this off by
passing store_line_numbers=False; then Tag.sourcepos and
Tag.sourceline will always be None. If the parser you're using
doesn't keep track of this information, then store_line_numbers
is irrelevant.
:param attribute_dict_class: The value of a multi-valued attribute
(such as HTML's 'class') will be stored in an instance of this
class. The default is Beautiful Soup's built-in
`AttributeValueList`, which is a normal Python list, and you
will probably never need to change it.
"""
USE_DEFAULT: Any = object() #: :meta private:
def __init__(
self,
multi_valued_attributes: Dict[str, Set[str]] = USE_DEFAULT,
preserve_whitespace_tags: Set[str] = USE_DEFAULT,
store_line_numbers: bool = USE_DEFAULT,
string_containers: Dict[str, Type[NavigableString]] = USE_DEFAULT,
empty_element_tags: Set[str] = USE_DEFAULT,
attribute_dict_class: Type[AttributeDict] = AttributeDict,
attribute_value_list_class: Type[AttributeValueList] = AttributeValueList,
):
self.soup = None
if multi_valued_attributes is self.USE_DEFAULT:
multi_valued_attributes = self.DEFAULT_CDATA_LIST_ATTRIBUTES
self.cdata_list_attributes = multi_valued_attributes
if preserve_whitespace_tags is self.USE_DEFAULT:
preserve_whitespace_tags = self.DEFAULT_PRESERVE_WHITESPACE_TAGS
self.preserve_whitespace_tags = preserve_whitespace_tags
if empty_element_tags is self.USE_DEFAULT:
self.empty_element_tags = self.DEFAULT_EMPTY_ELEMENT_TAGS
else:
self.empty_element_tags = empty_element_tags
# TODO: store_line_numbers is probably irrelevant now that
# the behavior of sourceline and sourcepos has been made consistent
# everywhere.
if store_line_numbers is self.USE_DEFAULT:
store_line_numbers = self.TRACKS_LINE_NUMBERS
self.store_line_numbers = store_line_numbers
if string_containers is self.USE_DEFAULT:
string_containers = self.DEFAULT_STRING_CONTAINERS
self.string_containers = string_containers
self.attribute_dict_class = attribute_dict_class
self.attribute_value_list_class = attribute_value_list_class
NAME: str = "[Unknown tree builder]"
ALTERNATE_NAMES: Iterable[str] = []
features: Iterable[str] = []
is_xml: bool = False
picklable: bool = False
soup: Optional[BeautifulSoup] #: :meta private:
#: A tag will be considered an empty-element
#: tag when and only when it has no contents.
empty_element_tags: Optional[Set[str]] = None #: :meta private:
cdata_list_attributes: Dict[str, Set[str]] #: :meta private:
preserve_whitespace_tags: Set[str] #: :meta private:
string_containers: Dict[str, Type[NavigableString]] #: :meta private:
tracks_line_numbers: bool #: :meta private:
#: A value for these tag/attribute combinations is a space- or
#: comma-separated list of CDATA, rather than a single CDATA.
DEFAULT_CDATA_LIST_ATTRIBUTES: Dict[str, Set[str]] = defaultdict(set)
#: Whitespace should be preserved inside these tags.
DEFAULT_PRESERVE_WHITESPACE_TAGS: Set[str] = set()
#: The textual contents of tags with these names should be
#: instantiated with some class other than `bs4.element.NavigableString`.
DEFAULT_STRING_CONTAINERS: Dict[str, Type[NavigableString]] = {}
#: By default, tags are treated as empty-element tags if they have
#: no contents--that is, using XML rules. HTMLTreeBuilder
#: defines a different set of DEFAULT_EMPTY_ELEMENT_TAGS based on the
#: HTML 4 and HTML5 standards.
DEFAULT_EMPTY_ELEMENT_TAGS: Optional[Set[str]] = None
#: Most parsers don't keep track of line numbers.
TRACKS_LINE_NUMBERS: bool = False
def initialize_soup(self, soup: BeautifulSoup) -> None:
"""The BeautifulSoup object has been initialized and is now
being associated with the TreeBuilder.
:param soup: A BeautifulSoup object.
"""
self.soup = soup
def reset(self) -> None:
"""Do any work necessary to reset the underlying parser
for a new document.
By default, this does nothing.
"""
pass
def can_be_empty_element(self, tag_name: str) -> bool:
"""Might a tag with this name be an empty-element tag?
The final markup may or may not actually present this tag as
self-closing.
For instance: an HTMLBuilder does not consider a <p> tag to be
an empty-element tag (it's not in
HTMLBuilder.empty_element_tags). This means an empty <p> tag
will be presented as "<p></p>", not "<p/>" or "<p>".
The default implementation has no opinion about which tags are
empty-element tags, so a tag will be presented as an
empty-element tag if and only if it has no children.
"<foo></foo>" will become "<foo/>", and "<foo>bar</foo>" will
be left alone.
:param tag_name: The name of a markup tag.
"""
if self.empty_element_tags is None:
return True
return tag_name in self.empty_element_tags
def feed(self, markup: _RawMarkup) -> None:
"""Run incoming markup through some parsing process."""
raise NotImplementedError()
def prepare_markup(
self,
markup: _RawMarkup,
user_specified_encoding: Optional[_Encoding] = None,
document_declared_encoding: Optional[_Encoding] = None,
exclude_encodings: Optional[_Encodings] = None,
) -> Iterable[Tuple[_RawMarkup, Optional[_Encoding], Optional[_Encoding], bool]]:
"""Run any preliminary steps necessary to make incoming markup
acceptable to the parser.
:param markup: The markup that's about to be parsed.
:param user_specified_encoding: The user asked to try this encoding
to convert the markup into a Unicode string.
:param document_declared_encoding: The markup itself claims to be
in this encoding. NOTE: This argument is not used by the
calling code and can probably be removed.
:param exclude_encodings: The user asked *not* to try any of
these encodings.
:yield: A series of 4-tuples: (markup, encoding, declared encoding,
has undergone character replacement)
Each 4-tuple represents a strategy that the parser can try
to convert the document to Unicode and parse it. Each
strategy will be tried in turn.
By default, the only strategy is to parse the markup
as-is. See `LXMLTreeBuilderForXML` and
`HTMLParserTreeBuilder` for implementations that take into
account the quirks of particular parsers.
:meta private:
"""
yield markup, None, None, False
def test_fragment_to_document(self, fragment: str) -> str:
"""Wrap an HTML fragment to make it look like a document.
Different parsers do this differently. For instance, lxml
introduces an empty <head> tag, and html5lib
doesn't. Abstracting this away lets us write simple tests
which run HTML fragments through the parser and compare the
results against other HTML fragments.
This method should not be used outside of unit tests.
:param fragment: A fragment of HTML.
:return: A full HTML document.
:meta private:
"""
return fragment
def set_up_substitutions(self, tag: Tag) -> bool:
"""Set up any substitutions that will need to be performed on
a `Tag` when it's output as a string.
By default, this does nothing. See `HTMLTreeBuilder` for a
case where this is used.
:return: Whether or not a substitution was performed.
:meta private:
"""
return False
def _replace_cdata_list_attribute_values(
self, tag_name: str, attrs: _RawOrProcessedAttributeValues
) -> _AttributeValues:
"""When an attribute value is associated with a tag that can
have multiple values for that attribute, convert the string
value to a list of strings.
Basically, replaces class="foo bar" with class=["foo", "bar"]
NOTE: This method modifies its input in place.
:param tag_name: The name of a tag.
:param attrs: A dictionary containing the tag's attributes.
Any appropriate attribute values will be modified in place.
:return: The modified dictionary that was originally passed in.
"""
# First, cast the attrs dict to _AttributeValues. This might
# not be accurate yet, but it will be by the time this method
# returns.
modified_attrs = cast(_AttributeValues, attrs)
if not modified_attrs or not self.cdata_list_attributes:
# Nothing to do.
return modified_attrs
# There is at least a possibility that we need to modify one of
# the attribute values.
universal: Set[str] = self.cdata_list_attributes.get("*", set())
tag_specific = self.cdata_list_attributes.get(tag_name.lower(), None)
for attr in list(modified_attrs.keys()):
modified_value: _AttributeValue
if attr in universal or (tag_specific and attr in tag_specific):
# We have a "class"-type attribute whose string
# value is a whitespace-separated list of
# values. Split it into a list.
original_value: _AttributeValue = modified_attrs[attr]
if isinstance(original_value, _RawAttributeValue):
# This is a _RawAttributeValue (a string) that
# needs to be split and converted to a
# AttributeValueList so it can be an
# _AttributeValue.
modified_value = self.attribute_value_list_class(
nonwhitespace_re.findall(original_value)
)
else:
# html5lib calls setAttributes twice for the
# same tag when rearranging the parse tree. On
# the second call the attribute value here is
# already a list. This can also happen when a
# Tag object is cloned. If this happens, leave
# the value alone rather than trying to split
# it again.
modified_value = original_value
modified_attrs[attr] = modified_value
return modified_attrs
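
The effect is visible through the public API; a quick sketch contrasting "class" (multi-valued by default) with "id" (single-valued):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p class="foo bar" id="baz">x</p>', "html.parser")

print(soup.p["class"])  # ['foo', 'bar'] -- split into an AttributeValueList
print(soup.p["id"])     # 'baz'          -- left as a single string

# On output, the list is joined back into a space-separated string.
print(str(soup.p))      # <p class="foo bar" id="baz">x</p>
```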
class SAXTreeBuilder(TreeBuilder):
"""A Beautiful Soup treebuilder that listens for SAX events.
This is not currently used for anything, and it will be removed
soon. It was a good idea, but it wasn't properly integrated into the
rest of Beautiful Soup, so there have been long stretches where it
hasn't worked properly.
"""
def __init__(self, *args: Any, **kwargs: Any) -> None:
warnings.warn(
"The SAXTreeBuilder class was deprecated in 4.13.0 and will be removed soon thereafter. It is completely untested and probably doesn't work; do not use it.",
DeprecationWarning,
stacklevel=2,
)
super(SAXTreeBuilder, self).__init__(*args, **kwargs)
def feed(self, markup: _RawMarkup) -> None:
raise NotImplementedError()
def close(self) -> None:
pass
def startElement(self, name: str, attrs: Dict[str, str]) -> None:
attrs = AttributeDict((key[1], value) for key, value in list(attrs.items()))
# print("Start %s, %r" % (name, attrs))
assert self.soup is not None
self.soup.handle_starttag(name, None, None, attrs)
def endElement(self, name: str) -> None:
# print("End %s" % name)
assert self.soup is not None
self.soup.handle_endtag(name)
def startElementNS(
self, nsTuple: Tuple[str, str], nodeName: str, attrs: Dict[str, str]
) -> None:
# Throw away (ns, nodeName) for now.
self.startElement(nodeName, attrs)
def endElementNS(self, nsTuple: Tuple[str, str], nodeName: str) -> None:
# Throw away (ns, nodeName) for now.
self.endElement(nodeName)
# handler.endElementNS((ns, node.nodeName), node.nodeName)
def startPrefixMapping(self, prefix: str, nodeValue: str) -> None:
# Ignore the prefix for now.
pass
def endPrefixMapping(self, prefix: str) -> None:
# Ignore the prefix for now.
# handler.endPrefixMapping(prefix)
pass
def characters(self, content: str) -> None:
assert self.soup is not None
self.soup.handle_data(content)
def startDocument(self) -> None:
pass
def endDocument(self) -> None:
pass
class HTMLTreeBuilder(TreeBuilder):
"""This TreeBuilder knows facts about HTML, such as which tags are treated
specially by the HTML standard.
"""
#: Some HTML tags are defined as having no contents. Beautiful Soup
#: treats these specially.
DEFAULT_EMPTY_ELEMENT_TAGS: Set[str] = set(
[
# These are from HTML5.
"area",
"base",
"br",
"col",
"embed",
"hr",
"img",
"input",
"keygen",
"link",
"menuitem",
"meta",
"param",
"source",
"track",
"wbr",
# These are from earlier versions of HTML and are removed in HTML5.
"basefont",
"bgsound",
"command",
"frame",
"image",
"isindex",
"nextid",
"spacer",
]
)
#: The HTML standard defines these tags as block-level elements. Beautiful
#: Soup does not treat these elements differently from other elements,
#: but it may do so eventually, and this information is available if
#: you need to use it.
DEFAULT_BLOCK_ELEMENTS: Set[str] = set(
[
"address",
"article",
"aside",
"blockquote",
"canvas",
"dd",
"div",
"dl",
"dt",
"fieldset",
"figcaption",
"figure",
"footer",
"form",
"h1",
"h2",
"h3",
"h4",
"h5",
"h6",
"header",
"hr",
"li",
"main",
"nav",
"noscript",
"ol",
"output",
"p",
"pre",
"section",
"table",
"tfoot",
"ul",
"video",
]
)
#: These HTML tags need special treatment so they can be
#: represented by a string class other than `bs4.element.NavigableString`.
#:
#: For some of these tags, it's because the HTML standard defines
#: an unusual content model for them. I made this list by going
#: through the HTML spec
#: (https://html.spec.whatwg.org/#metadata-content) and looking for
#: "metadata content" elements that can contain strings.
#:
#: The Ruby tags (<rt> and <rp>) are here despite being normal
#: "phrasing content" tags, because the content they contain is
#: qualitatively different from other text in the document, and it
#: can be useful to be able to distinguish it.
#:
#: TODO: Arguably <noscript> could go here but it seems
#: qualitatively different from the other tags.
DEFAULT_STRING_CONTAINERS: Dict[str, Type[NavigableString]] = {
"rt": RubyTextString,
"rp": RubyParenthesisString,
"style": Stylesheet,
"script": Script,
"template": TemplateString,
}
#: The HTML standard defines these attributes as containing a
#: space-separated list of values, not a single value. That is,
#: class="foo bar" means that the 'class' attribute has two values,
#: 'foo' and 'bar', not the single value 'foo bar'. When we
#: encounter one of these attributes, we will parse its value into
#: a list of values if possible. Upon output, the list will be
#: converted back into a string.
DEFAULT_CDATA_LIST_ATTRIBUTES: Dict[str, Set[str]] = {
"*": {"class", "accesskey", "dropzone"},
"a": {"rel", "rev"},
"link": {"rel", "rev"},
"td": {"headers"},
"th": {"headers"},
"form": {"accept-charset"},
"object": {"archive"},
# These are HTML5 specific, as are *.accesskey and *.dropzone above.
"area": {"rel"},
"icon": {"sizes"},
"iframe": {"sandbox"},
"output": {"for"},
}
#: By default, whitespace inside these HTML tags will be
#: preserved rather than being collapsed.
DEFAULT_PRESERVE_WHITESPACE_TAGS: Set[str] = set(["pre", "textarea"])
def set_up_substitutions(self, tag: Tag) -> bool:
"""Replace the declared encoding in a <meta> tag with a placeholder,
to be substituted when the tag is output to a string.
An HTML document may come in to Beautiful Soup as one
encoding, but exit in a different encoding, and the <meta> tag
needs to be changed to reflect this.
:return: Whether or not a substitution was performed.
:meta private:
"""
# We are only interested in <meta> tags
if tag.name != "meta":
return False
# TODO: This cast will fail in the (very unlikely) scenario
# that the programmer who instantiates the TreeBuilder
# specifies meta['content'] or meta['charset'] as
# cdata_list_attributes.
content: Optional[str] = cast(Optional[str], tag.get("content"))
charset: Optional[str] = cast(Optional[str], tag.get("charset"))
# But we can accommodate meta['http-equiv'] being made a
# cdata_list_attribute (again, very unlikely) without much
# trouble.
http_equiv: List[str] = tag.get_attribute_list("http-equiv")
# We are interested in <meta> tags that say what encoding the
# document was originally in. This means HTML 5-style <meta>
# tags that provide the "charset" attribute. It also means
# HTML 4-style <meta> tags that provide the "content"
# attribute and have "http-equiv" set to "content-type".
#
# In both cases we will replace the value of the appropriate
# attribute with a standin object that can take on any
# encoding.
substituted = False
if charset is not None:
# HTML 5 style:
# <meta charset="utf8">
tag["charset"] = CharsetMetaAttributeValue(charset)
substituted = True
elif content is not None and any(
x.lower() == "content-type" for x in http_equiv
):
# HTML 4 style:
# <meta http-equiv="content-type" content="text/html; charset=utf8">
tag["content"] = ContentMetaAttributeValue(content)
substituted = True
return substituted
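
A sketch of the substitution at work: the document arrives declaring ISO-8859-1, and the standin rewrites the declared charset to match whatever encoding is requested on output:

```python
from bs4 import BeautifulSoup

html = b'<html><head><meta charset="ISO-8859-1"></head><body>caf\xe9</body></html>'
soup = BeautifulSoup(html, "html.parser")

# encode() re-encodes the document; the CharsetMetaAttributeValue
# standin substitutes the new encoding into the <meta> tag.
out = soup.encode("utf-8")
assert b'charset="utf-8"' in out
```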
class DetectsXMLParsedAsHTML(object):
"""A mixin class for any class (a TreeBuilder, or some class used by a
TreeBuilder) that's in a position to detect whether an XML
document is being incorrectly parsed as HTML, and issue an
appropriate warning.
This requires being able to observe an incoming processing
instruction that might be an XML declaration, and also able to
observe tags as they're opened. If you can't do that for a given
`TreeBuilder`, there's a less reliable implementation based on
examining the raw markup.
"""
#: Regular expression for seeing if string markup has an <html> tag.
LOOKS_LIKE_HTML: Pattern[str] = re.compile("<[^ +]html", re.I)
#: Regular expression for seeing if byte markup has an <html> tag.
LOOKS_LIKE_HTML_B: Pattern[bytes] = re.compile(b"<[^ +]html", re.I)
#: The start of an XML document string.
XML_PREFIX: str = "<?xml"
#: The start of an XML document bytestring.
XML_PREFIX_B: bytes = b"<?xml"
# This is typed as str, not `ProcessingInstruction`, because this
# check may be run before any Beautiful Soup objects are created.
_first_processing_instruction: Optional[str] #: :meta private:
_root_tag_name: Optional[str] #: :meta private:
@classmethod
def warn_if_markup_looks_like_xml(
cls, markup: Optional[_RawMarkup], stacklevel: int = 3
) -> bool:
"""Perform a check on some markup to see if it looks like XML
that's not XHTML. If so, issue a warning.
This is much less reliable than doing the check while parsing,
but some of the tree builders can't do that.
:param stacklevel: The stacklevel of the code calling this
function.
:return: True if the markup looks like non-XHTML XML, False
otherwise.
"""
if markup is None:
return False
markup = markup[:500]
if isinstance(markup, bytes):
markup_b: bytes = markup
looks_like_xml = markup_b.startswith(
cls.XML_PREFIX_B
) and not cls.LOOKS_LIKE_HTML_B.search(markup)
else:
markup_s: str = markup
looks_like_xml = markup_s.startswith(
cls.XML_PREFIX
) and not cls.LOOKS_LIKE_HTML.search(markup)
if looks_like_xml:
cls._warn(stacklevel=stacklevel + 2)
return True
return False
@classmethod
def _warn(cls, stacklevel: int = 5) -> None:
"""Issue a warning about XML being parsed as HTML."""
warnings.warn(
XMLParsedAsHTMLWarning.MESSAGE,
XMLParsedAsHTMLWarning,
stacklevel=stacklevel,
)
def _initialize_xml_detector(self) -> None:
"""Call this method before parsing a document."""
self._first_processing_instruction = None
self._root_tag_name = None
def _document_might_be_xml(self, processing_instruction: str) -> None:
"""Call this method when encountering an XML declaration, or a
"processing instruction" that might be an XML declaration.
This helps Beautiful Soup detect potential issues later, if
the XML document turns out to be a non-XHTML document that's
being parsed as XML.
"""
if (
self._first_processing_instruction is not None
or self._root_tag_name is not None
):
# The document has already started. Don't bother checking
# anymore.
return
self._first_processing_instruction = processing_instruction
# We won't know until we encounter the first tag whether or
# not this is actually a problem.
def _root_tag_encountered(self, name: str) -> None:
"""Call this when you encounter the document's root tag.
This is where we actually check whether an XML document is
being incorrectly parsed as HTML, and issue the warning.
"""
if self._root_tag_name is not None:
# This method was incorrectly called multiple times. Do
# nothing.
return
self._root_tag_name = name
if (
name != "html"
and self._first_processing_instruction is not None
and self._first_processing_instruction.lower().startswith("xml ")
):
# We encountered an XML declaration and then a tag other
# than 'html'. This is a reliable indicator that a
# non-XHTML document is being parsed as XML.
self._warn(stacklevel=10)
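
A minimal sketch of the detector firing: an XML declaration followed by a non-<html> root tag, fed to an HTML parser:

```python
import warnings

from bs4 import BeautifulSoup, XMLParsedAsHTMLWarning

xml = '<?xml version="1.0"?><catalog><entry/></catalog>'

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    BeautifulSoup(xml, "html.parser")

assert any(issubclass(w.category, XMLParsedAsHTMLWarning) for w in caught)

# The fix the warning recommends (requires lxml):
# BeautifulSoup(xml, features="xml")
```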
def register_treebuilders_from(module: ModuleType) -> None:
"""Copy TreeBuilders from the given module into this module."""
this_module = sys.modules[__name__]
for name in module.__all__:
obj = getattr(module, name)
if issubclass(obj, TreeBuilder):
setattr(this_module, name, obj)
this_module.__all__.append(name)
# Register the builder while we're at it.
this_module.builder_registry.register(obj)
# Builders are registered in reverse order of priority, so that custom
# builder registrations will take precedence. In general, we want lxml
# to take precedence over html5lib, because it's faster. And we only
# want to use HTMLParser as a last resort.
from . import _htmlparser # noqa: E402
register_treebuilders_from(_htmlparser)
try:
from . import _html5lib
register_treebuilders_from(_html5lib)
except ImportError:
# They don't have html5lib installed.
pass
try:
from . import _lxml
register_treebuilders_from(_lxml)
except ImportError:
# They don't have lxml installed.
pass
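
The practical consequence of this registration order: with no parser named, `BeautifulSoup` uses the best installed builder and warns about the implicit choice. A sketch:

```python
import warnings

from bs4 import BeautifulSoup, GuessedAtParserWarning

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    soup = BeautifulSoup("<p>hello</p>")  # no 'features' argument

# 'lxml' if installed, else 'html5lib', else 'html.parser'.
print(soup.builder.NAME)
assert any(issubclass(w.category, GuessedAtParserWarning) for w in caught)
```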

@@ -0,0 +1,594 @@
# Use of this source code is governed by the MIT license.
__license__ = "MIT"
__all__ = [
"HTML5TreeBuilder",
]
from typing import (
Any,
cast,
Dict,
Iterable,
Optional,
Sequence,
TYPE_CHECKING,
Tuple,
Union,
)
from typing_extensions import TypeAlias
from bs4._typing import (
_AttributeValue,
_AttributeValues,
_Encoding,
_Encodings,
_NamespaceURL,
_RawMarkup,
)
import warnings
from bs4.builder import (
DetectsXMLParsedAsHTML,
PERMISSIVE,
HTML,
HTML_5,
HTMLTreeBuilder,
)
from bs4.element import (
NamespacedAttribute,
PageElement,
nonwhitespace_re,
)
import html5lib
from html5lib.constants import (
namespaces,
)
from bs4.element import (
Comment,
Doctype,
NavigableString,
Tag,
)
if TYPE_CHECKING:
from bs4 import BeautifulSoup
from html5lib.treebuilders import base as treebuilder_base
class HTML5TreeBuilder(HTMLTreeBuilder):
"""Use `html5lib <https://github.com/html5lib/html5lib-python>`_ to
build a tree.
Note that `HTML5TreeBuilder` does not support some common HTML
`TreeBuilder` features. Some of these features could theoretically
be implemented, but at the very least it's quite difficult,
because html5lib moves the parse tree around as it's being built.
Specifically:
* This `TreeBuilder` doesn't use different subclasses of
`NavigableString` (e.g. `Script`) based on the name of the tag
in which the string was found.
* You can't use a `SoupStrainer` to parse only part of a document.
"""
NAME: str = "html5lib"
features: Sequence[str] = [NAME, PERMISSIVE, HTML_5, HTML]
#: html5lib can tell us which line number and position in the
#: original file is the source of an element.
TRACKS_LINE_NUMBERS: bool = True
underlying_builder: "TreeBuilderForHtml5lib" #: :meta private:
user_specified_encoding: Optional[_Encoding]
def prepare_markup(
self,
markup: _RawMarkup,
user_specified_encoding: Optional[_Encoding] = None,
document_declared_encoding: Optional[_Encoding] = None,
exclude_encodings: Optional[_Encodings] = None,
) -> Iterable[Tuple[_RawMarkup, Optional[_Encoding], Optional[_Encoding], bool]]:
# Store the user-specified encoding for use later on.
self.user_specified_encoding = user_specified_encoding
# document_declared_encoding and exclude_encodings aren't used
# ATM because the html5lib TreeBuilder doesn't use
# UnicodeDammit.
for variable, name in (
(document_declared_encoding, "document_declared_encoding"),
(exclude_encodings, "exclude_encodings"),
):
if variable:
warnings.warn(
f"You provided a value for {name}, but the html5lib tree builder doesn't support {name}.",
stacklevel=3,
)
# html5lib only parses HTML, so if it's given XML that's worth
# noting.
DetectsXMLParsedAsHTML.warn_if_markup_looks_like_xml(markup, stacklevel=3)
yield (markup, None, None, False)
# These methods are defined by Beautiful Soup.
def feed(self, markup: _RawMarkup) -> None:
"""Run some incoming markup through some parsing process,
populating the `BeautifulSoup` object in `HTML5TreeBuilder.soup`.
"""
if self.soup is not None and self.soup.parse_only is not None:
warnings.warn(
"You provided a value for parse_only, but the html5lib tree builder doesn't support parse_only. The entire document will be parsed.",
stacklevel=4,
)
# self.underlying_builder is probably None now, but it'll be set
# when html5lib calls self.create_treebuilder().
parser = html5lib.HTMLParser(tree=self.create_treebuilder)
assert self.underlying_builder is not None
self.underlying_builder.parser = parser
extra_kwargs = dict()
if not isinstance(markup, str):
# kwargs, specifically override_encoding, will eventually
# be passed in to html5lib's
# HTMLBinaryInputStream.__init__.
extra_kwargs["override_encoding"] = self.user_specified_encoding
doc = parser.parse(markup, **extra_kwargs)
# Set the character encoding detected by the tokenizer.
if isinstance(markup, str):
# We need to special-case this because html5lib sets
# charEncoding to UTF-8 if it gets Unicode input.
doc.original_encoding = None
else:
original_encoding = parser.tokenizer.stream.charEncoding[0]
# The encoding is an html5lib Encoding object. We want to
# use a string for compatibility with other tree builders.
original_encoding = original_encoding.name
doc.original_encoding = original_encoding
self.underlying_builder.parser = None
def create_treebuilder(
self, namespaceHTMLElements: bool
) -> "TreeBuilderForHtml5lib":
"""Called by html5lib to instantiate the kind of class it
calls a 'TreeBuilder'.
:param namespaceHTMLElements: Whether or not to namespace HTML elements.
:meta private:
"""
self.underlying_builder = TreeBuilderForHtml5lib(
namespaceHTMLElements, self.soup, store_line_numbers=self.store_line_numbers
)
return self.underlying_builder
def test_fragment_to_document(self, fragment: str) -> str:
"""See `TreeBuilder`."""
return "<html><head></head><body>%s</body></html>" % fragment
class TreeBuilderForHtml5lib(treebuilder_base.TreeBuilder):
soup: "BeautifulSoup" #: :meta private:
parser: Optional[html5lib.HTMLParser] #: :meta private:
def __init__(
self,
namespaceHTMLElements: bool,
soup: Optional["BeautifulSoup"] = None,
store_line_numbers: bool = True,
**kwargs: Any,
):
if soup:
self.soup = soup
else:
warnings.warn(
"The optionality of the 'soup' argument to the TreeBuilderForHtml5lib constructor is deprecated as of Beautiful Soup 4.13.0: 'soup' is now required. If you can't pass in a BeautifulSoup object here, or you get this warning and it seems mysterious to you, please contact the Beautiful Soup developer team for possible un-deprecation.",
DeprecationWarning,
stacklevel=2,
)
from bs4 import BeautifulSoup
# TODO: Why is the parser 'html.parser' here? Using
# html5lib doesn't cause an infinite loop and is more
# accurate. Best to get rid of this entire section, I think.
self.soup = BeautifulSoup(
"", "html.parser", store_line_numbers=store_line_numbers, **kwargs
)
# TODO: What are **kwargs exactly? Should they be passed in
# here in addition to/instead of being passed to the BeautifulSoup
# constructor?
super(TreeBuilderForHtml5lib, self).__init__(namespaceHTMLElements)
# This will be set later to a real html5lib HTMLParser object,
# which we can use to track the current line number.
self.parser = None
self.store_line_numbers = store_line_numbers
def documentClass(self) -> "Element":
self.soup.reset()
return Element(self.soup, self.soup, None)
def insertDoctype(self, token: Dict[str, Any]) -> None:
name: str = cast(str, token["name"])
publicId: Optional[str] = cast(Optional[str], token["publicId"])
systemId: Optional[str] = cast(Optional[str], token["systemId"])
doctype = Doctype.for_name_and_ids(name, publicId, systemId)
self.soup.object_was_parsed(doctype)
def elementClass(self, name: str, namespace: str) -> "Element":
sourceline: Optional[int] = None
sourcepos: Optional[int] = None
if self.parser is not None and self.store_line_numbers:
# This represents the point immediately after the end of the
# tag. We don't know when the tag started, but we do know
# where it ended -- the character just before this one.
sourceline, sourcepos = self.parser.tokenizer.stream.position()
assert sourcepos is not None
sourcepos = sourcepos - 1
tag = self.soup.new_tag(
name, namespace, sourceline=sourceline, sourcepos=sourcepos
)
return Element(tag, self.soup, namespace)
def commentClass(self, data: str) -> "TextNode":
return TextNode(Comment(data), self.soup)
def fragmentClass(self) -> "Element":
"""This is only used by html5lib HTMLParser.parseFragment(),
which is never used by Beautiful Soup, only by the html5lib
unit tests. Since we don't currently hook into those tests,
the implementation is left blank.
"""
raise NotImplementedError()
def getFragment(self) -> "Element":
"""This is only used by the html5lib unit tests. Since we
don't currently hook into those tests, the implementation is
left blank.
"""
raise NotImplementedError()
def appendChild(self, node: "Element") -> None:
# TODO: This code is not covered by the BS4 tests, and
# apparently not triggered by the html5lib test suite either.
# But it doesn't seem test-specific and there are calls to it
# (or a method with the same name) all over html5lib, so I'm
# leaving the implementation in place rather than replacing it
# with NotImplementedError()
self.soup.append(node.element)
def getDocument(self) -> "BeautifulSoup":
return self.soup
def testSerializer(self, element: "Element") -> str:
"""This is only used by the html5lib unit tests. Since we
don't currently hook into those tests, the implementation is
left blank.
"""
raise NotImplementedError()
class AttrList(object):
"""Represents a Tag's attributes in a way compatible with html5lib."""
element: Tag
attrs: _AttributeValues
def __init__(self, element: Tag):
self.element = element
self.attrs = dict(self.element.attrs)
def __iter__(self) -> Iterable[Tuple[str, _AttributeValue]]:
return list(self.attrs.items()).__iter__()
def __setitem__(self, name: str, value: _AttributeValue) -> None:
# If this attribute is a multi-valued attribute for this element,
# turn its value into a list.
list_attr = self.element.cdata_list_attributes or {}
if name in list_attr.get("*", []) or (
self.element.name in list_attr
and name in list_attr.get(self.element.name, [])
):
# A node that is being cloned may have already undergone
# this procedure. Check for this and skip it.
if not isinstance(value, list):
assert isinstance(value, str)
value = self.element.attribute_value_list_class(
nonwhitespace_re.findall(value)
)
self.element[name] = value
def items(self) -> Iterable[Tuple[str, _AttributeValue]]:
return list(self.attrs.items())
def keys(self) -> Iterable[str]:
return list(self.attrs.keys())
def __len__(self) -> int:
return len(self.attrs)
def __getitem__(self, name: str) -> _AttributeValue:
return self.attrs[name]
def __contains__(self, name: str) -> bool:
return name in list(self.attrs.keys())
class BeautifulSoupNode(treebuilder_base.Node):
element: PageElement
soup: "BeautifulSoup"
namespace: Optional[_NamespaceURL]
@property
def nodeType(self) -> int:
"""Return the html5lib constant corresponding to the type of
the underlying DOM object.
NOTE: This property is only accessed by the html5lib test
suite, not by Beautiful Soup proper.
"""
raise NotImplementedError()
# TODO-TYPING: typeshed stubs are incorrect about this;
# cloneNode returns a new Node, not None.
def cloneNode(self) -> treebuilder_base.Node:
raise NotImplementedError()
class Element(BeautifulSoupNode):
element: Tag
namespace: Optional[_NamespaceURL]
def __init__(
self, element: Tag, soup: "BeautifulSoup", namespace: Optional[_NamespaceURL]
):
treebuilder_base.Node.__init__(self, element.name)
self.element = element
self.soup = soup
self.namespace = namespace
def appendChild(self, node: "BeautifulSoupNode") -> None:
string_child: Optional[NavigableString] = None
child: PageElement
if type(node.element) is NavigableString:
string_child = child = node.element
else:
child = node.element
node.parent = self
if (
child is not None
and child.parent is not None
and not isinstance(child, str)
):
node.element.extract()
if (
string_child is not None
and self.element.contents
and type(self.element.contents[-1]) is NavigableString
):
# We are appending a string onto another string.
# TODO This has O(n^2) performance, for input like
# "a</a>a</a>a</a>..."
old_element = self.element.contents[-1]
new_element = self.soup.new_string(old_element + string_child)
old_element.replace_with(new_element)
self.soup._most_recent_element = new_element
else:
if isinstance(node, str):
# Create a brand new NavigableString from this string.
child = self.soup.new_string(node)
# Tell Beautiful Soup to act as if it parsed this element
# immediately after the parent's last descendant. (Or
# immediately after the parent, if it has no children.)
if self.element.contents:
most_recent_element = self.element._last_descendant(False)
elif self.element.next_element is not None:
# Something from further ahead in the parse tree is
# being inserted into this earlier element. This is
# very annoying because it means an expensive search
# for the last element in the tree.
most_recent_element = self.soup._last_descendant()
else:
most_recent_element = self.element
self.soup.object_was_parsed(
child, parent=self.element, most_recent_element=most_recent_element
)
def getAttributes(self) -> AttrList:
if isinstance(self.element, Comment):
return {}
return AttrList(self.element)
# An HTML5lib attribute name may either be a single string,
# or a tuple (namespace, name).
_Html5libAttributeName: TypeAlias = Union[str, Tuple[str, str]]
# Now we can define the type this method accepts as a dictionary
# mapping those attribute names to single string values.
_Html5libAttributes: TypeAlias = Dict[_Html5libAttributeName, str]
def setAttributes(self, attributes: Optional[_Html5libAttributes]) -> None:
if attributes is not None and len(attributes) > 0:
# Replace any namespaced attributes with
# NamespacedAttribute objects.
for name, value in list(attributes.items()):
if isinstance(name, tuple):
new_name = NamespacedAttribute(*name)
del attributes[name]
attributes[new_name] = value
# We can now cast attributes to the type of Dict
# used by Beautiful Soup.
normalized_attributes = cast(_AttributeValues, attributes)
# Values for tags like 'class' came in as single strings;
# replace them with lists of strings as appropriate.
self.soup.builder._replace_cdata_list_attribute_values(
self.name, normalized_attributes
)
# Then set the attributes on the Tag associated with this
# BeautifulSoupNode.
for name, value_or_values in list(normalized_attributes.items()):
self.element[name] = value_or_values
# The attributes may contain variables that need substitution.
# Call set_up_substitutions manually.
#
# The Tag constructor called this method when the Tag was created,
# but we just set/changed the attributes, so call it again.
self.soup.builder.set_up_substitutions(self.element)
attributes = property(getAttributes, setAttributes)
def insertText(
self, data: str, insertBefore: Optional["BeautifulSoupNode"] = None
) -> None:
text = TextNode(self.soup.new_string(data), self.soup)
if insertBefore:
self.insertBefore(text, insertBefore)
else:
self.appendChild(text)
def insertBefore(
self, node: "BeautifulSoupNode", refNode: "BeautifulSoupNode"
) -> None:
index = self.element.index(refNode.element)
if (
type(node.element) is NavigableString
and self.element.contents
and type(self.element.contents[index - 1]) is NavigableString
):
# (See comments in appendChild)
old_node = self.element.contents[index - 1]
assert type(old_node) is NavigableString
new_str = self.soup.new_string(old_node + node.element)
old_node.replace_with(new_str)
else:
self.element.insert(index, node.element)
node.parent = self
def removeChild(self, node: "Element") -> None:
node.element.extract()
def reparentChildren(self, new_parent: "Element") -> None:
"""Move all of this tag's children into another tag."""
# print("MOVE", self.element.contents)
# print("FROM", self.element)
# print("TO", new_parent.element)
element = self.element
new_parent_element = new_parent.element
# Determine what this tag's next_element will be once all the children
# are removed.
final_next_element = element.next_sibling
new_parents_last_descendant = new_parent_element._last_descendant(False, False)
if len(new_parent_element.contents) > 0:
# The new parent already contains children. We will be
# appending this tag's children to the end.
# We can make this assertion since we know new_parent has
# children.
assert new_parents_last_descendant is not None
new_parents_last_child = new_parent_element.contents[-1]
new_parents_last_descendant_next_element = (
new_parents_last_descendant.next_element
)
else:
# The new parent contains no children.
new_parents_last_child = None
new_parents_last_descendant_next_element = new_parent_element.next_element
to_append = element.contents
if len(to_append) > 0:
# Set the first child's previous_element and previous_sibling
# to elements within the new parent
first_child = to_append[0]
if new_parents_last_descendant is not None:
first_child.previous_element = new_parents_last_descendant
else:
first_child.previous_element = new_parent_element
first_child.previous_sibling = new_parents_last_child
if new_parents_last_descendant is not None:
new_parents_last_descendant.next_element = first_child
else:
new_parent_element.next_element = first_child
if new_parents_last_child is not None:
new_parents_last_child.next_sibling = first_child
# Find the very last element being moved. It is now the
# parent's last descendant. It has no .next_sibling and
# its .next_element is whatever the previous last
# descendant had.
last_childs_last_descendant = to_append[-1]._last_descendant(
is_initialized=False, accept_self=True
)
# Since we passed accept_self=True into _last_descendant,
# there's no possibility that the result is None.
assert last_childs_last_descendant is not None
last_childs_last_descendant.next_element = (
new_parents_last_descendant_next_element
)
if new_parents_last_descendant_next_element is not None:
# TODO-COVERAGE: This code has no test coverage and
# I'm not sure how to get html5lib to go through this
# path, but it's just the other side of the previous
# line.
new_parents_last_descendant_next_element.previous_element = (
last_childs_last_descendant
)
last_childs_last_descendant.next_sibling = None
for child in to_append:
child.parent = new_parent_element
new_parent_element.contents.append(child)
# Now that this element has no children, change its .next_element.
element.contents = []
element.next_element = final_next_element
# print("DONE WITH MOVE")
# print("FROM", self.element)
# print("TO", new_parent_element)
# TODO-TYPING: typeshed stubs are incorrect about this;
# hasContent returns a boolean, not None.
def hasContent(self) -> bool:
return len(self.element.contents) > 0
# TODO-TYPING: typeshed stubs are incorrect about this;
# cloneNode returns a new Node, not None.
def cloneNode(self) -> treebuilder_base.Node:
tag = self.soup.new_tag(self.element.name, self.namespace)
node = Element(tag, self.soup, self.namespace)
for key, value in self.attributes:
node.attributes[key] = value
return node
def getNameTuple(self) -> Tuple[Optional[_NamespaceURL], str]:
if self.namespace is None:
return namespaces["html"], self.name
else:
return self.namespace, self.name
nameTuple = property(getNameTuple)
class TextNode(BeautifulSoupNode):
element: NavigableString
def __init__(self, element: NavigableString, soup: "BeautifulSoup"):
treebuilder_base.Node.__init__(self, None)
self.element = element
self.soup = soup
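
As a usage sketch (requires html5lib to be installed): this builder repairs markup the way a browser would and, per TRACKS_LINE_NUMBERS, records approximate source positions via elementClass() above:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<table><tr><td>cell", "html5lib")

# html5lib inserts the structure a browser would: html/head/body and tbody.
print(soup.body)
# <body><table><tbody><tr><td>cell</td></tr></tbody></table></body>

print(soup.td.sourceline, soup.td.sourcepos)
```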

@@ -0,0 +1,474 @@
# encoding: utf-8
"""Use the HTMLParser library to parse HTML files that aren't too bad."""
from __future__ import annotations
# Use of this source code is governed by the MIT license.
__license__ = "MIT"
__all__ = [
"HTMLParserTreeBuilder",
]
from html.parser import HTMLParser
from typing import (
Any,
Callable,
cast,
Dict,
Iterable,
List,
Optional,
TYPE_CHECKING,
Tuple,
Type,
Union,
)
from bs4.element import (
AttributeDict,
CData,
Comment,
Declaration,
Doctype,
ProcessingInstruction,
)
from bs4.dammit import EntitySubstitution, UnicodeDammit
from bs4.builder import (
DetectsXMLParsedAsHTML,
HTML,
HTMLTreeBuilder,
STRICT,
)
from bs4.exceptions import ParserRejectedMarkup
if TYPE_CHECKING:
from bs4 import BeautifulSoup
from bs4.element import NavigableString
from bs4._typing import (
_Encoding,
_Encodings,
_RawMarkup,
)
HTMLPARSER = "html.parser"
_DuplicateAttributeHandler = Callable[[Dict[str, str], str, str], None]
class BeautifulSoupHTMLParser(HTMLParser, DetectsXMLParsedAsHTML):
    """A subclass of the Python standard library's HTMLParser class, which
    listens for HTMLParser events and translates them into calls
    to Beautiful Soup's tree construction API.

    :param on_duplicate_attribute: A strategy for what to do if a
        tag includes the same attribute more than once. Accepted
        values are: REPLACE (replace earlier values with later
        ones, the default), IGNORE (keep the earliest value
        encountered), or a callable. A callable must take three
        arguments: the dictionary of attributes already processed,
        the name of the duplicate attribute, and the most recent value
        encountered.
    """

    #: Constant to handle duplicate attributes by replacing earlier
    #: values with later ones. This is the default strategy.
    REPLACE: str = "replace"

    #: Constant to handle duplicate attributes by keeping the earliest
    #: value encountered and ignoring later ones.
    IGNORE: str = "ignore"
def __init__(
self,
soup: BeautifulSoup,
*args: Any,
on_duplicate_attribute: Union[str, _DuplicateAttributeHandler] = REPLACE,
**kwargs: Any,
):
self.soup = soup
self.on_duplicate_attribute = on_duplicate_attribute
self.attribute_dict_class = soup.builder.attribute_dict_class
HTMLParser.__init__(self, *args, **kwargs)
# Keep a list of empty-element tags that were encountered
# without an explicit closing tag. If we encounter a closing tag
# of this type, we'll associate it with one of those entries.
#
# This isn't a stack because we don't care about the
# order. It's a list of closing tags we've already handled and
# will ignore, assuming they ever show up.
self.already_closed_empty_element = []
self._initialize_xml_detector()
on_duplicate_attribute: Union[str, _DuplicateAttributeHandler]
already_closed_empty_element: List[str]
soup: BeautifulSoup
def error(self, message: str) -> None:
# NOTE: This method is required so long as Python 3.9 is
# supported. The corresponding code is removed from HTMLParser
# in 3.5, but not removed from ParserBase until 3.10.
# https://github.com/python/cpython/issues/76025
#
# The original implementation turned the error into a warning,
# but in every case I discovered, this made HTMLParser
# immediately crash with an error message that was less
# helpful than the warning. The new implementation makes it
# more clear that html.parser just can't parse this
# markup. The 3.10 implementation does the same, though it
# raises AssertionError rather than calling a method. (We
# catch this error and wrap it in a ParserRejectedMarkup.)
raise ParserRejectedMarkup(message)
def handle_startendtag(
self, name: str, attrs: List[Tuple[str, Optional[str]]]
) -> None:
"""Handle an incoming empty-element tag.
html.parser only calls this method when the markup looks like
<tag/>.
"""
# `handle_empty_element` tells handle_starttag not to close the tag
# just because its name matches a known empty-element tag. We
# know that this is an empty-element tag, and we want to call
# handle_endtag ourselves.
self.handle_starttag(name, attrs, handle_empty_element=False)
self.handle_endtag(name)
def handle_starttag(
self,
name: str,
attrs: List[Tuple[str, Optional[str]]],
handle_empty_element: bool = True,
) -> None:
"""Handle an opening tag, e.g. '<tag>'
:param handle_empty_element: True if this tag is known to be
an empty-element tag (i.e. there is not expected to be any
closing tag).
"""
# TODO: handle namespaces here?
attr_dict: AttributeDict = self.attribute_dict_class()
for key, value in attrs:
# Change None attribute values to the empty string
# for consistency with the other tree builders.
if value is None:
value = ""
if key in attr_dict:
# A single attribute shows up multiple times in this
# tag. How to handle it depends on the
# on_duplicate_attribute setting.
on_dupe = self.on_duplicate_attribute
if on_dupe == self.IGNORE:
pass
elif on_dupe in (None, self.REPLACE):
attr_dict[key] = value
else:
on_dupe = cast(_DuplicateAttributeHandler, on_dupe)
on_dupe(attr_dict, key, value)
else:
attr_dict[key] = value
# print("START", name)
sourceline: Optional[int]
sourcepos: Optional[int]
if self.soup.builder.store_line_numbers:
sourceline, sourcepos = self.getpos()
else:
sourceline = sourcepos = None
tag = self.soup.handle_starttag(
name, None, None, attr_dict, sourceline=sourceline, sourcepos=sourcepos
)
if tag and tag.is_empty_element and handle_empty_element:
# Unlike other parsers, html.parser doesn't send separate end tag
# events for empty-element tags. (It's handled in
# handle_startendtag, but only if the original markup looked like
# <tag/>.)
#
# So we need to call handle_endtag() ourselves. Since we
# know the start event is identical to the end event, we
# don't want handle_endtag() to cross off any previous end
# events for tags of this name.
self.handle_endtag(name, check_already_closed=False)
# But we might encounter an explicit closing tag for this tag
# later on. If so, we want to ignore it.
self.already_closed_empty_element.append(name)
if self._root_tag_name is None:
self._root_tag_encountered(name)
def handle_endtag(self, name: str, check_already_closed: bool = True) -> None:
"""Handle a closing tag, e.g. '</tag>'
:param name: A tag name.
:param check_already_closed: True if this tag is expected to
be the closing portion of an empty-element tag,
e.g. '<tag></tag>'.
"""
# print("END", name)
if check_already_closed and name in self.already_closed_empty_element:
# This is a redundant end tag for an empty-element tag.
# We've already called handle_endtag() for it, so just
# check it off the list.
# print("ALREADY CLOSED", name)
self.already_closed_empty_element.remove(name)
else:
self.soup.handle_endtag(name)
def handle_data(self, data: str) -> None:
"""Handle some textual data that shows up between tags."""
self.soup.handle_data(data)
def handle_charref(self, name: str) -> None:
"""Handle a numeric character reference by converting it to the
corresponding Unicode character and treating it as textual
data.
:param name: Character number, possibly in hexadecimal.
"""
# TODO: This was originally a workaround for a bug in
# HTMLParser. (http://bugs.python.org/issue13633) The bug has
# been fixed, but removing this code still makes some
# Beautiful Soup tests fail. This needs investigation.
if name.startswith("x"):
real_name = int(name.lstrip("x"), 16)
elif name.startswith("X"):
real_name = int(name.lstrip("X"), 16)
else:
real_name = int(name)
data = None
if real_name < 256:
# HTML numeric entities are supposed to reference Unicode
# code points, but sometimes they reference code points in
# some other encoding (ahem, Windows-1252). E.g. &#147;
            # instead of &#8220; for LEFT DOUBLE QUOTATION MARK. This
# code tries to detect this situation and compensate.
for encoding in (self.soup.original_encoding, "windows-1252"):
if not encoding:
continue
try:
data = bytearray([real_name]).decode(encoding)
except UnicodeDecodeError:
pass
if not data:
try:
data = chr(real_name)
except (ValueError, OverflowError):
pass
data = data or "\N{REPLACEMENT CHARACTER}"
self.handle_data(data)
def handle_entityref(self, name: str) -> None:
"""Handle a named entity reference by converting it to the
corresponding Unicode character(s) and treating it as textual
data.
:param name: Name of the entity reference.
"""
character = EntitySubstitution.HTML_ENTITY_TO_CHARACTER.get(name)
if character is not None:
data = character
else:
# If this were XML, it would be ambiguous whether "&foo"
            # was a character entity reference with a missing
# semicolon or the literal string "&foo". Since this is
# HTML, we have a complete list of all character entity references,
# and this one wasn't found, so assume it's the literal string "&foo".
data = "&%s" % name
self.handle_data(data)
def handle_comment(self, data: str) -> None:
"""Handle an HTML comment.
:param data: The text of the comment.
"""
self.soup.endData()
self.soup.handle_data(data)
self.soup.endData(Comment)
def handle_decl(self, data: str) -> None:
"""Handle a DOCTYPE declaration.
:param data: The text of the declaration.
"""
self.soup.endData()
data = data[len("DOCTYPE ") :]
self.soup.handle_data(data)
self.soup.endData(Doctype)
def unknown_decl(self, data: str) -> None:
"""Handle a declaration of unknown type -- probably a CDATA block.
:param data: The text of the declaration.
"""
cls: Type[NavigableString]
if data.upper().startswith("CDATA["):
cls = CData
data = data[len("CDATA[") :]
else:
cls = Declaration
self.soup.endData()
self.soup.handle_data(data)
self.soup.endData(cls)
def handle_pi(self, data: str) -> None:
"""Handle a processing instruction.
:param data: The text of the instruction.
"""
self.soup.endData()
self.soup.handle_data(data)
self._document_might_be_xml(data)
self.soup.endData(ProcessingInstruction)
class HTMLParserTreeBuilder(HTMLTreeBuilder):
"""A Beautiful soup `bs4.builder.TreeBuilder` that uses the
:py:class:`html.parser.HTMLParser` parser, found in the Python
standard library.
"""
is_xml: bool = False
picklable: bool = True
NAME: str = HTMLPARSER
features: Iterable[str] = [NAME, HTML, STRICT]
parser_args: Tuple[Iterable[Any], Dict[str, Any]]
#: The html.parser knows which line number and position in the
#: original file is the source of an element.
TRACKS_LINE_NUMBERS: bool = True
def __init__(
self,
parser_args: Optional[Iterable[Any]] = None,
parser_kwargs: Optional[Dict[str, Any]] = None,
**kwargs: Any,
):
"""Constructor.
:param parser_args: Positional arguments to pass into
the BeautifulSoupHTMLParser constructor, once it's
invoked.
:param parser_kwargs: Keyword arguments to pass into
the BeautifulSoupHTMLParser constructor, once it's
invoked.
:param kwargs: Keyword arguments for the superclass constructor.
"""
# Some keyword arguments will be pulled out of kwargs and placed
# into parser_kwargs.
extra_parser_kwargs = dict()
for arg in ("on_duplicate_attribute",):
if arg in kwargs:
value = kwargs.pop(arg)
extra_parser_kwargs[arg] = value
super(HTMLParserTreeBuilder, self).__init__(**kwargs)
parser_args = parser_args or []
parser_kwargs = parser_kwargs or {}
parser_kwargs.update(extra_parser_kwargs)
parser_kwargs["convert_charrefs"] = False
self.parser_args = (parser_args, parser_kwargs)
def prepare_markup(
self,
markup: _RawMarkup,
user_specified_encoding: Optional[_Encoding] = None,
document_declared_encoding: Optional[_Encoding] = None,
exclude_encodings: Optional[_Encodings] = None,
) -> Iterable[Tuple[str, Optional[_Encoding], Optional[_Encoding], bool]]:
"""Run any preliminary steps necessary to make incoming markup
acceptable to the parser.
:param markup: Some markup -- probably a bytestring.
:param user_specified_encoding: The user asked to try this encoding.
:param document_declared_encoding: The markup itself claims to be
in this encoding.
:param exclude_encodings: The user asked _not_ to try any of
these encodings.
:yield: A series of 4-tuples: (markup, encoding, declared encoding,
has undergone character replacement)
Each 4-tuple represents a strategy for parsing the document.
This TreeBuilder uses Unicode, Dammit to convert the markup
into Unicode, so the ``markup`` element of the tuple will
always be a string.
"""
if isinstance(markup, str):
# Parse Unicode as-is.
yield (markup, None, None, False)
return
# Ask UnicodeDammit to sniff the most likely encoding.
known_definite_encodings: List[_Encoding] = []
if user_specified_encoding:
# This was provided by the end-user; treat it as a known
# definite encoding per the algorithm laid out in the
# HTML5 spec. (See the EncodingDetector class for
# details.)
known_definite_encodings.append(user_specified_encoding)
user_encodings: List[_Encoding] = []
if document_declared_encoding:
# This was found in the document; treat it as a slightly
# lower-priority user encoding.
user_encodings.append(document_declared_encoding)
dammit = UnicodeDammit(
markup,
known_definite_encodings=known_definite_encodings,
user_encodings=user_encodings,
is_html=True,
exclude_encodings=exclude_encodings,
)
if dammit.unicode_markup is None:
# In every case I've seen, Unicode, Dammit is able to
# convert the markup into Unicode, even if it needs to use
# REPLACEMENT CHARACTER. But there is a code path that
# could result in unicode_markup being None, and
# HTMLParser can only parse Unicode, so here we handle
# that code path.
raise ParserRejectedMarkup(
"Could not convert input to Unicode, and html.parser will not accept bytestrings."
)
else:
yield (
dammit.unicode_markup,
dammit.original_encoding,
dammit.declared_html_encoding,
dammit.contains_replacement_characters,
)
def feed(self, markup: _RawMarkup) -> None:
args, kwargs = self.parser_args
# HTMLParser.feed will only handle str, but
# BeautifulSoup.markup is allowed to be _RawMarkup, because
# it's set by the yield value of
# TreeBuilder.prepare_markup. Fortunately,
# HTMLParserTreeBuilder.prepare_markup always yields a str
# (UnicodeDammit.unicode_markup).
assert isinstance(markup, str)
# We know BeautifulSoup calls TreeBuilder.initialize_soup
# before calling feed(), so we can assume self.soup
# is set.
assert self.soup is not None
parser = BeautifulSoupHTMLParser(self.soup, *args, **kwargs)
try:
parser.feed(markup)
parser.close()
except AssertionError as e:
# html.parser raises AssertionError in rare cases to
# indicate a fatal problem with the markup, especially
# when there's an error in the doctype declaration.
raise ParserRejectedMarkup(e)
parser.already_closed_empty_element = []
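# Illustrative sketch (editor's addition, not part of the original module):
# how the on_duplicate_attribute strategies defined above play out. Keyword
# arguments that HTMLParserTreeBuilder does not recognize are forwarded to
# BeautifulSoupHTMLParser, as shown in its constructor above.
def _duplicate_attribute_sketch() -> None:
    from bs4 import BeautifulSoup

    markup = '<a id="first" id="second">link</a>'
    # REPLACE (the default): the later value wins.
    print(BeautifulSoup(markup, "html.parser").a["id"])  # second
    # IGNORE: the earliest value is kept.
    soup = BeautifulSoup(markup, "html.parser", on_duplicate_attribute="ignore")
    print(soup.a["id"])  # first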

@@ -0,0 +1,490 @@
# encoding: utf-8
from __future__ import annotations
# Use of this source code is governed by the MIT license.
__license__ = "MIT"
__all__ = [
"LXMLTreeBuilderForXML",
"LXMLTreeBuilder",
]
from typing import (
Any,
Dict,
Iterable,
List,
Optional,
Set,
Tuple,
Type,
TYPE_CHECKING,
Union,
)
from typing_extensions import TypeAlias
from io import BytesIO
from io import StringIO
from lxml import etree
from bs4.element import (
AttributeDict,
XMLAttributeDict,
Comment,
Doctype,
NamespacedAttribute,
ProcessingInstruction,
XMLProcessingInstruction,
)
from bs4.builder import (
DetectsXMLParsedAsHTML,
FAST,
HTML,
HTMLTreeBuilder,
PERMISSIVE,
TreeBuilder,
XML,
)
from bs4.dammit import EncodingDetector
from bs4.exceptions import ParserRejectedMarkup
if TYPE_CHECKING:
from bs4._typing import (
_Encoding,
_Encodings,
_NamespacePrefix,
_NamespaceURL,
_NamespaceMapping,
_InvertedNamespaceMapping,
_RawMarkup,
)
from bs4 import BeautifulSoup
LXML: str = "lxml"
def _invert(d: dict[Any, Any]) -> dict[Any, Any]:
"Invert a dictionary."
return dict((v, k) for k, v in list(d.items()))
_LXMLParser: TypeAlias = Union[etree.XMLParser, etree.HTMLParser]
_ParserOrParserClass: TypeAlias = Union[
_LXMLParser, Type[etree.XMLParser], Type[etree.HTMLParser]
]
class LXMLTreeBuilderForXML(TreeBuilder):
DEFAULT_PARSER_CLASS: Type[etree.XMLParser] = etree.XMLParser
is_xml: bool = True
processing_instruction_class: Type[ProcessingInstruction]
NAME: str = "lxml-xml"
ALTERNATE_NAMES: Iterable[str] = ["xml"]
# Well, it's permissive by XML parser standards.
features: Iterable[str] = [NAME, LXML, XML, FAST, PERMISSIVE]
CHUNK_SIZE: int = 512
# This namespace mapping is specified in the XML Namespace
# standard.
DEFAULT_NSMAPS: _NamespaceMapping = dict(xml="http://www.w3.org/XML/1998/namespace")
DEFAULT_NSMAPS_INVERTED: _InvertedNamespaceMapping = _invert(DEFAULT_NSMAPS)
nsmaps: List[Optional[_InvertedNamespaceMapping]]
empty_element_tags: Set[str]
parser: Any
_default_parser: Optional[etree.XMLParser]
# NOTE: If we parsed Element objects and looked at .sourceline,
# we'd be able to see the line numbers from the original document.
# But instead we build an XMLParser or HTMLParser object to serve
# as the target of parse messages, and those messages don't include
# line numbers.
# See: https://bugs.launchpad.net/lxml/+bug/1846906
def initialize_soup(self, soup: BeautifulSoup) -> None:
"""Let the BeautifulSoup object know about the standard namespace
mapping.
:param soup: A `BeautifulSoup`.
"""
# Beyond this point, self.soup is set, so we can assume (and
# assert) it's not None whenever necessary.
super(LXMLTreeBuilderForXML, self).initialize_soup(soup)
self._register_namespaces(self.DEFAULT_NSMAPS)
def _register_namespaces(self, mapping: Dict[str, str]) -> None:
"""Let the BeautifulSoup object know about namespaces encountered
while parsing the document.
This might be useful later on when creating CSS selectors.
This will track (almost) all namespaces, even ones that were
only in scope for part of the document. If two namespaces have
the same prefix, only the first one encountered will be
tracked. Un-prefixed namespaces are not tracked.
:param mapping: A dictionary mapping namespace prefixes to URIs.
"""
assert self.soup is not None
for key, value in list(mapping.items()):
# This is 'if key' and not 'if key is not None' because we
# don't track un-prefixed namespaces. Soupselect will
# treat an un-prefixed namespace as the default, which
# causes confusion in some cases.
if key and key not in self.soup._namespaces:
# Let the BeautifulSoup object know about a new namespace.
# If there are multiple namespaces defined with the same
# prefix, the first one in the document takes precedence.
self.soup._namespaces[key] = value
def default_parser(self, encoding: Optional[_Encoding]) -> _ParserOrParserClass:
"""Find the default parser for the given encoding.
:return: Either a parser object or a class, which
will be instantiated with default arguments.
"""
if self._default_parser is not None:
return self._default_parser
return self.DEFAULT_PARSER_CLASS(target=self, recover=True, encoding=encoding)
def parser_for(self, encoding: Optional[_Encoding]) -> _LXMLParser:
"""Instantiate an appropriate parser for the given encoding.
:param encoding: A string.
:return: A parser object such as an `etree.XMLParser`.
"""
# Use the default parser.
parser = self.default_parser(encoding)
if callable(parser):
# Instantiate the parser with default arguments
parser = parser(target=self, recover=True, encoding=encoding)
return parser
def __init__(
self,
parser: Optional[etree.XMLParser] = None,
empty_element_tags: Optional[Set[str]] = None,
**kwargs: Any,
):
# TODO: Issue a warning if parser is present but not a
# callable, since that means there's no way to create new
# parsers for different encodings.
self._default_parser = parser
self.soup = None
self.nsmaps = [self.DEFAULT_NSMAPS_INVERTED]
self.active_namespace_prefixes = [dict(self.DEFAULT_NSMAPS)]
if "attribute_dict_class" not in kwargs:
kwargs["attribute_dict_class"] = XMLAttributeDict
super(LXMLTreeBuilderForXML, self).__init__(**kwargs)
def _getNsTag(self, tag: str) -> Tuple[Optional[str], str]:
# Split the namespace URL out of a fully-qualified lxml tag
# name. Copied from lxml's src/lxml/sax.py.
if tag[0] == "{":
namespace, name = tag[1:].split("}", 1)
return (namespace, name)
else:
return (None, tag)
def prepare_markup(
self,
markup: _RawMarkup,
user_specified_encoding: Optional[_Encoding] = None,
document_declared_encoding: Optional[_Encoding] = None,
exclude_encodings: Optional[_Encodings] = None,
) -> Iterable[
Tuple[Union[str, bytes], Optional[_Encoding], Optional[_Encoding], bool]
]:
"""Run any preliminary steps necessary to make incoming markup
acceptable to the parser.
lxml really wants to get a bytestring and convert it to
Unicode itself. So instead of using UnicodeDammit to convert
the bytestring to Unicode using different encodings, this
implementation uses EncodingDetector to iterate over the
encodings, and tell lxml to try to parse the document as each
one in turn.
:param markup: Some markup -- hopefully a bytestring.
:param user_specified_encoding: The user asked to try this encoding.
:param document_declared_encoding: The markup itself claims to be
in this encoding.
:param exclude_encodings: The user asked _not_ to try any of
these encodings.
:yield: A series of 4-tuples: (markup, encoding, declared encoding,
has undergone character replacement)
Each 4-tuple represents a strategy for converting the
document to Unicode and parsing it. Each strategy will be tried
in turn.
"""
is_html = not self.is_xml
if is_html:
self.processing_instruction_class = ProcessingInstruction
# We're in HTML mode, so if we're given XML, that's worth
# noting.
DetectsXMLParsedAsHTML.warn_if_markup_looks_like_xml(markup, stacklevel=3)
else:
self.processing_instruction_class = XMLProcessingInstruction
if isinstance(markup, str):
# We were given Unicode. Maybe lxml can parse Unicode on
# this system?
# TODO: This is a workaround for
# https://bugs.launchpad.net/lxml/+bug/1948551.
# We can remove it once the upstream issue is fixed.
if len(markup) > 0 and markup[0] == "\N{BYTE ORDER MARK}":
markup = markup[1:]
yield markup, None, document_declared_encoding, False
if isinstance(markup, str):
# No, apparently not. Convert the Unicode to UTF-8 and
# tell lxml to parse it as UTF-8.
yield (markup.encode("utf8"), "utf8", document_declared_encoding, False)
# Since the document was Unicode in the first place, there
# is no need to try any more strategies; we know this will
# work.
return
known_definite_encodings: List[_Encoding] = []
if user_specified_encoding:
# This was provided by the end-user; treat it as a known
# definite encoding per the algorithm laid out in the
# HTML5 spec. (See the EncodingDetector class for
# details.)
known_definite_encodings.append(user_specified_encoding)
user_encodings: List[_Encoding] = []
if document_declared_encoding:
# This was found in the document; treat it as a slightly
# lower-priority user encoding.
user_encodings.append(document_declared_encoding)
detector = EncodingDetector(
markup,
known_definite_encodings=known_definite_encodings,
user_encodings=user_encodings,
is_html=is_html,
exclude_encodings=exclude_encodings,
)
for encoding in detector.encodings:
yield (detector.markup, encoding, document_declared_encoding, False)
def feed(self, markup: _RawMarkup) -> None:
io: Union[BytesIO, StringIO]
if isinstance(markup, bytes):
io = BytesIO(markup)
elif isinstance(markup, str):
io = StringIO(markup)
# initialize_soup is called before feed, so we know this
# is not None.
assert self.soup is not None
# Call feed() at least once, even if the markup is empty,
# or the parser won't be initialized.
data = io.read(self.CHUNK_SIZE)
try:
self.parser = self.parser_for(self.soup.original_encoding)
self.parser.feed(data)
while len(data) != 0:
# Now call feed() on the rest of the data, chunk by chunk.
data = io.read(self.CHUNK_SIZE)
if len(data) != 0:
self.parser.feed(data)
self.parser.close()
except (UnicodeDecodeError, LookupError, etree.ParserError) as e:
raise ParserRejectedMarkup(e)
def close(self) -> None:
self.nsmaps = [self.DEFAULT_NSMAPS_INVERTED]
def start(
self,
tag: str | bytes,
attrs: Dict[str | bytes, str | bytes],
nsmap: _NamespaceMapping = {},
) -> None:
# This is called by lxml code as a result of calling
# BeautifulSoup.feed(), and we know self.soup is set by the time feed()
# is called.
assert self.soup is not None
assert isinstance(tag, str)
# We need to recreate the attribute dict for three
# reasons. First, for type checking, so we can assert there
# are no bytestrings in the keys or values. Second, because we
# need a mutable dict--lxml might send us an immutable
# dictproxy. Third, so we can handle namespaced attribute
# names by converting the keys to NamespacedAttributes.
new_attrs: Dict[Union[str, NamespacedAttribute], str] = (
self.attribute_dict_class()
)
for k, v in attrs.items():
assert isinstance(k, str)
assert isinstance(v, str)
new_attrs[k] = v
nsprefix: Optional[_NamespacePrefix] = None
namespace: Optional[_NamespaceURL] = None
# Invert each namespace map as it comes in.
if len(nsmap) == 0 and len(self.nsmaps) > 1:
# There are no new namespaces for this tag, but
# non-default namespaces are in play, so we need a
# separate tag stack to know when they end.
self.nsmaps.append(None)
elif len(nsmap) > 0:
# A new namespace mapping has come into play.
# First, Let the BeautifulSoup object know about it.
self._register_namespaces(nsmap)
# Then, add it to our running list of inverted namespace
# mappings.
self.nsmaps.append(_invert(nsmap))
# The currently active namespace prefixes have
# changed. Calculate the new mapping so it can be stored
# with all Tag objects created while these prefixes are in
# scope.
current_mapping = dict(self.active_namespace_prefixes[-1])
current_mapping.update(nsmap)
# We should not track un-prefixed namespaces as we can only hold one
# and it will be recognized as the default namespace by soupsieve,
# which may be confusing in some situations.
if "" in current_mapping:
del current_mapping[""]
self.active_namespace_prefixes.append(current_mapping)
# Also treat the namespace mapping as a set of attributes on the
# tag, so we can recreate it later.
for prefix, namespace in list(nsmap.items()):
attribute = NamespacedAttribute(
"xmlns", prefix, "http://www.w3.org/2000/xmlns/"
)
new_attrs[attribute] = namespace
# Namespaces are in play. Find any attributes that came in
# from lxml with namespaces attached to their names, and
        # turn them into NamespacedAttribute objects.
final_attrs: AttributeDict = self.attribute_dict_class()
for attr, value in list(new_attrs.items()):
namespace, attr = self._getNsTag(attr)
if namespace is None:
final_attrs[attr] = value
else:
nsprefix = self._prefix_for_namespace(namespace)
attr = NamespacedAttribute(nsprefix, attr, namespace)
final_attrs[attr] = value
namespace, tag = self._getNsTag(tag)
nsprefix = self._prefix_for_namespace(namespace)
self.soup.handle_starttag(
tag,
namespace,
nsprefix,
final_attrs,
namespaces=self.active_namespace_prefixes[-1],
)
def _prefix_for_namespace(
self, namespace: Optional[_NamespaceURL]
) -> Optional[_NamespacePrefix]:
"""Find the currently active prefix for the given namespace."""
if namespace is None:
return None
for inverted_nsmap in reversed(self.nsmaps):
if inverted_nsmap is not None and namespace in inverted_nsmap:
return inverted_nsmap[namespace]
return None
def end(self, name: str | bytes) -> None:
assert self.soup is not None
assert isinstance(name, str)
self.soup.endData()
namespace, name = self._getNsTag(name)
nsprefix = None
if namespace is not None:
for inverted_nsmap in reversed(self.nsmaps):
if inverted_nsmap is not None and namespace in inverted_nsmap:
nsprefix = inverted_nsmap[namespace]
break
self.soup.handle_endtag(name, nsprefix)
if len(self.nsmaps) > 1:
# This tag, or one of its parents, introduced a namespace
# mapping, so pop it off the stack.
out_of_scope_nsmap = self.nsmaps.pop()
if out_of_scope_nsmap is not None:
# This tag introduced a namespace mapping which is no
# longer in scope. Recalculate the currently active
# namespace prefixes.
self.active_namespace_prefixes.pop()
def pi(self, target: str, data: str) -> None:
assert self.soup is not None
self.soup.endData()
data = target + " " + data
self.soup.handle_data(data)
self.soup.endData(self.processing_instruction_class)
def data(self, data: str | bytes) -> None:
assert self.soup is not None
assert isinstance(data, str)
self.soup.handle_data(data)
def doctype(self, name: str, pubid: str, system: str) -> None:
assert self.soup is not None
self.soup.endData()
doctype_string = Doctype._string_for_name_and_ids(name, pubid, system)
self.soup.handle_data(doctype_string)
self.soup.endData(containerClass=Doctype)
def comment(self, text: str | bytes) -> None:
"Handle comments as Comment objects."
assert self.soup is not None
assert isinstance(text, str)
self.soup.endData()
self.soup.handle_data(text)
self.soup.endData(Comment)
def test_fragment_to_document(self, fragment: str) -> str:
"""See `TreeBuilder`."""
return '<?xml version="1.0" encoding="utf-8"?>\n%s' % fragment
class LXMLTreeBuilder(HTMLTreeBuilder, LXMLTreeBuilderForXML):
NAME: str = LXML
ALTERNATE_NAMES: Iterable[str] = ["lxml-html"]
features: Iterable[str] = list(ALTERNATE_NAMES) + [NAME, HTML, FAST, PERMISSIVE]
is_xml: bool = False
def default_parser(self, encoding: Optional[_Encoding]) -> _ParserOrParserClass:
return etree.HTMLParser
def feed(self, markup: _RawMarkup) -> None:
# We know self.soup is set by the time feed() is called.
assert self.soup is not None
encoding = self.soup.original_encoding
try:
self.parser = self.parser_for(encoding)
self.parser.feed(markup)
self.parser.close()
except (UnicodeDecodeError, LookupError, etree.ParserError) as e:
raise ParserRejectedMarkup(e)
def test_fragment_to_document(self, fragment: str) -> str:
"""See `TreeBuilder`."""
return "<html><body>%s</body></html>" % fragment

@@ -0,0 +1,338 @@
"""Integration code for CSS selectors using `Soup Sieve <https://facelessuser.github.io/soupsieve/>`_ (pypi: ``soupsieve``).
Acquire a `CSS` object through the `element.Tag.css` attribute of
the starting point of your CSS selector, or (if you want to run a
selector against the entire document) of the `BeautifulSoup` object
itself.
The main advantage of doing this instead of using ``soupsieve``
functions is that you don't need to keep passing the `element.Tag` to be
selected against, since the `CSS` object is permanently scoped to that
`element.Tag`.
"""
from __future__ import annotations
from types import ModuleType
from typing import (
Any,
cast,
Iterable,
Iterator,
Optional,
TYPE_CHECKING,
)
import warnings
from bs4._typing import _NamespaceMapping
if TYPE_CHECKING:
from soupsieve import SoupSieve
from bs4 import element
from bs4.element import ResultSet, Tag
soupsieve: Optional[ModuleType]
try:
import soupsieve
except ImportError:
soupsieve = None
warnings.warn(
"The soupsieve package is not installed. CSS selectors cannot be used."
)
class CSS(object):
"""A proxy object against the ``soupsieve`` library, to simplify its
CSS selector API.
You don't need to instantiate this class yourself; instead, use
`element.Tag.css`.
:param tag: All CSS selectors run by this object will use this as
their starting point.
:param api: An optional drop-in replacement for the ``soupsieve`` module,
intended for use in unit tests.
"""
def __init__(self, tag: element.Tag, api: Optional[ModuleType] = None):
if api is None:
api = soupsieve
if api is None:
raise NotImplementedError(
"Cannot execute CSS selectors because the soupsieve package is not installed."
)
self.api = api
self.tag = tag
def escape(self, ident: str) -> str:
"""Escape a CSS identifier.
This is a simple wrapper around `soupsieve.escape() <https://facelessuser.github.io/soupsieve/api/#soupsieveescape>`_. See the
documentation for that function for more information.
"""
if soupsieve is None:
raise NotImplementedError(
"Cannot escape CSS identifiers because the soupsieve package is not installed."
)
return cast(str, self.api.escape(ident))
def _ns(
self, ns: Optional[_NamespaceMapping], select: str
) -> Optional[_NamespaceMapping]:
"""Normalize a dictionary of namespaces."""
        if not isinstance(select, self.api.SoupSieve) and ns is None:
            # A precompiled pattern already carries its own namespace
            # context, which cannot be replaced, so a default mapping
            # is only supplied for string selectors.
            ns = self.tag._namespaces
return ns
def _rs(self, results: Iterable[Tag]) -> ResultSet[Tag]:
"""Normalize a list of results to a py:class:`ResultSet`.
A py:class:`ResultSet` is more consistent with the rest of
Beautiful Soup's API, and :py:meth:`ResultSet.__getattr__` has
a helpful error message if you try to treat a list of results
as a single result (a common mistake).
"""
# Import here to avoid circular import
from bs4 import ResultSet
return ResultSet(None, results)
def compile(
self,
select: str,
namespaces: Optional[_NamespaceMapping] = None,
flags: int = 0,
**kwargs: Any,
) -> SoupSieve:
"""Pre-compile a selector and return the compiled object.
        :param select: A CSS selector.
:param namespaces: A dictionary mapping namespace prefixes
used in the CSS selector to namespace URIs. By default,
Beautiful Soup will use the prefixes it encountered while
parsing the document.
:param flags: Flags to be passed into Soup Sieve's
`soupsieve.compile() <https://facelessuser.github.io/soupsieve/api/#soupsievecompile>`_ method.
:param kwargs: Keyword arguments to be passed into Soup Sieve's
`soupsieve.compile() <https://facelessuser.github.io/soupsieve/api/#soupsievecompile>`_ method.
:return: A precompiled selector object.
:rtype: soupsieve.SoupSieve
"""
return self.api.compile(select, self._ns(namespaces, select), flags, **kwargs)
def select_one(
self,
select: str,
namespaces: Optional[_NamespaceMapping] = None,
flags: int = 0,
**kwargs: Any,
) -> element.Tag | None:
"""Perform a CSS selection operation on the current Tag and return the
first result, if any.
This uses the Soup Sieve library. For more information, see
that library's documentation for the `soupsieve.select_one() <https://facelessuser.github.io/soupsieve/api/#soupsieveselect_one>`_ method.
        :param select: A CSS selector.
:param namespaces: A dictionary mapping namespace prefixes
used in the CSS selector to namespace URIs. By default,
Beautiful Soup will use the prefixes it encountered while
parsing the document.
:param flags: Flags to be passed into Soup Sieve's
`soupsieve.select_one() <https://facelessuser.github.io/soupsieve/api/#soupsieveselect_one>`_ method.
:param kwargs: Keyword arguments to be passed into Soup Sieve's
`soupsieve.select_one() <https://facelessuser.github.io/soupsieve/api/#soupsieveselect_one>`_ method.
"""
return self.api.select_one(
select, self.tag, self._ns(namespaces, select), flags, **kwargs
)
def select(
self,
select: str,
namespaces: Optional[_NamespaceMapping] = None,
limit: int = 0,
flags: int = 0,
**kwargs: Any,
) -> ResultSet[element.Tag]:
"""Perform a CSS selection operation on the current `element.Tag`.
This uses the Soup Sieve library. For more information, see
that library's documentation for the `soupsieve.select() <https://facelessuser.github.io/soupsieve/api/#soupsieveselect>`_ method.
        :param select: A CSS selector.
:param namespaces: A dictionary mapping namespace prefixes
used in the CSS selector to namespace URIs. By default,
Beautiful Soup will pass in the prefixes it encountered while
parsing the document.
:param limit: After finding this number of results, stop looking.
:param flags: Flags to be passed into Soup Sieve's
`soupsieve.select() <https://facelessuser.github.io/soupsieve/api/#soupsieveselect>`_ method.
:param kwargs: Keyword arguments to be passed into Soup Sieve's
`soupsieve.select() <https://facelessuser.github.io/soupsieve/api/#soupsieveselect>`_ method.
"""
if limit is None:
limit = 0
return self._rs(
self.api.select(
select, self.tag, self._ns(namespaces, select), limit, flags, **kwargs
)
)
def iselect(
self,
select: str,
namespaces: Optional[_NamespaceMapping] = None,
limit: int = 0,
flags: int = 0,
**kwargs: Any,
) -> Iterator[element.Tag]:
"""Perform a CSS selection operation on the current `element.Tag`.
This uses the Soup Sieve library. For more information, see
that library's documentation for the `soupsieve.iselect()
<https://facelessuser.github.io/soupsieve/api/#soupsieveiselect>`_
method. It is the same as select(), but it returns a generator
instead of a list.
        :param select: A string containing a CSS selector.
:param namespaces: A dictionary mapping namespace prefixes
used in the CSS selector to namespace URIs. By default,
Beautiful Soup will pass in the prefixes it encountered while
parsing the document.
:param limit: After finding this number of results, stop looking.
:param flags: Flags to be passed into Soup Sieve's
`soupsieve.iselect() <https://facelessuser.github.io/soupsieve/api/#soupsieveiselect>`_ method.
:param kwargs: Keyword arguments to be passed into Soup Sieve's
`soupsieve.iselect() <https://facelessuser.github.io/soupsieve/api/#soupsieveiselect>`_ method.
"""
return self.api.iselect(
select, self.tag, self._ns(namespaces, select), limit, flags, **kwargs
)
def closest(
self,
select: str,
namespaces: Optional[_NamespaceMapping] = None,
flags: int = 0,
**kwargs: Any,
) -> Optional[element.Tag]:
"""Find the `element.Tag` closest to this one that matches the given selector.
This uses the Soup Sieve library. For more information, see
that library's documentation for the `soupsieve.closest()
<https://facelessuser.github.io/soupsieve/api/#soupsieveclosest>`_
method.
        :param select: A string containing a CSS selector.
:param namespaces: A dictionary mapping namespace prefixes
used in the CSS selector to namespace URIs. By default,
Beautiful Soup will pass in the prefixes it encountered while
parsing the document.
:param flags: Flags to be passed into Soup Sieve's
`soupsieve.closest() <https://facelessuser.github.io/soupsieve/api/#soupsieveclosest>`_ method.
:param kwargs: Keyword arguments to be passed into Soup Sieve's
`soupsieve.closest() <https://facelessuser.github.io/soupsieve/api/#soupsieveclosest>`_ method.
"""
return self.api.closest(
select, self.tag, self._ns(namespaces, select), flags, **kwargs
)
def match(
self,
select: str,
namespaces: Optional[_NamespaceMapping] = None,
flags: int = 0,
**kwargs: Any,
) -> bool:
"""Check whether or not this `element.Tag` matches the given CSS selector.
This uses the Soup Sieve library. For more information, see
that library's documentation for the `soupsieve.match()
<https://facelessuser.github.io/soupsieve/api/#soupsievematch>`_
method.
        :param select: A CSS selector.
:param namespaces: A dictionary mapping namespace prefixes
used in the CSS selector to namespace URIs. By default,
Beautiful Soup will pass in the prefixes it encountered while
parsing the document.
:param flags: Flags to be passed into Soup Sieve's
`soupsieve.match()
<https://facelessuser.github.io/soupsieve/api/#soupsievematch>`_
method.
:param kwargs: Keyword arguments to be passed into SoupSieve's
`soupsieve.match()
<https://facelessuser.github.io/soupsieve/api/#soupsievematch>`_
method.
"""
return cast(
bool,
self.api.match(
select, self.tag, self._ns(namespaces, select), flags, **kwargs
),
)
def filter(
self,
select: str,
namespaces: Optional[_NamespaceMapping] = None,
flags: int = 0,
**kwargs: Any,
) -> ResultSet[element.Tag]:
"""Filter this `element.Tag`'s direct children based on the given CSS selector.
This uses the Soup Sieve library. It works the same way as
passing a `element.Tag` into that library's `soupsieve.filter()
<https://facelessuser.github.io/soupsieve/api/#soupsievefilter>`_
method. For more information, see the documentation for
`soupsieve.filter()
<https://facelessuser.github.io/soupsieve/api/#soupsievefilter>`_.
        :param select: A CSS selector.
        :param namespaces: A dictionary mapping namespace prefixes
used in the CSS selector to namespace URIs. By default,
Beautiful Soup will pass in the prefixes it encountered while
parsing the document.
:param flags: Flags to be passed into Soup Sieve's
`soupsieve.filter()
<https://facelessuser.github.io/soupsieve/api/#soupsievefilter>`_
method.
:param kwargs: Keyword arguments to be passed into SoupSieve's
`soupsieve.filter()
<https://facelessuser.github.io/soupsieve/api/#soupsievefilter>`_
method.
"""
return self._rs(
self.api.filter(
select, self.tag, self._ns(namespaces, select), flags, **kwargs
)
)
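# Illustrative sketch (editor's addition, not part of the original module):
# typical use of the CSS proxy through Tag.css. Assumes soupsieve is
# installed.
def _css_usage_sketch() -> None:
    from bs4 import BeautifulSoup

    soup = BeautifulSoup('<div><p id="a">one</p><p>two</p></div>', "html.parser")
    print(soup.css.select("div p"))    # ResultSet of both <p> tags
    print(soup.css.select_one("p#a"))  # only the first match
    print(soup.p.css.match("#a"))      # True: selectors are scoped to the tag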

File diff suppressed because it is too large

@@ -0,0 +1,268 @@
"""Diagnostic functions, mainly for use when doing tech support."""
# Use of this source code is governed by the MIT license.
__license__ = "MIT"
import cProfile
from io import BytesIO
from html.parser import HTMLParser
import bs4
from bs4 import BeautifulSoup, __version__
from bs4.builder import builder_registry
from typing import (
Any,
IO,
List,
Optional,
Tuple,
TYPE_CHECKING,
)
if TYPE_CHECKING:
from bs4._typing import _IncomingMarkup
import pstats
import random
import tempfile
import time
import traceback
import sys
def diagnose(data: "_IncomingMarkup") -> None:
"""Diagnostic suite for isolating common problems.
:param data: Some markup that needs to be explained.
:return: None; diagnostics are printed to standard output.
"""
print(("Diagnostic running on Beautiful Soup %s" % __version__))
print(("Python version %s" % sys.version))
basic_parsers = ["html.parser", "html5lib", "lxml"]
    for name in list(basic_parsers):
        # Iterate over a copy: parsers with no installed builder are
        # removed from the original list as we go.
        for builder in builder_registry.builders:
            if name in builder.features:
                break
        else:
            basic_parsers.remove(name)
            print(
                ("I noticed that %s is not installed. Installing it may help." % name)
            )
if "lxml" in basic_parsers:
basic_parsers.append("lxml-xml")
try:
from lxml import etree
print(("Found lxml version %s" % ".".join(map(str, etree.LXML_VERSION))))
except ImportError:
print("lxml is not installed or couldn't be imported.")
if "html5lib" in basic_parsers:
try:
import html5lib
print(("Found html5lib version %s" % html5lib.__version__))
except ImportError:
print("html5lib is not installed or couldn't be imported.")
if hasattr(data, "read"):
data = data.read()
for parser in basic_parsers:
print(("Trying to parse your markup with %s" % parser))
success = False
try:
soup = BeautifulSoup(data, features=parser)
success = True
except Exception:
print(("%s could not parse the markup." % parser))
traceback.print_exc()
if success:
print(("Here's what %s did with the markup:" % parser))
print((soup.prettify()))
print(("-" * 80))
def lxml_trace(data: "_IncomingMarkup", html: bool = True, **kwargs: Any) -> None:
"""Print out the lxml events that occur during parsing.
This lets you see how lxml parses a document when no Beautiful
Soup code is running. You can use this to determine whether
an lxml-specific problem is in Beautiful Soup's lxml tree builders
or in lxml itself.
:param data: Some markup.
:param html: If True, markup will be parsed with lxml's HTML parser.
if False, lxml's XML parser will be used.
"""
from lxml import etree
recover = kwargs.pop("recover", True)
if isinstance(data, str):
data = data.encode("utf8")
    if isinstance(data, bytes):
        reader: IO = BytesIO(data)
    else:
        # Assume the caller passed a file-like object that iterparse
        # can read from directly.
        reader = data
    for event, element in etree.iterparse(reader, html=html, recover=recover, **kwargs):
print(("%s, %4s, %s" % (event, element.tag, element.text)))
class AnnouncingParser(HTMLParser):
"""Subclass of HTMLParser that announces parse events, without doing
anything else.
You can use this to get a picture of how html.parser sees a given
document. The easiest way to do this is to call `htmlparser_trace`.
"""
def _p(self, s: str) -> None:
print(s)
def handle_starttag(
self,
name: str,
attrs: List[Tuple[str, Optional[str]]],
handle_empty_element: bool = True,
) -> None:
self._p(f"{name} {attrs} START")
def handle_endtag(self, name: str, check_already_closed: bool = True) -> None:
self._p("%s END" % name)
def handle_data(self, data: str) -> None:
self._p("%s DATA" % data)
def handle_charref(self, name: str) -> None:
self._p("%s CHARREF" % name)
def handle_entityref(self, name: str) -> None:
self._p("%s ENTITYREF" % name)
def handle_comment(self, data: str) -> None:
self._p("%s COMMENT" % data)
def handle_decl(self, data: str) -> None:
self._p("%s DECL" % data)
def unknown_decl(self, data: str) -> None:
self._p("%s UNKNOWN-DECL" % data)
def handle_pi(self, data: str) -> None:
self._p("%s PI" % data)
def htmlparser_trace(data: str) -> None:
"""Print out the HTMLParser events that occur during parsing.
This lets you see how HTMLParser parses a document when no
Beautiful Soup code is running.
:param data: Some markup.
"""
parser = AnnouncingParser()
parser.feed(data)
_vowels: str = "aeiou"
_consonants: str = "bcdfghjklmnpqrstvwxyz"
def rword(length: int = 5) -> str:
"""Generate a random word-like string.
:meta private:
"""
s = ""
for i in range(length):
if i % 2 == 0:
t = _consonants
else:
t = _vowels
s += random.choice(t)
return s
def rsentence(length: int = 4) -> str:
"""Generate a random sentence-like string.
:meta private:
"""
return " ".join(rword(random.randint(4, 9)) for i in range(length))
def rdoc(num_elements: int = 1000) -> str:
"""Randomly generate an invalid HTML document.
:meta private:
"""
tag_names = ["p", "div", "span", "i", "b", "script", "table"]
elements = []
for i in range(num_elements):
choice = random.randint(0, 3)
if choice == 0:
# New tag.
tag_name = random.choice(tag_names)
elements.append("<%s>" % tag_name)
elif choice == 1:
elements.append(rsentence(random.randint(1, 4)))
elif choice == 2:
# Close a tag.
tag_name = random.choice(tag_names)
elements.append("</%s>" % tag_name)
return "<html>" + "\n".join(elements) + "</html>"
def benchmark_parsers(num_elements: int = 100000) -> None:
"""Very basic head-to-head performance benchmark."""
print(("Comparative parser benchmark on Beautiful Soup %s" % __version__))
data = rdoc(num_elements)
print(("Generated a large invalid HTML document (%d bytes)." % len(data)))
for parser_name in ["lxml", ["lxml", "html"], "html5lib", "html.parser"]:
success = False
try:
a = time.time()
BeautifulSoup(data, parser_name)
b = time.time()
success = True
except Exception:
print(("%s could not parse the markup." % parser_name))
traceback.print_exc()
if success:
print(("BS4+%s parsed the markup in %.2fs." % (parser_name, b - a)))
from lxml import etree
a = time.time()
etree.HTML(data)
b = time.time()
print(("Raw lxml parsed the markup in %.2fs." % (b - a)))
import html5lib
parser = html5lib.HTMLParser()
a = time.time()
parser.parse(data)
b = time.time()
print(("Raw html5lib parsed the markup in %.2fs." % (b - a)))
def profile(num_elements: int = 100000, parser: str = "lxml") -> None:
"""Use Python's profiler on a randomly generated document."""
filehandle = tempfile.NamedTemporaryFile()
filename = filehandle.name
data = rdoc(num_elements)
vars = dict(bs4=bs4, data=data, parser=parser)
cProfile.runctx("bs4.BeautifulSoup(data, parser)", vars, vars, filename)
stats = pstats.Stats(filename)
# stats.strip_dirs()
stats.sort_stats("cumulative")
stats.print_stats("_html5lib|bs4", 50)
# If this file is run as a script, standard input is diagnosed.
if __name__ == "__main__":
diagnose(sys.stdin.read())

File diff suppressed because it is too large

@@ -0,0 +1,28 @@
"""Exceptions defined by Beautiful Soup itself."""
from typing import Union
class StopParsing(Exception):
"""Exception raised by a TreeBuilder if it's unable to continue parsing."""
class FeatureNotFound(ValueError):
"""Exception raised by the BeautifulSoup constructor if no parser with the
requested features is found.
"""
class ParserRejectedMarkup(Exception):
"""An Exception to be raised when the underlying parser simply
refuses to parse the given markup.
"""
def __init__(self, message_or_exception: Union[str, Exception]):
"""Explain why the parser rejected the given markup, either
with a textual explanation or another exception.
"""
if isinstance(message_or_exception, Exception):
e = message_or_exception
message_or_exception = "%s: %s" % (e.__class__.__name__, str(e))
super(ParserRejectedMarkup, self).__init__(message_or_exception)
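# Illustrative sketch (editor's addition, not part of the original module):
# ParserRejectedMarkup can wrap either a plain message or another exception.
def _exception_sketch() -> None:
    try:
        raise ParserRejectedMarkup(ValueError("bad doctype"))
    except ParserRejectedMarkup as e:
        print(e)  # ValueError: bad doctype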

@@ -0,0 +1,755 @@
from __future__ import annotations
from collections import defaultdict
import re
from typing import (
Any,
Callable,
cast,
Dict,
Iterator,
Iterable,
List,
Optional,
Sequence,
Type,
Union,
)
import warnings
from bs4._deprecation import _deprecated
from bs4.element import (
AttributeDict,
NavigableString,
PageElement,
ResultSet,
Tag,
)
from bs4._typing import (
_AtMostOneElement,
_AttributeValue,
_OneElement,
_PageElementMatchFunction,
_QueryResults,
_RawAttributeValues,
_RegularExpressionProtocol,
_StrainableAttribute,
_StrainableElement,
_StrainableString,
_StringMatchFunction,
_TagMatchFunction,
)
class ElementFilter(object):
"""`ElementFilter` encapsulates the logic necessary to decide:
1. whether a `PageElement` (a `Tag` or a `NavigableString`) matches a
user-specified query.
2. whether a given sequence of markup found during initial parsing
should be turned into a `PageElement` at all, or simply discarded.
The base class is the simplest `ElementFilter`. By default, it
matches everything and allows all markup to become `PageElement`
objects. You can make it more selective by passing in a
user-defined match function, or defining a subclass.
Most users of Beautiful Soup will never need to use
`ElementFilter`, or its more capable subclass
`SoupStrainer`. Instead, they will use methods like
:py:meth:`Tag.find`, which will convert their arguments into
`SoupStrainer` objects and run them against the tree.
However, if you find yourself wanting to treat the arguments to
Beautiful Soup's find_*() methods as first-class objects, those
objects will be `SoupStrainer` objects. You can create them
yourself and then make use of functions like
`ElementFilter.filter()`.
"""
match_function: Optional[_PageElementMatchFunction]
def __init__(self, match_function: Optional[_PageElementMatchFunction] = None):
"""Pass in a match function to easily customize the behavior of
`ElementFilter.match` without needing to subclass.
:param match_function: A function that takes a `PageElement`
and returns `True` if that `PageElement` matches some criteria.
"""
self.match_function = match_function
@property
def includes_everything(self) -> bool:
"""Does this `ElementFilter` obviously include everything? If so,
the filter process can be made much faster.
The `ElementFilter` might turn out to include everything even
if this returns `False`, but it won't include everything in an
obvious way.
The base `ElementFilter` implementation includes things based on
the match function, so includes_everything is only true if
there is no match function.
"""
return not self.match_function
@property
def excludes_everything(self) -> bool:
"""Does this `ElementFilter` obviously exclude everything? If
so, Beautiful Soup will issue a warning if you try to use it
when parsing a document.
The `ElementFilter` might turn out to exclude everything even
if this returns `False`, but it won't exclude everything in an
obvious way.
The base `ElementFilter` implementation excludes things based
on a match function we can't inspect, so excludes_everything
is always false.
"""
return False
    def match(self, element: PageElement, _known_rules: bool = False) -> bool:
"""Does the given PageElement match the rules set down by this
ElementFilter?
The base implementation delegates to the function passed in to
the constructor.
:param _known_rules: Defined for compatibility with
SoupStrainer._match(). Used more for consistency than because
we need the performance optimization.
"""
if not _known_rules and self.includes_everything:
return True
if not self.match_function:
return True
return self.match_function(element)
def filter(self, generator: Iterator[PageElement]) -> Iterator[_OneElement]:
"""The most generic search method offered by Beautiful Soup.
Acts like Python's built-in `filter`, using
`ElementFilter.match` as the filtering function.
"""
# If there are no rules at all, don't bother filtering. Let
# anything through.
if self.includes_everything:
for i in generator:
yield i
while True:
try:
i = next(generator)
except StopIteration:
break
if i:
if self.match(i, _known_rules=True):
yield cast("_OneElement", i)
def find(self, generator: Iterator[PageElement]) -> _AtMostOneElement:
"""A lower-level equivalent of :py:meth:`Tag.find`.
You can pass in your own generator for iterating over
`PageElement` objects. The first one that matches this
`ElementFilter` will be returned.
:param generator: A way of iterating over `PageElement`
objects.
"""
for match in self.filter(generator):
return match
return None
def find_all(
self, generator: Iterator[PageElement], limit: Optional[int] = None
) -> _QueryResults:
"""A lower-level equivalent of :py:meth:`Tag.find_all`.
You can pass in your own generator for iterating over
`PageElement` objects. Only elements that match this
`ElementFilter` will be returned in the :py:class:`ResultSet`.
:param generator: A way of iterating over `PageElement`
objects.
:param limit: Stop looking after finding this many results.
"""
results: _QueryResults = ResultSet(self)
for match in self.filter(generator):
results.append(match)
if limit is not None and len(results) >= limit:
break
return results
def allow_tag_creation(
self, nsprefix: Optional[str], name: str, attrs: Optional[_RawAttributeValues]
) -> bool:
"""Based on the name and attributes of a tag, see whether this
`ElementFilter` will allow a `Tag` object to even be created.
By default, all tags are parsed. To change this, subclass
`ElementFilter`.
:param name: The name of the prospective tag.
:param attrs: The attributes of the prospective tag.
"""
return True
def allow_string_creation(self, string: str) -> bool:
"""Based on the content of a string, see whether this
`ElementFilter` will allow a `NavigableString` object based on
this string to be added to the parse tree.
By default, all strings are processed into `NavigableString`
objects. To change this, subclass `ElementFilter`.
        :param string: The string under consideration.
"""
return True
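# Illustrative sketch (editor's addition, not part of the original module):
# a custom ElementFilter built from a match function, run against a tree.
def _element_filter_sketch() -> None:
    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<p>one</p><p></p><p>two</p>", "html.parser")
    # Match only <p> tags that contain some text.
    non_empty = ElementFilter(
        lambda el: getattr(el, "name", None) == "p" and bool(el.get_text())
    )
    print(non_empty.find_all(soup.descendants))  # [<p>one</p>, <p>two</p>]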
class MatchRule(object):
"""Each MatchRule encapsulates the logic behind a single argument
passed in to one of the Beautiful Soup find* methods.
"""
string: Optional[str]
pattern: Optional[_RegularExpressionProtocol]
present: Optional[bool]
exclude_everything: Optional[bool]
# TODO-TYPING: All MatchRule objects also have an attribute
# ``function``, but the type of the function depends on the
# subclass.
def __init__(
self,
string: Optional[Union[str, bytes]] = None,
pattern: Optional[_RegularExpressionProtocol] = None,
function: Optional[Callable] = None,
present: Optional[bool] = None,
exclude_everything: Optional[bool] = None
):
if isinstance(string, bytes):
string = string.decode("utf8")
self.string = string
if isinstance(pattern, bytes):
self.pattern = re.compile(pattern.decode("utf8"))
elif isinstance(pattern, str):
self.pattern = re.compile(pattern)
else:
self.pattern = pattern
self.function = function
self.present = present
self.exclude_everything = exclude_everything
values = [
x
for x in (self.string, self.pattern, self.function, self.present, self.exclude_everything)
if x is not None
]
if len(values) == 0:
raise ValueError(
"Either string, pattern, function, present, or exclude_everything must be provided."
)
if len(values) > 1:
raise ValueError(
"At most one of string, pattern, function, present, and exclude_everything must be provided."
)
def _base_match(self, string: Optional[str]) -> Optional[bool]:
"""Run the 'cheap' portion of a match, trying to get an answer without
calling a potentially expensive custom function.
:return: True or False if we have a (positive or negative)
match; None if we need to keep trying.
"""
# self.exclude_everything matches nothing.
if self.exclude_everything:
return False
# self.present==True matches everything except None.
if self.present is True:
return string is not None
# self.present==False matches _only_ None.
if self.present is False:
return string is None
# self.string does an exact string match.
if self.string is not None:
# print(f"{self.string} ?= {string}")
return self.string == string
# self.pattern does a regular expression search.
if self.pattern is not None:
# print(f"{self.pattern} ?~ {string}")
if string is None:
return False
return self.pattern.search(string) is not None
return None
def matches_string(self, string: Optional[str]) -> bool:
_base_result = self._base_match(string)
if _base_result is not None:
# No need to invoke the test function.
return _base_result
if self.function is not None and not self.function(string):
# print(f"{self.function}({string}) == False")
return False
return True
    def __repr__(self) -> str:
        cls = type(self).__name__
        return f"<{cls} string={self.string} pattern={self.pattern} function={self.function} present={self.present} exclude_everything={self.exclude_everything}>"
    def __eq__(self, other: Any) -> bool:
        return (
            isinstance(other, MatchRule)
            and self.string == other.string
            and self.pattern == other.pattern
            and self.function == other.function
            and self.present == other.present
            and self.exclude_everything == other.exclude_everything
        )
class TagNameMatchRule(MatchRule):
"""A MatchRule implementing the rules for matches against tag name."""
function: Optional[_TagMatchFunction]
def matches_tag(self, tag: Tag) -> bool:
base_value = self._base_match(tag.name)
if base_value is not None:
return base_value
# The only remaining possibility is that the match is determined
# by a function call. Call the function.
function = cast(_TagMatchFunction, self.function)
if function(tag):
return True
return False
class AttributeValueMatchRule(MatchRule):
"""A MatchRule implementing the rules for matches against attribute value."""
function: Optional[_StringMatchFunction]
class StringMatchRule(MatchRule):
"""A MatchRule implementing the rules for matches against a NavigableString."""
function: Optional[_StringMatchFunction]
class SoupStrainer(ElementFilter):
"""The `ElementFilter` subclass used internally by Beautiful Soup.
A `SoupStrainer` encapsulates the logic necessary to perform the
kind of matches supported by methods such as
:py:meth:`Tag.find`. `SoupStrainer` objects are primarily created
internally, but you can create one yourself and pass it in as
``parse_only`` to the `BeautifulSoup` constructor, to parse a
subset of a large document.
Internally, `SoupStrainer` objects work by converting the
constructor arguments into `MatchRule` objects. Incoming
tags/markup are matched against those rules.
:param name: One or more restrictions on the tags found in a document.
:param attrs: A dictionary that maps attribute names to
restrictions on tags that use those attributes.
:param string: One or more restrictions on the strings found in a
document.
:param kwargs: A dictionary that maps attribute names to restrictions
on tags that use those attributes. These restrictions are additive to
any specified in ``attrs``.
"""
name_rules: List[TagNameMatchRule]
attribute_rules: Dict[str, List[AttributeValueMatchRule]]
string_rules: List[StringMatchRule]
def __init__(
self,
name: Optional[_StrainableElement] = None,
attrs: Dict[str, _StrainableAttribute] = {},
string: Optional[_StrainableString] = None,
**kwargs: _StrainableAttribute,
):
if string is None and "text" in kwargs:
string = cast(Optional[_StrainableString], kwargs.pop("text"))
warnings.warn(
"As of version 4.11.0, the 'text' argument to the SoupStrainer constructor is deprecated. Use 'string' instead.",
DeprecationWarning,
stacklevel=2,
)
if name is None and not attrs and not string and not kwargs:
# Special case for backwards compatibility. Instantiating
# a SoupStrainer with no arguments whatsoever gets you one
# that matches all Tags, and only Tags.
self.name_rules = [TagNameMatchRule(present=True)]
else:
self.name_rules = cast(
List[TagNameMatchRule], list(self._make_match_rules(name, TagNameMatchRule))
)
self.attribute_rules = defaultdict(list)
if not isinstance(attrs, dict):
# Passing something other than a dictionary as attrs is
# sugar for matching that thing against the 'class'
# attribute.
attrs = {"class": attrs}
for attrdict in attrs, kwargs:
for attr, value in attrdict.items():
if attr == "class_" and attrdict is kwargs:
# If you pass in 'class_' as part of kwargs, it's
# because class is a Python reserved word. If you
# pass it in as part of the attrs dict, it's
# because you really are looking for an attribute
# called 'class_'.
attr = "class"
if value is None:
value = False
for rule_obj in self._make_match_rules(value, AttributeValueMatchRule):
self.attribute_rules[attr].append(
cast(AttributeValueMatchRule, rule_obj)
)
self.string_rules = cast(
List[StringMatchRule], list(self._make_match_rules(string, StringMatchRule))
)
#: DEPRECATED 4.13.0: You shouldn't need to check this under
#: any name (.string or .text), and if you do, you're probably
#: not taking into account all of the types of values this
#: variable might have. Look at the .string_rules list instead.
self.__string = string
@property
def includes_everything(self) -> bool:
"""Check whether the provided rules will obviously include
everything. (They might include everything even if this returns `False`,
but not in an obvious way.)
"""
return not self.name_rules and not self.string_rules and not self.attribute_rules
@property
def excludes_everything(self) -> bool:
"""Check whether the provided rules will obviously exclude
everything. (They might exclude everything even if this returns `False`,
but not in an obvious way.)
"""
if (self.string_rules and (self.name_rules or self.attribute_rules)):
# This is self-contradictory, so the rules exclude everything.
return True
# If there's a rule that ended up treated as an "exclude everything"
# rule due to creating a logical inconsistency, then the rules
# exclude everything.
if any(x.exclude_everything for x in self.string_rules):
return True
if any(x.exclude_everything for x in self.name_rules):
return True
for ruleset in self.attribute_rules.values():
if any(x.exclude_everything for x in ruleset):
return True
return False
@property
def string(self) -> Optional[_StrainableString]:
":meta private:"
warnings.warn(
"Access to deprecated property string. (Look at .string_rules instead) -- Deprecated since version 4.13.0.",
DeprecationWarning,
stacklevel=2,
)
return self.__string
@property
def text(self) -> Optional[_StrainableString]:
":meta private:"
warnings.warn(
"Access to deprecated property text. (Look at .string_rules instead) -- Deprecated since version 4.13.0.",
DeprecationWarning,
stacklevel=2,
)
return self.__string
def __repr__(self) -> str:
return f"<{self.__class__.__name__} name={self.name_rules} attrs={self.attribute_rules} string={self.string_rules}>"
@classmethod
def _make_match_rules(
cls,
obj: Optional[Union[_StrainableElement, _StrainableAttribute]],
rule_class: Type[MatchRule],
) -> Iterator[MatchRule]:
"""Convert a vaguely-specific 'object' into one or more well-defined
`MatchRule` objects.
:param obj: Some kind of object that corresponds to one or more
matching rules.
:param rule_class: Create instances of this `MatchRule` subclass.
"""
if obj is None:
return
if isinstance(obj, (str, bytes)):
yield rule_class(string=obj)
elif isinstance(obj, bool):
yield rule_class(present=obj)
elif callable(obj):
yield rule_class(function=obj)
elif isinstance(obj, _RegularExpressionProtocol):
yield rule_class(pattern=obj)
elif hasattr(obj, "__iter__"):
if not obj:
# The attribute is being matched against the null set,
# which means it should exclude everything.
yield rule_class(exclude_everything=True)
for o in obj:
if not isinstance(o, (bytes, str)) and hasattr(o, "__iter__"):
# This is almost certainly the user's
# mistake. This list contains another list, which
# opens up the possibility of infinite
# self-reference. In the interests of avoiding
# infinite recursion, we'll treat this as an
# impossible match and issue a rule that excludes
# everything, rather than looking inside.
warnings.warn(
f"Ignoring nested list {o} to avoid the possibility of infinite recursion.",
stacklevel=5,
)
yield rule_class(exclude_everything=True)
continue
for x in cls._make_match_rules(o, rule_class):
yield x
else:
yield rule_class(string=str(obj))
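# A rough doctest-style sketch of the conversion (illustrative only):
# a list yields one rule per item, and an empty iterable yields a single
# rule that excludes everything.
#
#     >>> [r.string for r in SoupStrainer._make_match_rules(["a", "b"], TagNameMatchRule)]
#     ['a', 'b']
#     >>> [r.exclude_everything for r in SoupStrainer._make_match_rules([], TagNameMatchRule)]
#     [True]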
def matches_tag(self, tag: Tag) -> bool:
"""Do the rules of this `SoupStrainer` trigger a match against the
given `Tag`?
If the `SoupStrainer` has any `TagNameMatchRule`, at least one
must match the `Tag` or its `Tag.name`.
If there are any `AttributeValueMatchRule` for a given
attribute, at least one of them must match the attribute
value.
If there are any `StringMatchRule`, at least one must match,
but a `SoupStrainer` that *only* contains `StringMatchRule`
cannot match a `Tag`, only a `NavigableString`.
"""
# If there are no rules at all, let anything through.
#if self.includes_everything:
# return True
# String rules cannot match a Tag on their own.
if not self.name_rules and not self.attribute_rules:
return False
# Optimization for a very common case where the user is
# searching for a tag with one specific name, and we're
# looking at a tag with a different name.
if (
not tag.prefix
and len(self.name_rules) == 1
and self.name_rules[0].string is not None
and tag.name != self.name_rules[0].string
):
return False
# If there are name rules, at least one must match. It can
# match either the Tag object itself or the prefixed name of
# the tag.
prefixed_name = None
if tag.prefix:
prefixed_name = f"{tag.prefix}:{tag.name}"
if self.name_rules:
name_matches = False
for rule in self.name_rules:
# attrs = " ".join(
# [f"{k}={v}" for k, v in sorted(tag.attrs.items())]
# )
# print(f"Testing <{tag.name} {attrs}>{tag.string}</{tag.name}> against {rule}")
if rule.matches_tag(tag) or (
prefixed_name is not None and rule.matches_string(prefixed_name)
):
name_matches = True
break
if not name_matches:
return False
# If there are attribute rules for a given attribute, at least
# one of them must match. If there are rules for multiple
# attributes, each attribute must have at least one match.
for attr, rules in self.attribute_rules.items():
attr_value = tag.get(attr, None)
this_attr_match = self._attribute_match(attr_value, rules)
if not this_attr_match:
return False
# If there are string rules, at least one must match.
if self.string_rules:
_str = tag.string
if _str is None:
return False
if not self.matches_any_string_rule(_str):
return False
return True
def _attribute_match(
self,
attr_value: Optional[_AttributeValue],
rules: Iterable[AttributeValueMatchRule],
) -> bool:
attr_values: Sequence[Optional[str]]
if isinstance(attr_value, list):
attr_values = attr_value
else:
attr_values = [cast(str, attr_value)]
def _match_attribute_value_helper(attr_values: Sequence[Optional[str]]) -> bool:
for rule in rules:
for attr_value in attr_values:
if rule.matches_string(attr_value):
return True
return False
this_attr_match = _match_attribute_value_helper(attr_values)
if not this_attr_match and len(attr_values) > 1:
# This cast converts Optional[str] to plain str.
#
# We know if there's more than one value, there can't be
# any None in the list, because Beautiful Soup never uses
# None as a value of a multi-valued attribute, and if None
# is passed in as attr_value, it's turned into a list with
# a single element (thus len(attr_values) > 1 fails).
attr_values = cast(Sequence[str], attr_values)
# Try again but treat the attribute value
# as a single string.
joined_attr_value = " ".join(attr_values)
this_attr_match = _match_attribute_value_helper([joined_attr_value])
return this_attr_match
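# A rough doctest-style sketch of the joined-value retry (illustrative only):
# an exact-string rule fails against each token of a multi-valued class
# attribute, then succeeds when the tokens are joined with spaces.
#
#     >>> from bs4 import BeautifulSoup
#     >>> soup = BeautifulSoup('<p class="body strikeout"></p>', "html.parser")
#     >>> soup.find_all("p", class_="body strikeout")
#     [<p class="body strikeout"></p>]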
def allow_tag_creation(
self, nsprefix: Optional[str], name: str, attrs: Optional[_RawAttributeValues]
) -> bool:
"""Based on the name and attributes of a tag, see whether this
`SoupStrainer` will allow a `Tag` object to even be created.
:param name: The name of the prospective tag.
:param attrs: The attributes of the prospective tag.
"""
if self.string_rules:
# A SoupStrainer that has string rules can't be used to
# manage tag creation, because the string rule can't be
# evaluated until after the tag and all of its contents
# have been parsed.
return False
prefixed_name = None
if nsprefix:
prefixed_name = f"{nsprefix}:{name}"
if self.name_rules:
# At least one name rule must match.
name_match = False
for rule in self.name_rules:
for x in name, prefixed_name:
if x is not None:
if rule.matches_string(x):
name_match = True
break
if not name_match:
return False
# For each attribute that has rules, at least one rule must
# match.
if attrs is None:
attrs = AttributeDict()
for attr, rules in self.attribute_rules.items():
attr_value = attrs.get(attr)
if not self._attribute_match(attr_value, rules):
return False
return True
def allow_string_creation(self, string: str) -> bool:
"""Based on the content of a markup string, see whether this
`SoupStrainer` will allow it to be instantiated as a
`NavigableString` object, or whether it should be ignored.
"""
if self.name_rules or self.attribute_rules:
# A SoupStrainer that has name or attribute rules won't
# match any strings; it's designed to match tags with
# certain properties.
return False
if not self.string_rules:
# A SoupStrainer with no string rules will match
# all strings.
return True
if not self.matches_any_string_rule(string):
return False
return True
def matches_any_string_rule(self, string: str) -> bool:
"""See whether the content of a string matches any of
this `SoupStrainer`'s string rules.
"""
if not self.string_rules:
return True
for string_rule in self.string_rules:
if string_rule.matches_string(string):
return True
return False
def match(self, element: PageElement, _known_rules: bool=False) -> bool:
"""Does the given `PageElement` match the rules set down by this
`SoupStrainer`?
The find_* methods rely heavily on this method to find matches.
:param element: A `PageElement`.
:param _known_rules: Set to true in the common case where
we already checked and found at least one rule in this SoupStrainer
that might exclude a PageElement. Without this, we need
to check .includes_everything every time, just to be safe.
:return: `True` if the element matches this `SoupStrainer`'s rules; `False` otherwise.
"""
# If there are no rules at all, let anything through.
if not _known_rules and self.includes_everything:
return True
if isinstance(element, Tag):
return self.matches_tag(element)
assert isinstance(element, NavigableString)
if not (self.name_rules or self.attribute_rules):
# A NavigableString can only match a SoupStrainer that
# does not define any name or attribute rules.
# Then it comes down to the string rules.
return self.matches_any_string_rule(element)
return False
@_deprecated("allow_tag_creation", "4.13.0")
def search_tag(self, name: str, attrs: Optional[_RawAttributeValues]) -> bool:
"""A less elegant version of `allow_tag_creation`. Deprecated as of 4.13.0"""
":meta private:"
return self.allow_tag_creation(None, name, attrs)
@_deprecated("match", "4.13.0")
def search(self, element: PageElement) -> Optional[PageElement]:
"""A less elegant version of match(). Deprecated as of 4.13.0.
:meta private:
"""
return element if self.match(element) else None
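A minimal usage sketch, along the lines the `SoupStrainer` docstring suggests (the markup and variable names here are invented for illustration): passing a strainer as `parse_only` restricts parsing to the matching tags.

```python
from bs4 import BeautifulSoup, SoupStrainer

# Keep only <a> tags that have an "href" attribute.
only_links = SoupStrainer("a", href=True)
soup = BeautifulSoup(
    '<p>intro</p><a href="/one">one</a><a name="x">no href</a>',
    "html.parser",
    parse_only=only_links,
)
print([tag["href"] for tag in soup.find_all("a")])  # expected: ['/one']
```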

View File

@ -0,0 +1,276 @@
from __future__ import annotations
from typing import Callable, Dict, Iterable, Optional, Set, Tuple, TYPE_CHECKING, Union
from typing_extensions import TypeAlias
from bs4.dammit import EntitySubstitution
if TYPE_CHECKING:
from bs4._typing import _AttributeValue
class Formatter(EntitySubstitution):
"""Describes a strategy to use when outputting a parse tree to a string.
Some parts of this strategy come from the distinction between
HTML4, HTML5, and XML. Others are configurable by the user.
Formatters are passed in as the `formatter` argument to methods
like `bs4.element.Tag.encode`. Most people won't need to
think about formatters, and most people who need to think about
them can pass in one of these predefined strings as `formatter`
rather than making a new Formatter object:
For HTML documents:
* 'html' - HTML entity substitution for generic HTML documents. (default)
* 'html5' - HTML entity substitution for HTML5 documents, as
well as some optimizations in the way tags are rendered.
* 'html5-4.12.0' - The version of the 'html5' formatter used prior to
Beautiful Soup 4.13.0.
* 'minimal' - Only make the substitutions necessary to guarantee
valid HTML.
* None - Do not perform any substitution. This will be faster
but may result in invalid markup.
For XML documents:
* 'html' - Entity substitution for XHTML documents.
* 'minimal' - Only make the substitutions necessary to guarantee
valid XML. (default)
* None - Do not perform any substitution. This will be faster
but may result in invalid markup.
"""
#: Constant name denoting HTML markup
HTML: str = "html"
#: Constant name denoting XML markup
XML: str = "xml"
#: Default values for the various constructor options when the
#: markup language is HTML.
HTML_DEFAULTS: Dict[str, Set[str]] = dict(
cdata_containing_tags=set(["script", "style"]),
)
language: Optional[str] #: :meta private:
entity_substitution: Optional[_EntitySubstitutionFunction] #: :meta private:
void_element_close_prefix: str #: :meta private:
cdata_containing_tags: Set[str] #: :meta private:
indent: str #: :meta private:
#: If this is set to true by the constructor, then attributes whose
#: values are set to the empty string will be treated as HTML
#: boolean attributes. (Attributes whose value is None are always
#: rendered this way.)
empty_attributes_are_booleans: bool
def _default(
self, language: str, value: Optional[Set[str]], kwarg: str
) -> Set[str]:
if value is not None:
return value
if language == self.XML:
# When XML is the markup language in use, all of the
# defaults are the empty list.
return set()
# Otherwise, it depends on what's in HTML_DEFAULTS.
return self.HTML_DEFAULTS[kwarg]
def __init__(
self,
language: Optional[str] = None,
entity_substitution: Optional[_EntitySubstitutionFunction] = None,
void_element_close_prefix: str = "/",
cdata_containing_tags: Optional[Set[str]] = None,
empty_attributes_are_booleans: bool = False,
indent: Union[int,str] = 1,
):
r"""Constructor.
:param language: This should be `Formatter.XML` if you are formatting
XML markup and `Formatter.HTML` if you are formatting HTML markup.
:param entity_substitution: A function to call to replace special
characters with XML/HTML entities. For examples, see
bs4.dammit.EntitySubstitution.substitute_html and substitute_xml.
:param void_element_close_prefix: By default, void elements
are represented as <tag/> (XML rules) rather than <tag>
(HTML rules). To get <tag>, pass in the empty string.
:param cdata_containing_tags: The set of tags that are defined
as containing CDATA in this dialect. For example, in HTML,
<script> and <style> tags are defined as containing CDATA,
and their contents should not be formatted.
:param empty_attributes_are_booleans: If this is set to true,
then attributes whose values are set to the empty string
will be treated as `HTML boolean
attributes <https://dev.w3.org/html5/spec-LC/common-microsyntaxes.html#boolean-attributes>`_. (Attributes
whose value is None are always rendered this way.)
:param indent: If indent is a non-negative integer or string,
then the contents of elements will be indented
appropriately when pretty-printing. An indent level of 0,
negative, or "" will only insert newlines. Using a
positive integer indent indents that many spaces per
level. If indent is a string (such as "\t"), that string
is used to indent each level. The default behavior is to
indent one space per level.
"""
self.language = language or self.HTML
self.entity_substitution = entity_substitution
self.void_element_close_prefix = void_element_close_prefix
self.cdata_containing_tags = self._default(
self.language, cdata_containing_tags, "cdata_containing_tags"
)
self.empty_attributes_are_booleans = empty_attributes_are_booleans
if indent is None:
indent = 0
indent_str: str
if isinstance(indent, int):
if indent < 0:
indent = 0
indent_str = " " * indent
elif isinstance(indent, str):
indent_str = indent
else:
indent_str = " "
self.indent = indent_str
def substitute(self, ns: str) -> str:
"""Process a string that needs to undergo entity substitution.
This may be a string encountered in an attribute value or as
text.
:param ns: A string.
:return: The same string but with certain characters replaced by named
or numeric entities.
"""
if not self.entity_substitution:
return ns
from .element import NavigableString
if (
isinstance(ns, NavigableString)
and ns.parent is not None
and ns.parent.name in self.cdata_containing_tags
):
# Do nothing.
return ns
# Substitute.
return self.entity_substitution(ns)
def attribute_value(self, value: str) -> str:
"""Process the value of an attribute.
:param value: A string.
:return: A string with certain characters replaced by named
or numeric entities.
"""
return self.substitute(value)
def attributes(
self, tag: bs4.element.Tag
) -> Iterable[Tuple[str, Optional[_AttributeValue]]]:
"""Reorder a tag's attributes however you want.
By default, attributes are sorted alphabetically. This makes
behavior consistent between Python 2 and Python 3, and preserves
backwards compatibility with older versions of Beautiful Soup.
If `empty_attributes_are_booleans` is True, then
attributes whose values are set to the empty string will be
treated as boolean attributes.
"""
if tag.attrs is None:
return []
items: Iterable[Tuple[str, _AttributeValue]] = list(tag.attrs.items())
return sorted(
(k, (None if self.empty_attributes_are_booleans and v == "" else v))
for k, v in items
)
class HTMLFormatter(Formatter):
"""A generic Formatter for HTML."""
REGISTRY: Dict[Optional[str], HTMLFormatter] = {}
def __init__(
self,
entity_substitution: Optional[_EntitySubstitutionFunction] = None,
void_element_close_prefix: str = "/",
cdata_containing_tags: Optional[Set[str]] = None,
empty_attributes_are_booleans: bool = False,
indent: Union[int,str] = 1,
):
super(HTMLFormatter, self).__init__(
self.HTML,
entity_substitution,
void_element_close_prefix,
cdata_containing_tags,
empty_attributes_are_booleans,
indent=indent
)
class XMLFormatter(Formatter):
"""A generic Formatter for XML."""
REGISTRY: Dict[Optional[str], XMLFormatter] = {}
def __init__(
self,
entity_substitution: Optional[_EntitySubstitutionFunction] = None,
void_element_close_prefix: str = "/",
cdata_containing_tags: Optional[Set[str]] = None,
empty_attributes_are_booleans: bool = False,
indent: Union[int,str] = 1,
):
super(XMLFormatter, self).__init__(
self.XML,
entity_substitution,
void_element_close_prefix,
cdata_containing_tags,
empty_attributes_are_booleans,
indent=indent,
)
# Set up aliases for the default formatters.
HTMLFormatter.REGISTRY["html"] = HTMLFormatter(
entity_substitution=EntitySubstitution.substitute_html
)
HTMLFormatter.REGISTRY["html5"] = HTMLFormatter(
entity_substitution=EntitySubstitution.substitute_html5,
void_element_close_prefix="",
empty_attributes_are_booleans=True,
)
HTMLFormatter.REGISTRY["html5-4.12"] = HTMLFormatter(
entity_substitution=EntitySubstitution.substitute_html,
void_element_close_prefix="",
empty_attributes_are_booleans=True,
)
HTMLFormatter.REGISTRY["minimal"] = HTMLFormatter(
entity_substitution=EntitySubstitution.substitute_xml
)
HTMLFormatter.REGISTRY[None] = HTMLFormatter(entity_substitution=None)
XMLFormatter.REGISTRY["html"] = XMLFormatter(
entity_substitution=EntitySubstitution.substitute_html
)
XMLFormatter.REGISTRY["minimal"] = XMLFormatter(
entity_substitution=EntitySubstitution.substitute_xml
)
XMLFormatter.REGISTRY[None] = XMLFormatter(entity_substitution=None)
# Define type aliases to improve readability.
#
#: A function to call to replace special characters with XML or HTML
#: entities.
_EntitySubstitutionFunction: TypeAlias = Callable[[str], str]
# Many of the output-centered methods take an argument that can either
# be a Formatter object or the name of a Formatter to be looked up.
_FormatterOrName = Union[Formatter, str]
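A minimal sketch of how these formatters are typically used (invented markup; showing both the name-based and object-based styles described in the class docstring):

```python
from bs4 import BeautifulSoup
from bs4.formatter import HTMLFormatter

soup = BeautifulSoup("<p>Caf\u00e9 &amp; co</p>", "html.parser")

# By name: 'html' substitutes named HTML entities on output,
# e.g. b'<p>Caf&eacute; &amp; co</p>'.
print(soup.p.encode(formatter="html"))

# By object: a custom Formatter; here, indent with tabs when pretty-printing.
print(soup.prettify(formatter=HTMLFormatter(indent="\t")))
```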

File diff suppressed because it is too large

View File

@ -0,0 +1,28 @@
import pytest
from unittest.mock import patch
from bs4.builder import DetectsXMLParsedAsHTML
class TestDetectsXMLParsedAsHTML:
@pytest.mark.parametrize(
"markup,looks_like_xml",
[
("No xml declaration", False),
("<html>obviously HTML</html", False),
("<?xml ><html>Actually XHTML</html>", False),
("<?xml> < html>Tricky XHTML</html>", False),
("<?xml ><no-html-tag>", True),
],
)
def test_warn_if_markup_looks_like_xml(self, markup, looks_like_xml):
# Test of our ability to guess at whether markup looks XML-ish
# _and_ not HTML-ish.
with patch("bs4.builder.DetectsXMLParsedAsHTML._warn") as mock:
for data in markup, markup.encode("utf8"):
result = DetectsXMLParsedAsHTML.warn_if_markup_looks_like_xml(data)
assert result == looks_like_xml
if looks_like_xml:
assert mock.called
else:
assert not mock.called
mock.reset_mock()

View File

@ -0,0 +1,139 @@
"""Tests of the builder registry."""
import pytest
import warnings
from typing import Type
from bs4 import BeautifulSoup
from bs4.builder import (
builder_registry as registry,
TreeBuilder,
TreeBuilderRegistry,
)
from bs4.builder._htmlparser import HTMLParserTreeBuilder
from . import (
HTML5LIB_PRESENT,
LXML_PRESENT,
)
if HTML5LIB_PRESENT:
from bs4.builder._html5lib import HTML5TreeBuilder
if LXML_PRESENT:
from bs4.builder._lxml import (
LXMLTreeBuilderForXML,
LXMLTreeBuilder,
)
# TODO: Split out the lxml and html5lib tests into their own classes
# and gate with pytest.mark.skipIf.
class TestBuiltInRegistry(object):
"""Test the built-in registry with the default builders registered."""
def test_combination(self):
assert registry.lookup("strict", "html") == HTMLParserTreeBuilder
if LXML_PRESENT:
assert registry.lookup("fast", "html") == LXMLTreeBuilder
assert registry.lookup("permissive", "xml") == LXMLTreeBuilderForXML
if HTML5LIB_PRESENT:
assert registry.lookup("html5lib", "html") == HTML5TreeBuilder
def test_lookup_by_markup_type(self):
if LXML_PRESENT:
assert registry.lookup("html") == LXMLTreeBuilder
assert registry.lookup("xml") == LXMLTreeBuilderForXML
else:
assert registry.lookup("xml") is None
if HTML5LIB_PRESENT:
assert registry.lookup("html") == HTML5TreeBuilder
else:
assert registry.lookup("html") == HTMLParserTreeBuilder
def test_named_library(self):
if LXML_PRESENT:
assert registry.lookup("lxml", "xml") == LXMLTreeBuilderForXML
assert registry.lookup("lxml", "html") == LXMLTreeBuilder
if HTML5LIB_PRESENT:
assert registry.lookup("html5lib") == HTML5TreeBuilder
assert registry.lookup("html.parser") == HTMLParserTreeBuilder
def test_beautifulsoup_constructor_does_lookup(self):
with warnings.catch_warnings(record=True):
# This will create a warning about not explicitly
# specifying a parser, but we'll ignore it.
# You can pass in a string.
BeautifulSoup("", features="html")
# Or a list of strings.
BeautifulSoup("", features=["html", "fast"])
pass
# You'll get an exception if BS can't find an appropriate
# builder.
with pytest.raises(ValueError):
BeautifulSoup("", features="no-such-feature")
class TestRegistry(object):
"""Test the TreeBuilderRegistry class in general."""
def setup_method(self):
self.registry = TreeBuilderRegistry()
def builder_for_features(self, *feature_list: str) -> Type[TreeBuilder]:
cls = type(
"Builder_" + "_".join(feature_list), (object,), {"features": feature_list}
)
self.registry.register(cls)
return cls
def test_register_with_no_features(self):
builder = self.builder_for_features()
# Since the builder advertises no features, you can't find it
# by looking up features.
assert self.registry.lookup("foo") is None
# But you can find it by doing a lookup with no features, if
# this happens to be the only registered builder.
assert self.registry.lookup() == builder
def test_register_with_features_makes_lookup_succeed(self):
builder = self.builder_for_features("foo", "bar")
assert self.registry.lookup("foo") is builder
assert self.registry.lookup("bar") is builder
def test_lookup_fails_when_no_builder_implements_feature(self):
assert self.registry.lookup("baz") is None
def test_lookup_gets_most_recent_registration_when_no_feature_specified(self):
self.builder_for_features("foo")
builder2 = self.builder_for_features("bar")
assert self.registry.lookup() == builder2
def test_lookup_fails_when_no_tree_builders_registered(self):
assert self.registry.lookup() is None
def test_lookup_gets_most_recent_builder_supporting_all_features(self):
self.builder_for_features("foo")
self.builder_for_features("bar")
has_both_early = self.builder_for_features("foo", "bar", "baz")
has_both_late = self.builder_for_features("foo", "bar", "quux")
self.builder_for_features("bar")
self.builder_for_features("foo")
# There are two builders featuring 'foo' and 'bar', but
# the one that also features 'quux' was registered later.
assert self.registry.lookup("foo", "bar") == has_both_late
# There is only one builder featuring 'foo', 'bar', and 'baz'.
assert self.registry.lookup("foo", "bar", "baz") == has_both_early
def test_lookup_fails_when_cannot_reconcile_requested_features(self):
self.builder_for_features("foo", "bar")
self.builder_for_features("foo", "baz")
assert self.registry.lookup("bar", "baz") is None

View File

@ -0,0 +1,536 @@
import pytest
import types
from bs4 import (
BeautifulSoup,
ResultSet,
)
from typing import (
Any,
List,
Tuple,
Type,
)
from packaging.version import Version
from . import (
SoupTest,
SOUP_SIEVE_PRESENT,
)
SOUPSIEVE_EXCEPTION_ON_UNSUPPORTED_PSEUDOCLASS: Type[Exception]
if SOUP_SIEVE_PRESENT:
from soupsieve import __version__, SelectorSyntaxError
# Some behavior changes in soupsieve 2.6 that affects one of our
# tests. For the test to run under all versions of Python
# supported by Beautiful Soup (which includes versions of Python
# not supported by soupsieve 2.6) we need to check both behaviors.
SOUPSIEVE_EXCEPTION_ON_UNSUPPORTED_PSEUDOCLASS = SelectorSyntaxError
if Version(__version__) < Version("2.6"):
SOUPSIEVE_EXCEPTION_ON_UNSUPPORTED_PSEUDOCLASS = NotImplementedError
@pytest.mark.skipif(not SOUP_SIEVE_PRESENT, reason="Soup Sieve not installed")
class TestCSSSelectors(SoupTest):
"""Test basic CSS selector functionality.
This functionality is implemented in soupsieve, which has a much
more comprehensive test suite, so this is basically an extra check
that soupsieve works as expected.
"""
HTML = """
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>The title</title>
<link rel="stylesheet" href="blah.css" type="text/css" id="l1">
</head>
<body>
<custom-dashed-tag class="dashed" id="dash1">Hello there.</custom-dashed-tag>
<div id="main" class="fancy">
<div id="inner">
<h1 id="header1">An H1</h1>
<p>Some text</p>
<p class="onep" id="p1">Some more text</p>
<h2 id="header2">An H2</h2>
<p class="class1 class2 class3" id="pmulti">Another</p>
<a href="http://bob.example.org/" rel="friend met" id="bob">Bob</a>
<h2 id="header3">Another H2</h2>
<a id="me" href="http://simonwillison.net/" rel="me">me</a>
<span class="s1">
<a href="#" id="s1a1">span1a1</a>
<a href="#" id="s1a2">span1a2 <span id="s1a2s1">test</span></a>
<span class="span2">
<a href="#" id="s2a1">span2a1</a>
</span>
<span class="span3"></span>
<custom-dashed-tag class="dashed" id="dash2"/>
<div data-tag="dashedvalue" id="data1"/>
</span>
</div>
<x id="xid">
<z id="zida"/>
<z id="zidab"/>
<z id="zidac"/>
</x>
<y id="yid">
<z id="zidb"/>
</y>
<p lang="en" id="lang-en">English</p>
<p lang="en-gb" id="lang-en-gb">English UK</p>
<p lang="en-us" id="lang-en-us">English US</p>
<p lang="fr" id="lang-fr">French</p>
</div>
<div id="footer">
</div>
"""
def setup_method(self):
self._soup = BeautifulSoup(self.HTML, "html.parser")
def assert_css_selects(
self, selector: str, expected_ids: List[str], **kwargs: Any
) -> None:
results = self._soup.select(selector, **kwargs)
assert isinstance(results, ResultSet)
el_ids = [el["id"] for el in results]
el_ids.sort()
expected_ids.sort()
assert expected_ids == el_ids, "Selector %s, expected [%s], got [%s]" % (
selector,
", ".join(expected_ids),
", ".join(el_ids),
)
assertSelect = assert_css_selects
def assert_css_select_multiple(self, *tests: Tuple[str, List[str]]):
for selector, expected_ids in tests:
self.assert_css_selects(selector, expected_ids)
def test_precompiled(self):
sel = self._soup.css.compile("div")
els = self._soup.select(sel)
assert len(els) == 4
for div in els:
assert div.name == "div"
el = self._soup.select_one(sel)
assert "main" == el["id"]
def test_one_tag_one(self):
els = self._soup.select("title")
assert len(els) == 1
assert els[0].name == "title"
assert els[0].contents == ["The title"]
def test_one_tag_many(self):
els = self._soup.select("div")
assert len(els) == 4
for div in els:
assert div.name == "div"
el = self._soup.select_one("div")
assert "main" == el["id"]
def test_select_one_returns_none_if_no_match(self):
match = self._soup.select_one("nonexistenttag")
assert None is match
def test_tag_in_tag_one(self):
self.assert_css_selects("div div", ["inner", "data1"])
def test_tag_in_tag_many(self):
for selector in ("html div", "html body div", "body div"):
self.assert_css_selects(selector, ["data1", "main", "inner", "footer"])
def test_limit(self):
self.assert_css_selects("html div", ["main"], limit=1)
self.assert_css_selects("html body div", ["inner", "main"], limit=2)
self.assert_css_selects(
"body div", ["data1", "main", "inner", "footer"], limit=10
)
def test_tag_no_match(self):
assert len(self._soup.select("del")) == 0
def test_invalid_tag(self):
with pytest.raises(SelectorSyntaxError):
self._soup.select("tag%t")
def test_select_dashed_tag_ids(self):
self.assert_css_selects("custom-dashed-tag", ["dash1", "dash2"])
def test_select_dashed_by_id(self):
dashed = self._soup.select('custom-dashed-tag[id="dash2"]')
assert dashed[0].name == "custom-dashed-tag"
assert dashed[0]["id"] == "dash2"
def test_dashed_tag_text(self):
assert self._soup.select("body > custom-dashed-tag")[0].text == "Hello there."
def test_select_dashed_matches_find_all(self):
assert self._soup.select("custom-dashed-tag") == self._soup.find_all(
"custom-dashed-tag"
)
def test_header_tags(self):
self.assert_css_select_multiple(
("h1", ["header1"]),
("h2", ["header2", "header3"]),
)
def test_class_one(self):
for selector in (".onep", "p.onep", "html p.onep"):
els = self._soup.select(selector)
assert len(els) == 1
assert els[0].name == "p"
assert els[0]["class"] == ["onep"]
def test_class_mismatched_tag(self):
els = self._soup.select("div.onep")
assert len(els) == 0
def test_one_id(self):
for selector in ("div#inner", "#inner", "div div#inner"):
self.assert_css_selects(selector, ["inner"])
def test_bad_id(self):
els = self._soup.select("#doesnotexist")
assert len(els) == 0
def test_items_in_id(self):
els = self._soup.select("div#inner p")
assert len(els) == 3
for el in els:
assert el.name == "p"
assert els[1]["class"] == ["onep"]
assert not els[0].has_attr("class")
def test_a_bunch_of_emptys(self):
for selector in ("div#main del", "div#main div.oops", "div div#main"):
assert len(self._soup.select(selector)) == 0
def test_multi_class_support(self):
for selector in (
".class1",
"p.class1",
".class2",
"p.class2",
".class3",
"p.class3",
"html p.class2",
"div#inner .class2",
):
self.assert_css_selects(selector, ["pmulti"])
def test_multi_class_selection(self):
for selector in (".class1.class3", ".class3.class2", ".class1.class2.class3"):
self.assert_css_selects(selector, ["pmulti"])
def test_child_selector(self):
self.assert_css_selects(".s1 > a", ["s1a1", "s1a2"])
self.assert_css_selects(".s1 > a span", ["s1a2s1"])
def test_child_selector_id(self):
self.assert_css_selects(".s1 > a#s1a2 span", ["s1a2s1"])
def test_attribute_equals(self):
self.assert_css_select_multiple(
('p[class="onep"]', ["p1"]),
('p[id="p1"]', ["p1"]),
('[class="onep"]', ["p1"]),
('[id="p1"]', ["p1"]),
('link[rel="stylesheet"]', ["l1"]),
('link[type="text/css"]', ["l1"]),
('link[href="blah.css"]', ["l1"]),
('link[href="no-blah.css"]', []),
('[rel="stylesheet"]', ["l1"]),
('[type="text/css"]', ["l1"]),
('[href="blah.css"]', ["l1"]),
('[href="no-blah.css"]', []),
('p[href="no-blah.css"]', []),
('[href="no-blah.css"]', []),
)
def test_attribute_tilde(self):
self.assert_css_select_multiple(
('p[class~="class1"]', ["pmulti"]),
('p[class~="class2"]', ["pmulti"]),
('p[class~="class3"]', ["pmulti"]),
('[class~="class1"]', ["pmulti"]),
('[class~="class2"]', ["pmulti"]),
('[class~="class3"]', ["pmulti"]),
('a[rel~="friend"]', ["bob"]),
('a[rel~="met"]', ["bob"]),
('[rel~="friend"]', ["bob"]),
('[rel~="met"]', ["bob"]),
)
def test_attribute_startswith(self):
self.assert_css_select_multiple(
('[rel^="style"]', ["l1"]),
('link[rel^="style"]', ["l1"]),
('notlink[rel^="notstyle"]', []),
('[rel^="notstyle"]', []),
('link[rel^="notstyle"]', []),
('link[href^="bla"]', ["l1"]),
('a[href^="http://"]', ["bob", "me"]),
('[href^="http://"]', ["bob", "me"]),
('[id^="p"]', ["pmulti", "p1"]),
('[id^="m"]', ["me", "main"]),
('div[id^="m"]', ["main"]),
('a[id^="m"]', ["me"]),
('div[data-tag^="dashed"]', ["data1"]),
)
def test_attribute_endswith(self):
self.assert_css_select_multiple(
('[href$=".css"]', ["l1"]),
('link[href$=".css"]', ["l1"]),
('link[id$="1"]', ["l1"]),
(
'[id$="1"]',
["data1", "l1", "p1", "header1", "s1a1", "s2a1", "s1a2s1", "dash1"],
),
('div[id$="1"]', ["data1"]),
('[id$="noending"]', []),
)
def test_attribute_contains(self):
self.assert_css_select_multiple(
# From test_attribute_startswith
('[rel*="style"]', ["l1"]),
('link[rel*="style"]', ["l1"]),
('notlink[rel*="notstyle"]', []),
('[rel*="notstyle"]', []),
('link[rel*="notstyle"]', []),
('link[href*="bla"]', ["l1"]),
('[href*="http://"]', ["bob", "me"]),
('[id*="p"]', ["pmulti", "p1"]),
('div[id*="m"]', ["main"]),
('a[id*="m"]', ["me"]),
# From test_attribute_endswith
('[href*=".css"]', ["l1"]),
('link[href*=".css"]', ["l1"]),
('link[id*="1"]', ["l1"]),
(
'[id*="1"]',
[
"data1",
"l1",
"p1",
"header1",
"s1a1",
"s1a2",
"s2a1",
"s1a2s1",
"dash1",
],
),
('div[id*="1"]', ["data1"]),
('[id*="noending"]', []),
# New for this test
('[href*="."]', ["bob", "me", "l1"]),
('a[href*="."]', ["bob", "me"]),
('link[href*="."]', ["l1"]),
('div[id*="n"]', ["main", "inner"]),
('div[id*="nn"]', ["inner"]),
('div[data-tag*="edval"]', ["data1"]),
)
def test_attribute_exact_or_hypen(self):
self.assert_css_select_multiple(
('p[lang|="en"]', ["lang-en", "lang-en-gb", "lang-en-us"]),
('[lang|="en"]', ["lang-en", "lang-en-gb", "lang-en-us"]),
('p[lang|="fr"]', ["lang-fr"]),
('p[lang|="gb"]', []),
)
def test_attribute_exists(self):
self.assert_css_select_multiple(
("[rel]", ["l1", "bob", "me"]),
("link[rel]", ["l1"]),
("a[rel]", ["bob", "me"]),
("[lang]", ["lang-en", "lang-en-gb", "lang-en-us", "lang-fr"]),
("p[class]", ["p1", "pmulti"]),
("[blah]", []),
("p[blah]", []),
("div[data-tag]", ["data1"]),
)
def test_quoted_space_in_selector_name(self):
html = """<div style="display: wrong">nope</div>
<div style="display: right">yes</div>
"""
soup = BeautifulSoup(html, "html.parser")
[chosen] = soup.select('div[style="display: right"]')
assert "yes" == chosen.string
def test_unsupported_pseudoclass(self):
with pytest.raises(SOUPSIEVE_EXCEPTION_ON_UNSUPPORTED_PSEUDOCLASS):
self._soup.select("a:no-such-pseudoclass")
with pytest.raises(SelectorSyntaxError):
self._soup.select("a:nth-of-type(a)")
def test_nth_of_type(self):
# Try to select first paragraph
els = self._soup.select("div#inner p:nth-of-type(1)")
assert len(els) == 1
assert els[0].string == "Some text"
# Try to select third paragraph
els = self._soup.select("div#inner p:nth-of-type(3)")
assert len(els) == 1
assert els[0].string == "Another"
# Try to select (non-existent!) fourth paragraph
els = self._soup.select("div#inner p:nth-of-type(4)")
assert len(els) == 0
# Zero will select no tags.
els = self._soup.select("div p:nth-of-type(0)")
assert len(els) == 0
def test_nth_of_type_direct_descendant(self):
els = self._soup.select("div#inner > p:nth-of-type(1)")
assert len(els) == 1
assert els[0].string == "Some text"
def test_id_child_selector_nth_of_type(self):
self.assert_css_selects("#inner > p:nth-of-type(2)", ["p1"])
def test_select_on_element(self):
# Other tests operate on the tree; this operates on an element
# within the tree.
inner = self._soup.find("div", id="main")
selected = inner.select("div")
# The <div id="inner"> tag was selected. The <div id="footer">
# tag was not.
self.assert_selects_ids(selected, ["inner", "data1"])
def test_overspecified_child_id(self):
self.assert_css_selects(".fancy #inner", ["inner"])
self.assert_css_selects(".normal #inner", [])
def test_adjacent_sibling_selector(self):
self.assert_css_selects("#p1 + h2", ["header2"])
self.assert_css_selects("#p1 + h2 + p", ["pmulti"])
self.assert_css_selects("#p1 + #header2 + .class1", ["pmulti"])
assert [] == self._soup.select("#p1 + p")
def test_general_sibling_selector(self):
self.assert_css_selects("#p1 ~ h2", ["header2", "header3"])
self.assert_css_selects("#p1 ~ #header2", ["header2"])
self.assert_css_selects("#p1 ~ h2 + a", ["me"])
self.assert_css_selects('#p1 ~ h2 + [rel="me"]', ["me"])
assert [] == self._soup.select("#inner ~ h2")
def test_dangling_combinator(self):
with pytest.raises(SelectorSyntaxError):
self._soup.select("h1 >")
def test_sibling_combinator_wont_select_same_tag_twice(self):
self.assert_css_selects("p[lang] ~ p", ["lang-en-gb", "lang-en-us", "lang-fr"])
# Test the selector grouping operator (the comma)
def test_multiple_select(self):
self.assert_css_selects("x, y", ["xid", "yid"])
def test_multiple_select_with_no_space(self):
self.assert_css_selects("x,y", ["xid", "yid"])
def test_multiple_select_with_more_space(self):
self.assert_css_selects("x, y", ["xid", "yid"])
def test_multiple_select_duplicated(self):
self.assert_css_selects("x, x", ["xid"])
def test_multiple_select_sibling(self):
self.assert_css_selects("x, y ~ p[lang=fr]", ["xid", "lang-fr"])
def test_multiple_select_tag_and_direct_descendant(self):
self.assert_css_selects("x, y > z", ["xid", "zidb"])
def test_multiple_select_direct_descendant_and_tags(self):
self.assert_css_selects(
"div > x, y, z", ["xid", "yid", "zida", "zidb", "zidab", "zidac"]
)
def test_multiple_select_indirect_descendant(self):
self.assert_css_selects(
"div x,y, z", ["xid", "yid", "zida", "zidb", "zidab", "zidac"]
)
def test_invalid_multiple_select(self):
with pytest.raises(SelectorSyntaxError):
self._soup.select(",x, y")
with pytest.raises(SelectorSyntaxError):
self._soup.select("x,,y")
def test_multiple_select_attrs(self):
self.assert_css_selects("p[lang=en], p[lang=en-gb]", ["lang-en", "lang-en-gb"])
def test_multiple_select_ids(self):
self.assert_css_selects(
"x, y > z[id=zida], z[id=zidab], z[id=zidb]", ["xid", "zidb", "zidab"]
)
def test_multiple_select_nested(self):
self.assert_css_selects("body > div > x, y > z", ["xid", "zidb"])
def test_select_duplicate_elements(self):
# When markup contains duplicate elements, a multiple select
# will find all of them.
markup = '<div class="c1"/><div class="c2"/><div class="c1"/>'
soup = BeautifulSoup(markup, "html.parser")
selected = soup.select(".c1, .c2")
assert 3 == len(selected)
# Verify that find_all finds the same elements, though because
# of an implementation detail it finds them in a different
# order.
for element in soup.find_all(class_=["c1", "c2"]):
assert element in selected
def test_closest(self):
inner = self._soup.find("div", id="inner")
closest = inner.css.closest("div[id=main]")
assert closest == self._soup.find("div", id="main")
def test_match(self):
inner = self._soup.find("div", id="inner")
main = self._soup.find("div", id="main")
assert inner.css.match("div[id=main]") is False
assert main.css.match("div[id=main]") is True
def test_iselect(self):
gen = self._soup.css.iselect("h2")
assert isinstance(gen, types.GeneratorType)
[header2, header3] = gen
assert header2["id"] == "header2"
assert header3["id"] == "header3"
def test_filter(self):
inner = self._soup.find("div", id="inner")
results = inner.css.filter("h2")
assert len(inner.css.filter("h2")) == 2
results = inner.css.filter("h2[id=header3]")
assert isinstance(results, ResultSet)
[result] = results
assert result["id"] == "header3"
def test_escape(self):
m = self._soup.css.escape
assert m(".foo#bar") == "\\.foo\\#bar"
assert m("()[]{}") == "\\(\\)\\[\\]\\{\\}"
assert m(".foo") == self._soup.css.escape(".foo")

View File

@ -0,0 +1,433 @@
# encoding: utf-8
import pytest
import logging
import warnings
import bs4
from bs4 import BeautifulSoup
from bs4.dammit import (
EntitySubstitution,
EncodingDetector,
UnicodeDammit,
)
class TestUnicodeDammit(object):
"""Standalone tests of UnicodeDammit."""
def test_unicode_input(self):
markup = "I'm already Unicode! \N{SNOWMAN}"
dammit = UnicodeDammit(markup)
assert dammit.unicode_markup == markup
@pytest.mark.parametrize(
"smart_quotes_to,expect_converted",
[
(None, "\u2018\u2019\u201c\u201d"),
("xml", "&#x2018;&#x2019;&#x201C;&#x201D;"),
("html", "&lsquo;&rsquo;&ldquo;&rdquo;"),
("ascii", "''" + '""'),
],
)
def test_smart_quotes_to(self, smart_quotes_to, expect_converted):
"""Verify the functionality of the smart_quotes_to argument
to the UnicodeDammit constructor."""
markup = b"<foo>\x91\x92\x93\x94</foo>"
converted = UnicodeDammit(
markup,
known_definite_encodings=["windows-1252"],
smart_quotes_to=smart_quotes_to,
).unicode_markup
assert converted == "<foo>{}</foo>".format(expect_converted)
def test_detect_utf8(self):
utf8 = b"Sacr\xc3\xa9 bleu! \xe2\x98\x83"
dammit = UnicodeDammit(utf8)
assert dammit.original_encoding.lower() == "utf-8"
assert dammit.unicode_markup == "Sacr\xe9 bleu! \N{SNOWMAN}"
def test_convert_hebrew(self):
hebrew = b"\xed\xe5\xec\xf9"
dammit = UnicodeDammit(hebrew, ["iso-8859-8"])
assert dammit.original_encoding.lower() == "iso-8859-8"
assert dammit.unicode_markup == "\u05dd\u05d5\u05dc\u05e9"
def test_dont_see_smart_quotes_where_there_are_none(self):
utf_8 = b"\343\202\261\343\203\274\343\202\277\343\202\244 Watch"
dammit = UnicodeDammit(utf_8)
assert dammit.original_encoding.lower() == "utf-8"
assert dammit.unicode_markup.encode("utf-8") == utf_8
def test_ignore_inappropriate_codecs(self):
utf8_data = "Räksmörgås".encode("utf-8")
dammit = UnicodeDammit(utf8_data, ["iso-8859-8"])
assert dammit.original_encoding.lower() == "utf-8"
def test_ignore_invalid_codecs(self):
utf8_data = "Räksmörgås".encode("utf-8")
for bad_encoding in [".utf8", "...", "utF---16.!"]:
dammit = UnicodeDammit(utf8_data, [bad_encoding])
assert dammit.original_encoding.lower() == "utf-8"
def test_exclude_encodings(self):
# This is UTF-8.
utf8_data = "Räksmörgås".encode("utf-8")
# But if we exclude UTF-8 from consideration, the guess is
# Windows-1252.
dammit = UnicodeDammit(utf8_data, exclude_encodings=["utf-8"])
assert dammit.original_encoding.lower() == "windows-1252"
# And if we exclude that, there is no valid guess at all.
dammit = UnicodeDammit(utf8_data, exclude_encodings=["utf-8", "windows-1252"])
assert dammit.original_encoding is None
class TestEncodingDetector(object):
def test_encoding_detector_replaces_junk_in_encoding_name_with_replacement_character(
self,
):
detected = EncodingDetector(b'<?xml version="1.0" encoding="UTF-\xdb" ?>')
encodings = list(detected.encodings)
assert "utf-\N{REPLACEMENT CHARACTER}" in encodings
def test_detect_html5_style_meta_tag(self):
for data in (
b'<html><meta charset="euc-jp" /></html>',
b"<html><meta charset='euc-jp' /></html>",
b"<html><meta charset=euc-jp /></html>",
b"<html><meta charset=euc-jp/></html>",
):
dammit = UnicodeDammit(data, is_html=True)
assert "euc-jp" == dammit.original_encoding
def test_last_ditch_entity_replacement(self):
# This is a UTF-8 document that contains bytestrings
# completely incompatible with UTF-8 (ie. encoded with some other
# encoding).
#
# Since there is no consistent encoding for the document,
# Unicode, Dammit will eventually encode the document as UTF-8
# and encode the incompatible characters as REPLACEMENT
# CHARACTER.
#
# If chardet is installed, it will detect that the document
# can be converted into ISO-8859-1 without errors. This happens
# to be the wrong encoding, but it is a consistent encoding, so the
# code we're testing here won't run.
#
# So we temporarily disable chardet if it's present.
doc = b"""\357\273\277<?xml version="1.0" encoding="UTF-8"?>
<html><b>\330\250\330\252\330\261</b>
<i>\310\322\321\220\312\321\355\344</i></html>"""
chardet = bs4.dammit._chardet_dammit
logging.disable(logging.WARNING)
try:
def noop(str):
return None
bs4.dammit._chardet_dammit = noop
dammit = UnicodeDammit(doc)
assert True is dammit.contains_replacement_characters
assert "\ufffd" in dammit.unicode_markup
soup = BeautifulSoup(doc, "html.parser")
assert soup.contains_replacement_characters
finally:
logging.disable(logging.NOTSET)
bs4.dammit._chardet_dammit = chardet
def test_byte_order_mark_removed(self):
# A document written in UTF-16LE will have its byte order marker stripped.
data = b"\xff\xfe<\x00a\x00>\x00\xe1\x00\xe9\x00<\x00/\x00a\x00>\x00"
dammit = UnicodeDammit(data)
assert "<a>áé</a>" == dammit.unicode_markup
assert "utf-16le" == dammit.original_encoding
def test_known_definite_versus_user_encodings(self):
# The known_definite_encodings are used before sniffing the
# byte-order mark; the user_encodings are used afterwards.
# Here's a document in UTF-16LE.
data = b"\xff\xfe<\x00a\x00>\x00\xe1\x00\xe9\x00<\x00/\x00a\x00>\x00"
dammit = UnicodeDammit(data)
# We can process it as UTF-16 by passing it in as a known
# definite encoding.
before = UnicodeDammit(data, known_definite_encodings=["utf-16"])
assert "utf-16" == before.original_encoding
# If we pass UTF-8 as a user encoding, it's not even
# tried--the encoding sniffed from the byte-order mark takes
# precedence.
after = UnicodeDammit(data, user_encodings=["utf-8"])
assert "utf-16le" == after.original_encoding
assert ["utf-16le"] == [x[0] for x in dammit.tried_encodings]
# Here's a document in ISO-8859-8.
hebrew = b"\xed\xe5\xec\xf9"
dammit = UnicodeDammit(
hebrew, known_definite_encodings=["utf-8"], user_encodings=["iso-8859-8"]
)
# The known_definite_encodings don't work, BOM sniffing does
# nothing (it only works for a few UTF encodings), but one of
# the user_encodings does work.
assert "iso-8859-8" == dammit.original_encoding
assert ["utf-8", "iso-8859-8"] == [x[0] for x in dammit.tried_encodings]
def test_deprecated_override_encodings(self):
# override_encodings is a deprecated alias for
# known_definite_encodings.
hebrew = b"\xed\xe5\xec\xf9"
with warnings.catch_warnings(record=True) as w:
dammit = UnicodeDammit(
hebrew,
known_definite_encodings=["shift-jis"],
override_encodings=["utf-8"],
user_encodings=["iso-8859-8"],
)
[warning] = w
message = warning.message
assert isinstance(message, DeprecationWarning)
assert warning.filename == __file__
assert "iso-8859-8" == dammit.original_encoding
# known_definite_encodings and override_encodings were tried
# before user_encodings.
assert ["shift-jis", "utf-8", "iso-8859-8"] == (
[x[0] for x in dammit.tried_encodings]
)
def test_detwingle(self):
# Here's a UTF8 document.
utf8 = ("\N{SNOWMAN}" * 3).encode("utf8")
# Here's a Windows-1252 document.
windows_1252 = (
"\N{LEFT DOUBLE QUOTATION MARK}Hi, I like Windows!"
"\N{RIGHT DOUBLE QUOTATION MARK}"
).encode("windows_1252")
# Through some unholy alchemy, they've been stuck together.
doc = utf8 + windows_1252 + utf8
# The document can't be turned into UTF-8:
with pytest.raises(UnicodeDecodeError):
doc.decode("utf8")
# Unicode, Dammit thinks the whole document is Windows-1252,
# and decodes it into "â˜ƒâ˜ƒâ˜ƒ“Hi, I like Windows!”â˜ƒâ˜ƒâ˜ƒ"
# But if we run it through UnicodeDammit.detwingle, it's fixed:
fixed = UnicodeDammit.detwingle(doc)
assert "☃☃☃“Hi, I like Windows!”☃☃☃" == fixed.decode("utf8")
def test_detwingle_ignores_multibyte_characters(self):
# Each of these characters has a UTF-8 representation ending
# in \x93. \x93 is a smart quote if interpreted as
# Windows-1252. But our code knows to skip over multibyte
# UTF-8 characters, so they'll survive the process unscathed.
for tricky_unicode_char in (
"\N{LATIN SMALL LIGATURE OE}", # 2-byte char '\xc5\x93'
"\N{LATIN SUBSCRIPT SMALL LETTER X}", # 3-byte char '\xe2\x82\x93'
"\xf0\x90\x90\x93", # This is a CJK character, not sure which one.
):
input = tricky_unicode_char.encode("utf8")
assert input.endswith(b"\x93")
output = UnicodeDammit.detwingle(input)
assert output == input
def test_find_declared_encoding(self):
# Test our ability to find a declared encoding inside an
# XML or HTML document.
#
# Even if the document comes in as Unicode, it may be
# interesting to know what encoding was claimed
# originally.
html_unicode = '<html><head><meta charset="utf-8"></head></html>'
html_bytes = html_unicode.encode("ascii")
xml_unicode = '<?xml version="1.0" encoding="ISO-8859-1" ?>'
xml_bytes = xml_unicode.encode("ascii")
m = EncodingDetector.find_declared_encoding
assert m(html_unicode, is_html=False) is None
assert "utf-8" == m(html_unicode, is_html=True)
assert "utf-8" == m(html_bytes, is_html=True)
assert "iso-8859-1" == m(xml_unicode)
assert "iso-8859-1" == m(xml_bytes)
# Normally, only the first few kilobytes of a document are checked for
# an encoding.
spacer = b" " * 5000
assert m(spacer + html_bytes) is None
assert m(spacer + xml_bytes) is None
# But you can tell find_declared_encoding to search an entire
# HTML document.
assert (
m(spacer + html_bytes, is_html=True, search_entire_document=True) == "utf-8"
)
# The XML encoding declaration has to be the very first thing
# in the document. We'll allow whitespace before the document
# starts, but nothing else.
assert m(xml_bytes, search_entire_document=True) == "iso-8859-1"
assert m(b" " + xml_bytes, search_entire_document=True) == "iso-8859-1"
assert m(b"a" + xml_bytes, search_entire_document=True) is None
class TestEntitySubstitution(object):
"""Standalone tests of the EntitySubstitution class."""
def setup_method(self):
self.sub = EntitySubstitution
@pytest.mark.parametrize(
"original,substituted",
[
# Basic case. Unicode characters corresponding to named
# HTML entities are substituted; others are not.
("foo\u2200\N{SNOWMAN}\u00f5bar", "foo&forall;\N{SNOWMAN}&otilde;bar"),
# MS smart quotes are a common source of frustration, so we
# give them a special test.
("foo“”", "&lsquo;&rsquo;foo&ldquo;&rdquo;"),
],
)
def test_substitute_html(self, original, substituted):
assert self.sub.substitute_html(original) == substituted
def test_html5_entity(self):
for entity, u in (
# A few spot checks of our ability to recognize
# special character sequences and convert them
# to named entities.
("&models;", "\u22a7"),
("&Nfr;", "\U0001d511"),
("&ngeqq;", "\u2267\u0338"),
("&not;", "\xac"),
("&Not;", "\u2aec"),
# We _could_ convert | to &verbar;, but we don't, because
# | is an ASCII character.
("|" "|"),
# Similarly for the fj ligature, which we could convert to
# &fjlig;, but we don't.
("fj", "fj"),
# We do convert _these_ ASCII characters to HTML entities,
# because that's required to generate valid HTML.
("&gt;", ">"),
("&lt;", "<"),
):
template = "3 %s 4"
raw = template % u
with_entities = template % entity
assert self.sub.substitute_html(raw) == with_entities
def test_html5_entity_with_variation_selector(self):
# Some HTML5 entities correspond either to a single-character
# Unicode sequence _or_ to the same character plus U+FE00,
# VARIATION SELECTOR-1. We can handle this.
data = "fjords \u2294 penguins"
markup = "fjords &sqcup; penguins"
assert self.sub.substitute_html(data) == markup
data = "fjords \u2294\ufe00 penguins"
markup = "fjords &sqcups; penguins"
assert self.sub.substitute_html(data) == markup
def test_xml_conversion_includes_no_quotes_if_make_quoted_attribute_is_false(self):
s = 'Welcome to "my bar"'
assert self.sub.substitute_xml(s, False) == s
def test_xml_attribute_quoting_normally_uses_double_quotes(self):
assert self.sub.substitute_xml("Welcome", True) == '"Welcome"'
assert self.sub.substitute_xml("Bob's Bar", True) == '"Bob\'s Bar"'
def test_xml_attribute_quoting_uses_single_quotes_when_value_contains_double_quotes(
self,
):
s = 'Welcome to "my bar"'
assert self.sub.substitute_xml(s, True) == "'Welcome to \"my bar\"'"
def test_xml_attribute_quoting_escapes_single_quotes_when_value_contains_both_single_and_double_quotes(
self,
):
s = 'Welcome to "Bob\'s Bar"'
assert self.sub.substitute_xml(s, True) == '"Welcome to &quot;Bob\'s Bar&quot;"'
def test_xml_quotes_arent_escaped_when_value_is_not_being_quoted(self):
quoted = 'Welcome to "Bob\'s Bar"'
assert self.sub.substitute_xml(quoted) == quoted
def test_xml_quoting_handles_angle_brackets(self):
assert self.sub.substitute_xml("foo<bar>") == "foo&lt;bar&gt;"
def test_xml_quoting_handles_ampersands(self):
assert self.sub.substitute_xml("AT&T") == "AT&amp;T"
def test_xml_quoting_including_ampersands_when_they_are_part_of_an_entity(self):
assert self.sub.substitute_xml("&Aacute;T&T") == "&amp;Aacute;T&amp;T"
def test_xml_quoting_ignoring_ampersands_when_they_are_part_of_an_entity(self):
assert (
self.sub.substitute_xml_containing_entities("&Aacute;T&T")
== "&Aacute;T&amp;T"
)
def test_quotes_not_html_substituted(self):
"""There's no need to do this except inside attribute values."""
text = 'Bob\'s "bar"'
assert self.sub.substitute_html(text) == text
@pytest.mark.parametrize(
"markup, old",
[
("foo & bar", "foo &amp; bar"),
("foo&", "foo&amp;"),
("foo&&& bar", "foo&amp;&amp;&amp; bar"),
("x=1&y=2", "x=1&amp;y=2"),
("&123", "&amp;123"),
("&abc", "&amp;abc"),
("foo &0 bar", "foo &amp;0 bar"),
("foo &lolwat bar", "foo &amp;lolwat bar"),
],
)
def test_unambiguous_ampersands_not_escaped(self, markup, old):
assert self.sub.substitute_html(markup) == old
assert self.sub.substitute_html5_raw(markup) == markup
@pytest.mark.parametrize(
"markup,html,html5,html5raw",
[
("&divide;", "&amp;divide;", "&amp;divide;", "&divide;"),
("&nonesuch;", "&amp;nonesuch;", "&amp;nonesuch;", "&amp;nonesuch;"),
("&#247;", "&amp;#247;", "&amp;#247;", "&amp;#247;"),
("&#xa1;", "&amp;#xa1;", "&amp;#xa1;", "&amp;#xa1;"),
],
)
def test_when_entity_ampersands_are_escaped(self, markup, html, html5, html5raw):
# The html and html5 formatters always escape the ampersand
# that begins an entity reference, because they assume
# Beautiful Soup has already converted any unescaped entity references
# to Unicode characters.
#
# The html5_raw formatter does not escape the ampersand that
# begins a recognized HTML entity, because it does not
# fit the HTML5 definition of an ambiguous ampersand.
#
# The html5_raw formatter does escape the ampersands in front
# of unrecognized named entities, as well as numeric and
# hexadecimal entities, because they do fit the definition.
assert self.sub.substitute_html(markup) == html
assert self.sub.substitute_html5(markup) == html5
assert self.sub.substitute_html5_raw(markup) == html5raw
@pytest.mark.parametrize(
"markup,expect", [("&nosuchentity;", "&amp;nosuchentity;")]
)
def test_ambiguous_ampersands_escaped(self, markup, expect):
assert self.sub.substitute_html(markup) == expect
assert self.sub.substitute_html5_raw(markup) == expect

View File

@ -0,0 +1,138 @@
"""Tests of classes in element.py.
The really big classes -- Tag, PageElement, and NavigableString --
are tested in separate files.
"""
import pytest
from bs4.element import (
HTMLAttributeDict,
XMLAttributeDict,
CharsetMetaAttributeValue,
ContentMetaAttributeValue,
NamespacedAttribute,
ResultSet,
)
class TestNamespacedAttribute:
def test_name_may_be_none_or_missing(self):
a = NamespacedAttribute("xmlns", None)
assert a == "xmlns"
a = NamespacedAttribute("xmlns", "")
assert a == "xmlns"
a = NamespacedAttribute("xmlns")
assert a == "xmlns"
def test_namespace_may_be_none_or_missing(self):
a = NamespacedAttribute(None, "tag")
assert a == "tag"
a = NamespacedAttribute("", "tag")
assert a == "tag"
def test_attribute_is_equivalent_to_colon_separated_string(self):
a = NamespacedAttribute("a", "b")
assert "a:b" == a
def test_attributes_are_equivalent_if_prefix_and_name_identical(self):
a = NamespacedAttribute("a", "b", "c")
b = NamespacedAttribute("a", "b", "c")
assert a == b
# The actual namespace is not considered.
c = NamespacedAttribute("a", "b", None)
assert a == c
# But name and prefix are important.
d = NamespacedAttribute("a", "z", "c")
assert a != d
e = NamespacedAttribute("z", "b", "c")
assert a != e
class TestAttributeValueWithCharsetSubstitution:
"""Certain attributes are designed to have the charset of the
final document substituted into their value.
"""
def test_charset_meta_attribute_value(self):
# The value of a CharsetMetaAttributeValue is whatever
# encoding the string is in.
value = CharsetMetaAttributeValue("euc-jp")
assert "euc-jp" == value
assert "euc-jp" == value.original_value
assert "utf8" == value.substitute_encoding("utf8")
assert "ascii" == value.substitute_encoding("ascii")
# If the target encoding is a Python internal encoding,
# no encoding will be mentioned in the output HTML.
assert "" == value.substitute_encoding("palmos")
def test_content_meta_attribute_value(self):
value = ContentMetaAttributeValue("text/html; charset=euc-jp")
assert "text/html; charset=euc-jp" == value
assert "text/html; charset=euc-jp" == value.original_value
assert "text/html; charset=utf8" == value.substitute_encoding("utf8")
assert "text/html; charset=ascii" == value.substitute_encoding("ascii")
# If the target encoding is a Python internal encoding, the
# charset argument will be omitted altogether.
assert "text/html" == value.substitute_encoding("palmos")
class TestAttributeDicts:
def test_xml_attribute_value_handling(self):
# Verify that attribute values are processed according to the
# XML spec's rules.
d = XMLAttributeDict()
d["v"] = 100
assert d["v"] == "100"
d["v"] = 100.123
assert d["v"] == "100.123"
# This preserves Beautiful Soup's old behavior in the absence of
# guidance from the spec.
d["v"] = False
assert d["v"] is False
d["v"] = True
assert d["v"] is True
d["v"] = None
assert d["v"] == ""
def test_html_attribute_value_handling(self):
# Verify that attribute values are processed according to the
# HTML spec's rules.
d = HTMLAttributeDict()
d["v"] = 100
assert d["v"] == "100"
d["v"] = 100.123
assert d["v"] == "100.123"
d["v"] = False
assert "v" not in d
d["v"] = None
assert "v" not in d
d["v"] = True
assert d["v"] == "v"
attribute = NamespacedAttribute("prefix", "name", "namespace")
d[attribute] = True
assert d[attribute] == "name"
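# Editorial sketch, not part of the upstream suite: the True/False
# handling verified above is what makes boolean HTML attributes such
# as "selected" round-trip as just their own name.
def test_boolean_attribute_sketch(self):
    d = HTMLAttributeDict()
    d["selected"] = True
    assert d["selected"] == "selected"
    d["selected"] = False  # assigning False removes the attribute again
    assert "selected" not in d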
class TestResultSet:
def test_getattr_exception(self):
rs = ResultSet(None)
with pytest.raises(AttributeError) as e:
rs.name
assert (
"""ResultSet object has no attribute "name". You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()?"""
== str(e.value)
)

View File

@ -0,0 +1,674 @@
import pytest
import re
import warnings
from . import (
SoupTest,
)
from typing import (
Callable,
Optional,
Tuple,
)
from bs4.element import Tag
from bs4.filter import (
AttributeValueMatchRule,
ElementFilter,
MatchRule,
SoupStrainer,
StringMatchRule,
TagNameMatchRule,
)
from bs4._typing import _RawAttributeValues
class TestElementFilter(SoupTest):
def test_default_behavior(self):
# An unconfigured ElementFilter matches absolutely everything.
selector = ElementFilter()
assert not selector.excludes_everything
assert selector.includes_everything
soup = self.soup("<a>text</a>")
tag = soup.a
string = tag.string
assert True is selector.match(soup)
assert True is selector.match(tag)
assert True is selector.match(string)
assert soup.find(selector).name == "a"
# And allows any incoming markup to be turned into PageElements.
assert True is selector.allow_tag_creation(None, "tag", None)
assert True is selector.allow_string_creation("some string")
def test_setup_with_match_function(self):
# Configure an ElementFilter with a match function and
# we can no longer state with certainty that it includes everything.
selector = ElementFilter(lambda x: False)
assert not selector.includes_everything
def test_match(self):
def m(pe):
return pe.string == "allow" or (isinstance(pe, Tag) and pe.name == "allow")
soup = self.soup("<allow>deny</allow>allow<deny>deny</deny>")
allow_tag = soup.allow
allow_string = soup.find(string="allow")
deny_tag = soup.deny
deny_string = soup.find(string="deny")
selector = ElementFilter(match_function=m)
assert True is selector.match(allow_tag)
assert True is selector.match(allow_string)
assert False is selector.match(deny_tag)
assert False is selector.match(deny_string)
# Since only the match function was provided, there is
# no effect on tag or string creation.
soup = self.soup("<a>text</a>", parse_only=selector)
assert "text" == soup.a.string
def test_allow_tag_creation(self):
# By default, ElementFilter.allow_tag_creation allows everything.
filter = ElementFilter()
f = filter.allow_tag_creation
assert True is f("allow", "ignore", {})
assert True is f("ignore", "allow", {})
assert True is f(None, "ignore", {"allow": "1"})
assert True is f("no", "no", {"no": "nope"})
# You can customize this behavior by overriding
# allow_tag_creation in a subclass.
class MyFilter(ElementFilter):
def allow_tag_creation(
self,
nsprefix: Optional[str],
name: str,
attrs: Optional[_RawAttributeValues],
):
return (
nsprefix == "allow"
or name == "allow"
or (attrs is not None and "allow" in attrs)
)
filter = MyFilter()
f = filter.allow_tag_creation
assert True is f("allow", "ignore", {})
assert True is f("ignore", "allow", {})
assert True is f(None, "ignore", {"allow": "1"})
assert False is f("no", "no", {"no": "nope"})
# Test the customized ElementFilter as a value for parse_only.
soup = self.soup(
"<deny>deny</deny> <allow>deny</allow> allow", parse_only=filter
)
# The <deny> tag was filtered out, but there was no effect on
# the strings, since only allow_tag_creation_function was
# overridden.
assert "deny <allow>deny</allow> allow" == soup.decode()
# Similarly, since match_function was not defined, this
# ElementFilter matches everything.
assert soup.find(filter) == "deny"
def test_allow_string_creation(self):
# By default, ElementFilter.allow_string_creation allows everything.
filter = ElementFilter()
f = filter.allow_string_creation
assert True is f("allow")
assert True is f("deny")
assert True is f("please allow")
# You can customize this behavior by overriding allow_string_creation
# in a subclass.
class MyFilter(ElementFilter):
def allow_string_creation(self, s: str):
return s == "allow"
filter = MyFilter()
f = filter.allow_string_creation
assert True is f("allow")
assert False is f("deny")
assert False is f("please allow")
# Test the customized ElementFilter as a value for parse_only.
soup = self.soup(
"<deny>deny</deny> <allow>deny</allow> allow", parse_only=filter
)
# All incoming strings other than "allow" (even whitespace)
# were filtered out, but there was no effect on the tags,
# since only allow_string_creation_function was defined.
assert "<deny>deny</deny><allow>deny</allow>" == soup.decode()
# Similarly, since match_function was not defined, this
# ElementFilter matches everything.
assert soup.find(filter).name == "deny"
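# Editorial sketch, not part of the upstream suite: an ElementFilter
# built around a match function is also accepted by the find* API, as
# test_default_behavior's soup.find(selector) call suggests. The
# method name is hypothetical.
def test_find_all_with_element_filter_sketch(self):
    soup = self.soup("<a>one</a><b>two</b><a>three</a>")
    only_a = ElementFilter(
        match_function=lambda pe: isinstance(pe, Tag) and pe.name == "a"
    )
    assert ["one", "three"] == [tag.string for tag in soup.find_all(only_a)]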
class TestMatchRule(SoupTest):
def _tuple(
self, rule: MatchRule
) -> Tuple[Optional[str], Optional[str], Optional[Callable], Optional[bool]]:
return (
rule.string,
rule.pattern.pattern if rule.pattern else None,
rule.function,
rule.present,
)
@staticmethod
def tag_function(x: Tag) -> bool:
return False
@staticmethod
def string_function(x: str) -> bool:
return False
@pytest.mark.parametrize(
"constructor_args, constructor_kwargs, result",
[
# String
([], dict(string="a"), ("a", None, None, None)),
(
[],
dict(string="\N{SNOWMAN}".encode("utf8")),
("\N{SNOWMAN}", None, None, None),
),
# Regular expression
([], dict(pattern=re.compile("a")), (None, "a", None, None)),
([], dict(pattern="b"), (None, "b", None, None)),
([], dict(pattern=b"c"), (None, "c", None, None)),
# Function
([], dict(function=tag_function), (None, None, tag_function, None)),
([], dict(function=string_function), (None, None, string_function, None)),
# Boolean
([], dict(present=True), (None, None, None, True)),
# With positional arguments rather than keywords
(("a", None, None, None), {}, ("a", None, None, None)),
((None, "b", None, None), {}, (None, "b", None, None)),
((None, None, tag_function, None), {}, (None, None, tag_function, None)),
((None, None, None, True), {}, (None, None, None, True)),
],
)
def test_constructor(self, constructor_args, constructor_kwargs, result):
rule = MatchRule(*constructor_args, **constructor_kwargs)
assert result == self._tuple(rule)
def test_empty_match_not_allowed(self):
with pytest.raises(
ValueError,
match="Either string, pattern, function, present, or exclude_everything must be provided.",
):
MatchRule()
def test_full_match_not_allowed(self):
with pytest.raises(
ValueError,
match="At most one of string, pattern, function, present, and exclude_everything must be provided.",
):
MatchRule("a", "b", self.tag_function, True)
@pytest.mark.parametrize(
"rule_kwargs, match_against, result",
[
(dict(string="a"), "a", True),
(dict(string="a"), "ab", False),
(dict(pattern="a"), "a", True),
(dict(pattern="a"), "ab", True),
(dict(pattern="^a$"), "a", True),
(dict(pattern="^a$"), "ab", False),
(dict(present=True), "any random value", True),
(dict(present=True), None, False),
(dict(present=False), "any random value", False),
(dict(present=False), None, True),
(dict(function=lambda x: x.upper() == x), "UPPERCASE", True),
(dict(function=lambda x: x.upper() == x), "lowercase", False),
(dict(function=lambda x: x.lower() == x), "UPPERCASE", False),
(dict(function=lambda x: x.lower() == x), "lowercase", True),
],
)
def test_matches_string(self, rule_kwargs, match_against, result):
rule = MatchRule(**rule_kwargs)
assert rule.matches_string(match_against) == result
class TestTagNameMatchRule(SoupTest):
@pytest.mark.parametrize(
"rule_kwargs, tag_kwargs, result",
[
(dict(string="a"), dict(name="a"), True),
(dict(string="a"), dict(name="ab"), False),
(dict(pattern="a"), dict(name="a"), True),
(dict(pattern="a"), dict(name="ab"), True),
(dict(pattern="^a$"), dict(name="a"), True),
(dict(pattern="^a$"), dict(name="ab"), False),
# This isn't very useful, but it will work.
(dict(present=True), dict(name="any random value"), True),
(dict(present=False), dict(name="any random value"), False),
(
dict(function=lambda t: t.name in t.attrs),
dict(name="id", attrs=dict(id="a")),
True,
),
(
dict(function=lambda t: t.name in t.attrs),
dict(name="id", attrs={"class": "a"}),
False,
),
],
)
def test_matches_tag(self, rule_kwargs, tag_kwargs, result):
rule = TagNameMatchRule(**rule_kwargs)
tag = Tag(**tag_kwargs)
assert rule.matches_tag(tag) == result
# AttributeValueMatchRule and StringMatchRule have the same
# logic as MatchRule.
class TestSoupStrainer(SoupTest):
def test_constructor_string_deprecated_text_argument(self):
with warnings.catch_warnings(record=True) as w:
strainer = SoupStrainer(text="text")
assert strainer.text == "text"
[w1, w2] = w
msg = str(w1.message)
assert w1.filename == __file__
assert (
msg
== "As of version 4.11.0, the 'text' argument to the SoupStrainer constructor is deprecated. Use 'string' instead."
)
msg = str(w2.message)
assert w2.filename == __file__
assert (
msg
== "Access to deprecated property text. (Look at .string_rules instead) -- Deprecated since version 4.13.0."
)
def test_search_tag_deprecated(self):
strainer = SoupStrainer(name="a")
with warnings.catch_warnings(record=True) as w:
assert False is strainer.search_tag("b", {})
[w1] = w
msg = str(w1.message)
assert w1.filename == __file__
assert (
msg
== "Call to deprecated method search_tag. (Replaced by allow_tag_creation) -- Deprecated since version 4.13.0."
)
def test_search_deprecated(self):
strainer = SoupStrainer(name="a")
soup = self.soup("<a></a><b></b>")
with warnings.catch_warnings(record=True) as w:
assert soup.a == strainer.search(soup.a)
assert None is strainer.search(soup.b)
[w1, w2] = w
msg = str(w1.message)
assert msg == str(w2.message)
assert w1.filename == __file__
assert (
msg
== "Call to deprecated method search. (Replaced by match) -- Deprecated since version 4.13.0."
)
# Dummy function used within tests.
def _match_function(x):
pass
def test_constructor_default(self):
# The default SoupStrainer matches all tags, and only tags.
strainer = SoupStrainer()
[name_rule] = strainer.name_rules
assert True == name_rule.present
assert 0 == len(strainer.attribute_rules)
assert 0 == len(strainer.string_rules)
def test_constructor(self):
strainer = SoupStrainer(
"tagname",
{"attr1": "value"},
string=self._match_function,
attr2=["value1", False],
)
[name_rule] = strainer.name_rules
assert name_rule == TagNameMatchRule(string="tagname")
[attr1_rule] = strainer.attribute_rules.pop("attr1")
assert attr1_rule == AttributeValueMatchRule(string="value")
[attr2_rule1, attr2_rule2] = strainer.attribute_rules.pop("attr2")
assert attr2_rule1 == AttributeValueMatchRule(string="value1")
assert attr2_rule2 == AttributeValueMatchRule(present=False)
assert not strainer.attribute_rules
[string_rule] = strainer.string_rules
assert string_rule == StringMatchRule(function=self._match_function)
def test_scalar_attrs_becomes_class_restriction(self):
# For the sake of convenience, passing a scalar value as
# ``attrs`` results in a restriction on the 'class' attribute.
strainer = SoupStrainer(attrs="mainbody")
assert [] == strainer.name_rules
assert [] == strainer.string_rules
assert {"class": [AttributeValueMatchRule(string="mainbody")]} == (
strainer.attribute_rules
)
def test_constructor_class_attribute(self):
# The 'class' HTML attribute is also treated specially because
# it's a Python reserved word. Passing in "class_" as a
# keyword argument results in a restriction on the 'class'
# attribute.
strainer = SoupStrainer(class_="mainbody")
assert [] == strainer.name_rules
assert [] == strainer.string_rules
assert {"class": [AttributeValueMatchRule(string="mainbody")]} == (
strainer.attribute_rules
)
# But if you pass in "class_" as part of the ``attrs`` dict
# it's not changed. (Otherwise there'd be no way to actually put
# a restriction on an attribute called "class_".)
strainer = SoupStrainer(attrs=dict(class_="mainbody"))
assert [] == strainer.name_rules
assert [] == strainer.string_rules
assert {"class_": [AttributeValueMatchRule(string="mainbody")]} == (
strainer.attribute_rules
)
def test_constructor_with_overlapping_attributes(self):
# If you specify the same attribute in args and **kwargs, you end up
# with two different AttributeValueMatchRule objects.
# This happens whether you use the 'class' shortcut on attrs...
strainer = SoupStrainer(attrs="class1", class_="class2")
rule1, rule2 = strainer.attribute_rules["class"]
assert rule1.string == "class1"
assert rule2.string == "class2"
# Or explicitly specify the same attribute twice.
strainer = SoupStrainer(attrs={"id": "id1"}, id="id2")
rule1, rule2 = strainer.attribute_rules["id"]
assert rule1.string == "id1"
assert rule2.string == "id2"
@pytest.mark.parametrize(
"obj, result",
[
("a", MatchRule(string="a")),
(b"a", MatchRule(string="a")),
(True, MatchRule(present=True)),
(False, MatchRule(present=False)),
(re.compile("a"), MatchRule(pattern=re.compile("a"))),
(_match_function, MatchRule(function=_match_function)),
# Pass in a list and get back a list of rules.
(["a", b"b"], [MatchRule(string="a"), MatchRule(string="b")]),
(
[re.compile("a"), _match_function],
[
MatchRule(pattern=re.compile("a")),
MatchRule(function=_match_function),
],
),
# Anything that doesn't fit is converted to a string.
(100, MatchRule(string="100")),
],
)
def test__make_match_rules(self, obj, result):
actual = list(SoupStrainer._make_match_rules(obj, MatchRule))
# Helper to reduce the number of single-item lists in the
# parameters.
if len(actual) == 1:
[actual] = actual
assert result == actual
@pytest.mark.parametrize(
"cls, result",
[
(AttributeValueMatchRule, AttributeValueMatchRule(string="a")),
(StringMatchRule, StringMatchRule(string="a")),
],
)
def test__make_match_rules_different_classes(self, cls, result):
actual = cls(string="a")
assert actual == result
def test__make_match_rules_nested_list(self):
# If you pass a nested list into _make_match_rules, it's
# turned into a restriction that excludes everything, to avoid the
# possibility of an infinite recursion.
# Create a self-referential object.
selfref = []
selfref.append(selfref)
with warnings.catch_warnings(record=True) as w:
rules = SoupStrainer._make_match_rules(["a", selfref, "b"], MatchRule)
assert list(rules) == [MatchRule(string="a"), MatchRule(exclude_everything=True), MatchRule(string="b")]
[warning] = w
# Don't check the filename because the stacklevel is
# designed for normal use and we're testing the private
# method directly.
msg = str(warning.message)
assert (
msg
== "Ignoring nested list [[...]] to avoid the possibility of infinite recursion."
)
def tag_matches(
self,
strainer: SoupStrainer,
name: str,
attrs: Optional[_RawAttributeValues] = None,
string: Optional[str] = None,
prefix: Optional[str] = None,
) -> bool:
# Create a Tag with the given prefix, name and attributes,
# then make sure that strainer.matches_tag and allow_tag_creation
# both approve it.
tag = Tag(prefix=prefix, name=name, attrs=attrs)
if string:
tag.string = string
return strainer.matches_tag(tag) and strainer.allow_tag_creation(
prefix, name, attrs
)
def test_matches_tag_with_only_string(self):
# A SoupStrainer that only has StringMatchRules won't ever
# match a Tag.
strainer = SoupStrainer(string=["a string", re.compile("string")])
tag = Tag(name="b", attrs=dict(id="1"))
tag.string = "a string"
assert not strainer.matches_tag(tag)
# There has to be a TagNameMatchRule or an
# AttributeValueMatchRule as well.
strainer.name_rules.append(TagNameMatchRule(string="b"))
assert strainer.matches_tag(tag)
strainer.name_rules = []
strainer.attribute_rules["id"] = [AttributeValueMatchRule("1")]
assert strainer.matches_tag(tag)
def test_matches_tag_with_prefix(self):
# If a tag has an attached namespace prefix, the tag's name is
# tested both with and without the prefix.
kwargs = dict(name="a", prefix="ns")
assert self.tag_matches(SoupStrainer(name="a"), **kwargs)
assert self.tag_matches(SoupStrainer(name="ns:a"), **kwargs)
assert not self.tag_matches(SoupStrainer(name="ns2:a"), **kwargs)
def test_one_name_rule_must_match(self):
# If there are any TagNameMatchRules, at least one must match.
kwargs = dict(name="b")
assert self.tag_matches(SoupStrainer(name="b"), **kwargs)
assert not self.tag_matches(SoupStrainer(name="c"), **kwargs)
assert self.tag_matches(SoupStrainer(name=["c", "d", "d", "b"]), **kwargs)
assert self.tag_matches(
SoupStrainer(name=[re.compile("c-f"), re.compile("[ab]$")]), **kwargs
)
def test_one_attribute_rule_must_match_for_each_attribute(self):
# If there is one or more AttributeValueMatchRule for a given
# attribute, at least one must match that attribute's
# value. This is true for *every* attribute -- just matching one
# attribute isn't enough.
kwargs = dict(name="b", attrs={"class": "main", "id": "1"})
# 'class' and 'id' match
assert self.tag_matches(
SoupStrainer(
class_=["other", "main"], id=["20", "a", re.compile("^[0-9]")]
),
**kwargs,
)
# 'class' and 'id' are present and 'data' attribute is missing
assert self.tag_matches(
SoupStrainer(class_=True, id=True, data=False), **kwargs
)
# 'id' matches, 'class' does not.
assert not self.tag_matches(SoupStrainer(class_=["other"], id=["2"]), **kwargs)
# 'class' matches, 'id' does not
assert not self.tag_matches(SoupStrainer(class_=["main"], id=["2"]), **kwargs)
# 'class' and 'id' match but 'data' attribute is missing
assert not self.tag_matches(
SoupStrainer(class_=["main"], id=["1"], data=True), **kwargs
)
def test_match_against_multi_valued_attribute(self):
# If an attribute has multiple values, only one of them
# has to match the AttributeValueMatchRule.
kwargs = dict(name="b", attrs={"class": ["main", "big"]})
assert self.tag_matches(SoupStrainer(attrs="main"), **kwargs)
assert self.tag_matches(SoupStrainer(attrs="big"), **kwargs)
assert self.tag_matches(SoupStrainer(attrs=["main", "big"]), **kwargs)
assert self.tag_matches(SoupStrainer(attrs=["big", "small"]), **kwargs)
assert not self.tag_matches(SoupStrainer(attrs=["small", "smaller"]), **kwargs)
def test_match_against_multi_valued_attribute_as_string(self):
# If an attribute has multiple values, you can treat the entire
# thing as one string during a match.
kwargs = dict(name="b", attrs={"class": ["main", "big"]})
assert self.tag_matches(SoupStrainer(attrs="main big"), **kwargs)
# But you can't put them in any order; it's got to be the
# order they are present in the Tag, which basically means the
# order they were originally present in the document.
assert not self.tag_matches(SoupStrainer(attrs=["big main"]), **kwargs)
def test_one_string_rule_must_match(self):
# If there's a TagNameMatchRule and/or an
# AttributeValueMatchRule, then the StringMatchRule is _not_
# ignored, and must match as well.
tag = Tag(name="b", attrs=dict(id="1"))
tag.string = "A string"
assert SoupStrainer(name="b", string="A string").matches_tag(tag)
assert not SoupStrainer(name="a", string="A string").matches_tag(tag)
assert not SoupStrainer(name="a", string="Wrong string").matches_tag(tag)
assert SoupStrainer(id="1", string="A string").matches_tag(tag)
assert not SoupStrainer(id="2", string="A string").matches_tag(tag)
assert not SoupStrainer(id="1", string="Wrong string").matches_tag(tag)
assert SoupStrainer(name="b", id="1", string="A string").matches_tag(tag)
# If there are multiple string rules, only one needs to match.
assert SoupStrainer(
name="b",
id="1",
string=["Wrong string", "Also wrong", re.compile("string")],
).matches_tag(tag)
def test_allowing_tag_implies_allowing_its_contents(self):
markup = "<a><b>one string<div>another string</div></b></a>"
# Letting the <b> tag through implies parsing the <div> tag
# and both strings, even though they wouldn't match the
# SoupStrainer on their own.
assert (
"<b>one string<div>another string</div></b>"
== self.soup(markup, parse_only=SoupStrainer(name="b")).decode()
)
@pytest.mark.parametrize(
"soupstrainer",
[
SoupStrainer(name="b", string="one string"),
SoupStrainer(name="div", string="another string"),
],
)
def test_parse_only_combining_tag_and_string(self, soupstrainer):
# If you pass parse_only a SoupStrainer that contains both tag
# restrictions and string restrictions, you get no results,
# because the string restrictions can't be evaluated during
# the parsing process, and the tag restrictions eliminate
# any strings from consideration.
#
# We can detect this ahead of time, and warn about it,
# thanks to SoupStrainer.excludes_everything
markup = "<a><b>one string<div>another string</div></b></a>"
with warnings.catch_warnings(record=True) as w:
assert soupstrainer.excludes_everything
assert "" == self.soup(markup, parse_only=soupstrainer).decode()
[warning] = w
assert warning.filename == __file__
assert str(warning.message).startswith(
"The given value for parse_only will exclude everything:"
)
# The average SoupStrainer has excludes_everything=False
assert not SoupStrainer().excludes_everything
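# Editorial sketch, not part of the upstream suite: excludes_everything
# can be checked up front, before any parsing, to reject a strainer
# that cannot possibly produce results.
def test_excludes_everything_check_sketch(self):
    useless = SoupStrainer(name="b", string="one string")
    assert useless.excludes_everything
    assert not SoupStrainer(name="b").excludes_everything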
def test_documentation_examples(self):
"""Medium-weight real-world tests based on the Beautiful Soup
documentation.
"""
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
only_a_tags = SoupStrainer("a")
only_tags_with_id_link2 = SoupStrainer(id="link2")
def is_short_string(string):
return string is not None and len(string) < 10
only_short_strings = SoupStrainer(string=is_short_string)
a_soup = self.soup(html_doc, parse_only=only_a_tags)
assert (
'<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a><a class="sister" href="http://example.com/lacie" id="link2">Lacie</a><a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>'
== a_soup.decode()
)
id_soup = self.soup(html_doc, parse_only=only_tags_with_id_link2)
assert (
'<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>'
== id_soup.decode()
)
string_soup = self.soup(html_doc, parse_only=only_short_strings)
assert "\n\n\nElsie,\nLacie and\nTillie\n...\n" == string_soup.decode()

View File

@ -0,0 +1,170 @@
import pytest
from bs4.element import Tag
from bs4.formatter import (
Formatter,
HTMLFormatter,
XMLFormatter,
)
from . import SoupTest
class TestFormatter(SoupTest):
def test_default_attributes(self):
# Test the default behavior of Formatter.attributes().
formatter = Formatter()
tag = Tag(name="tag")
tag["b"] = "1"
tag["a"] = "2"
# Attributes come out sorted by name. In Python 3, attributes
# normally come out of a dictionary in the order they were
# added.
assert [("a", "2"), ("b", "1")] == formatter.attributes(tag)
# This works even if Tag.attrs is None, though this shouldn't
# normally happen.
tag.attrs = None
assert [] == formatter.attributes(tag)
assert " " == formatter.indent
def test_sort_attributes(self):
# Test the ability to override Formatter.attributes() to,
# e.g., disable the normal sorting of attributes.
class UnsortedFormatter(Formatter):
def attributes(self, tag):
self.called_with = tag
for k, v in sorted(tag.attrs.items()):
if k == "ignore":
continue
yield k, v
soup = self.soup('<p cval="1" aval="2" ignore="ignored"></p>')
formatter = UnsortedFormatter()
decoded = soup.decode(formatter=formatter)
# attributes() was called on the <p> tag. It filtered out one
# attribute and sorted the other two.
assert formatter.called_with == soup.p
assert '<p aval="2" cval="1"></p>' == decoded
def test_empty_attributes_are_booleans(self):
# Test the behavior of empty_attributes_are_booleans as well
# as which Formatters have it enabled.
for name in ("html", "minimal", None):
formatter = HTMLFormatter.REGISTRY[name]
assert False is formatter.empty_attributes_are_booleans
formatter = XMLFormatter.REGISTRY[None]
assert False is formatter.empty_attributes_are_booleans
formatter = HTMLFormatter.REGISTRY["html5"]
assert True is formatter.empty_attributes_are_booleans
# Verify that the constructor sets the value.
formatter = Formatter(empty_attributes_are_booleans=True)
assert True is formatter.empty_attributes_are_booleans
# Now demonstrate what it does to markup.
for markup in ("<option selected></option>", '<option selected=""></option>'):
soup = self.soup(markup)
for formatter in ("html", "minimal", "xml", None):
assert b'<option selected=""></option>' == soup.option.encode(
formatter="html"
)
assert b"<option selected></option>" == soup.option.encode(
formatter="html5"
)
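# Editorial sketch, not part of the upstream suite: the same flag can
# be set on a one-off formatter instance instead of looking one up in
# the registry. The method name is hypothetical.
def test_empty_attributes_are_booleans_instance_sketch(self):
    soup = self.soup('<option selected=""></option>')
    formatter = HTMLFormatter(empty_attributes_are_booleans=True)
    assert "<option selected></option>" == soup.option.decode(formatter=formatter)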
@pytest.mark.parametrize(
"indent,expect",
[
(None, "<a>\n<b>\ntext\n</b>\n</a>\n"),
(-1, "<a>\n<b>\ntext\n</b>\n</a>\n"),
(0, "<a>\n<b>\ntext\n</b>\n</a>\n"),
("", "<a>\n<b>\ntext\n</b>\n</a>\n"),
(1, "<a>\n <b>\n text\n </b>\n</a>\n"),
(2, "<a>\n <b>\n text\n </b>\n</a>\n"),
("\t", "<a>\n\t<b>\n\t\ttext\n\t</b>\n</a>\n"),
("abc", "<a>\nabc<b>\nabcabctext\nabc</b>\n</a>\n"),
# Some invalid inputs -- the default behavior is used.
(object(), "<a>\n <b>\n text\n </b>\n</a>\n"),
(b"bytes", "<a>\n <b>\n text\n </b>\n</a>\n"),
],
)
def test_indent(self, indent, expect):
# Pretty-print a tree with a Formatter set to
# indent in a certain way and verify the results.
soup = self.soup("<a><b>text</b></a>")
formatter = Formatter(indent=indent)
assert soup.prettify(formatter=formatter) == expect
# Pretty-printing only happens with prettify(), not
# encode().
assert soup.encode(formatter=formatter) != expect
def test_default_indent_value(self):
formatter = Formatter()
assert formatter.indent == " "
@pytest.mark.parametrize("formatter,expect",
[
(HTMLFormatter(indent=1), "<p>\n a\n</p>\n"),
(HTMLFormatter(indent=2), "<p>\n a\n</p>\n"),
(XMLFormatter(indent=1), "<p>\n a\n</p>\n"),
(XMLFormatter(indent="\t"), "<p>\n\ta\n</p>\n"),
] )
def test_indent_subclasses(self, formatter, expect):
soup = self.soup("<p>a</p>")
assert expect == soup.p.prettify(formatter=formatter)
@pytest.mark.parametrize(
"s,expect_html,expect_html5",
[
# The html5 formatter is much less aggressive about escaping ampersands
# than the html formatter.
("foo & bar", "foo &amp; bar", "foo & bar"),
("foo&", "foo&amp;", "foo&"),
("foo&&& bar", "foo&amp;&amp;&amp; bar", "foo&&& bar"),
("x=1&y=2", "x=1&amp;y=2", "x=1&y=2"),
("&123", "&amp;123", "&123"),
("&abc", "&amp;abc", "&abc"),
("foo &0 bar", "foo &amp;0 bar", "foo &0 bar"),
("foo &lolwat bar", "foo &amp;lolwat bar", "foo &lolwat bar"),
# But both formatters escape what the HTML5 spec considers ambiguous ampersands.
("&nosuchentity;", "&amp;nosuchentity;", "&amp;nosuchentity;"),
],
)
def test_entity_substitution(self, s, expect_html, expect_html5):
assert HTMLFormatter.REGISTRY["html"].substitute(s) == expect_html
assert HTMLFormatter.REGISTRY["html5"].substitute(s) == expect_html5
assert HTMLFormatter.REGISTRY["html5-4.12"].substitute(s) == expect_html
def test_entity_round_trip(self):
# This is more an explanatory test and a way to avoid regressions than a test of functionality.
markup = "<p>Some division signs: ÷ &divide; &#247; &#xf7;. These are made with: ÷ &amp;divide; &amp;#247;</p>"
soup = self.soup(markup)
assert (
"Some division signs: ÷ ÷ ÷ ÷. These are made with: ÷ &divide; &#247;"
== soup.p.string
)
# Oops, I forgot to mention the entity.
soup.p.string = soup.p.string + " &#xf7;"
assert (
"Some division signs: ÷ ÷ ÷ ÷. These are made with: ÷ &divide; &#247; &#xf7;"
== soup.p.string
)
expect = "<p>Some division signs: &divide; &divide; &divide; &divide;. These are made with: &divide; &amp;divide; &amp;#247; &amp;#xf7;</p>"
assert expect == soup.p.decode(formatter="html")
assert expect == soup.p.decode(formatter="html5")
markup = "<p>a & b</p>"
soup = self.soup(markup)
assert "<p>a &amp; b</p>" == soup.p.decode(formatter="html")
assert "<p>a & b</p>" == soup.p.decode(formatter="html5")

View File

@ -0,0 +1,181 @@
"""This file contains test cases reported by third parties using
fuzzing tools, primarily from Google's oss-fuzz project. Some of these
represent real problems with Beautiful Soup, but many are problems in
libraries that Beautiful Soup depends on, and many of the test cases
represent different ways of triggering the same problem.
Grouping these test cases together makes it easy to see which test
cases represent the same problem, and puts the test cases in close
proximity to code that can trigger the problems.
"""
import os
import importlib.util
import pytest
from bs4 import (
BeautifulSoup,
ParserRejectedMarkup,
)
try:
from soupsieve.util import SelectorSyntaxError
has_lxml = importlib.util.find_spec("lxml")
has_html5lib = importlib.util.find_spec("html5lib")
fully_fuzzable = has_lxml is not None and has_html5lib is not None
except ImportError:
fully_fuzzable = False
@pytest.mark.skipif(
not fully_fuzzable, reason="Prerequisites for fuzz tests are not installed."
)
class TestFuzz(object):
# Test case markup files from fuzzers are given this extension so
# they can be included in builds.
TESTCASE_SUFFIX = ".testcase"
# Copied 20230512 from
# https://github.com/google/oss-fuzz/blob/4ac6a645a197a695fe76532251feb5067076b3f3/projects/bs4/bs4_fuzzer.py
#
# Copying the code lets us precisely duplicate the behavior of
# oss-fuzz. The downside is that this code changes over time, so
# multiple copies of the code must be kept around to run against
# older tests. I'm not sure what to do about this, but I may
# retire old tests after a time.
def fuzz_test_with_css(self, filename: str) -> None:
data = self.__markup(filename)
parsers = ["lxml-xml", "html5lib", "html.parser", "lxml"]
try:
idx = int(data[0]) % len(parsers)
except ValueError:
return
css_selector, data = data[1:10], data[10:]
try:
soup = BeautifulSoup(data[1:], features=parsers[idx])
except ParserRejectedMarkup:
return
except ValueError:
return
list(soup.find_all(True))
try:
soup.css.select(css_selector.decode("utf-8", "replace"))
except SelectorSyntaxError:
return
soup.prettify()
# This class of error has been fixed by catching a less helpful
# exception from html.parser and raising ParserRejectedMarkup
# instead.
@pytest.mark.parametrize(
"filename",
[
"clusterfuzz-testcase-minimized-bs4_fuzzer-5703933063462912",
"crash-ffbdfa8a2b26f13537b68d3794b0478a4090ee4a",
],
)
def test_rejected_markup(self, filename):
markup = self.__markup(filename)
with pytest.raises(ParserRejectedMarkup):
BeautifulSoup(markup, "html.parser")
# This class of error has to do with very deeply nested documents
# which overflow the Python call stack when the tree is converted
# to a string. This is an issue with Beautiful Soup which was fixed
# as part of [bug=1471755].
#
# These test cases are in the older format that doesn't specify
# which parser to use or give a CSS selector.
@pytest.mark.parametrize(
"filename",
[
"clusterfuzz-testcase-minimized-bs4_fuzzer-5984173902397440",
"clusterfuzz-testcase-minimized-bs4_fuzzer-5167584867909632",
"clusterfuzz-testcase-minimized-bs4_fuzzer-6124268085182464",
"clusterfuzz-testcase-minimized-bs4_fuzzer-6450958476902400",
],
)
def test_deeply_nested_document_without_css(self, filename):
# Parsing the document and encoding it back to a string is
# sufficient to demonstrate that the overflow problem has
# been fixed.
markup = self.__markup(filename)
BeautifulSoup(markup, "html.parser").encode()
# This class of error has to do with very deeply nested documents
# which overflow the Python call stack when the tree is converted
# to a string. This is an issue with Beautiful Soup which was fixed
# as part of [bug=1471755].
@pytest.mark.parametrize(
"filename",
[
"clusterfuzz-testcase-minimized-bs4_fuzzer-5000587759190016",
"clusterfuzz-testcase-minimized-bs4_fuzzer-5375146639360000",
"clusterfuzz-testcase-minimized-bs4_fuzzer-5492400320282624",
],
)
def test_deeply_nested_document(self, filename):
self.fuzz_test_with_css(filename)
@pytest.mark.parametrize(
"filename",
[
"clusterfuzz-testcase-minimized-bs4_fuzzer-4670634698080256",
"clusterfuzz-testcase-minimized-bs4_fuzzer-5270998950477824",
],
)
def test_soupsieve_errors(self, filename):
self.fuzz_test_with_css(filename)
# This class of error represents problems with html5lib's parser,
# not Beautiful Soup. I use
# https://github.com/html5lib/html5lib-python/issues/568 to notify
# the html5lib developers of these issues.
#
# These test cases are in the older format that doesn't specify
# which parser to use or give a CSS selector.
@pytest.mark.skip(reason="html5lib-specific problems")
@pytest.mark.parametrize(
"filename",
[
# b"""ÿ<!DOCTyPEV PUBLIC'''Ð'"""
"clusterfuzz-testcase-minimized-bs4_fuzzer-4818336571064320",
# b')<a><math><TR><a><mI><a><p><a>'
"clusterfuzz-testcase-minimized-bs4_fuzzer-4999465949331456",
# b'-<math><sElect><mi><sElect><sElect>'
"clusterfuzz-testcase-minimized-bs4_fuzzer-5843991618256896",
# b'ñ<table><svg><html>'
"clusterfuzz-testcase-minimized-bs4_fuzzer-6241471367348224",
# <TABLE>, some ^@ characters, some <math> tags.
"clusterfuzz-testcase-minimized-bs4_fuzzer-6600557255327744",
# Nested table
"crash-0d306a50c8ed8bcd0785b67000fcd5dea1d33f08",
],
)
def test_html5lib_parse_errors_without_css(self, filename):
markup = self.__markup(filename)
print(BeautifulSoup(markup, "html5lib").encode())
# This class of error represents problems with html5lib's parser,
# not Beautiful Soup. I use
# https://github.com/html5lib/html5lib-python/issues/568 to notify
# the html5lib developers of these issues.
@pytest.mark.skip(reason="html5lib-specific problems")
@pytest.mark.parametrize(
"filename",
[
# b'- \xff\xff <math>\x10<select><mi><select><select>t'
"clusterfuzz-testcase-minimized-bs4_fuzzer-6306874195312640",
],
)
def test_html5lib_parse_errors(self, filename):
self.fuzz_test_with_css(filename)
def __markup(self, filename: str) -> bytes:
if not filename.endswith(self.TESTCASE_SUFFIX):
filename += self.TESTCASE_SUFFIX
this_dir = os.path.split(__file__)[0]
path = os.path.join(this_dir, "fuzz", filename)
return open(path, "rb").read()

View File

@ -0,0 +1,264 @@
"""Tests to ensure that the html5lib tree builder generates good trees."""
import pytest
import warnings
from bs4 import BeautifulSoup
from bs4.filter import SoupStrainer
from . import (
HTML5LIB_PRESENT,
HTML5TreeBuilderSmokeTest,
)
@pytest.mark.skipif(
not HTML5LIB_PRESENT,
reason="html5lib seems not to be present, not testing its tree builder.",
)
class TestHTML5LibBuilder(HTML5TreeBuilderSmokeTest):
"""See ``HTML5TreeBuilderSmokeTest``."""
@property
def default_builder(self):
from bs4.builder import HTML5TreeBuilder
return HTML5TreeBuilder
def test_soupstrainer(self):
# The html5lib tree builder does not support parse_only.
strainer = SoupStrainer("b")
markup = "<p>A <b>bold</b> statement.</p>"
with warnings.catch_warnings(record=True) as w:
soup = BeautifulSoup(markup, "html5lib", parse_only=strainer)
assert soup.decode() == self.document_for(markup)
[warning] = w
assert warning.filename == __file__
assert "the html5lib tree builder doesn't support parse_only" in str(
warning.message
)
def test_correctly_nested_tables(self):
"""html5lib inserts <tbody> tags where other parsers don't."""
markup = (
'<table id="1">'
"<tr>"
"<td>Here's another table:"
'<table id="2">'
"<tr><td>foo</td></tr>"
"</table></td>"
)
self.assert_soup(
markup,
'<table id="1"><tbody><tr><td>Here\'s another table:'
'<table id="2"><tbody><tr><td>foo</td></tr></tbody></table>'
"</td></tr></tbody></table>",
)
self.assert_soup(
"<table><thead><tr><td>Foo</td></tr></thead>"
"<tbody><tr><td>Bar</td></tr></tbody>"
"<tfoot><tr><td>Baz</td></tr></tfoot></table>"
)
def test_xml_declaration_followed_by_doctype(self):
markup = """<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<p>foo</p>
</body>
</html>"""
soup = self.soup(markup)
# Verify that we can reach the <p> tag; this means the tree is connected.
assert b"<p>foo</p>" == soup.p.encode()
def test_reparented_markup(self):
markup = "<p><em>foo</p>\n<p>bar<a></a></em></p>"
soup = self.soup(markup)
assert (
"<body><p><em>foo</em></p><em>\n</em><p><em>bar<a></a></em></p></body>"
== soup.body.decode()
)
assert 2 == len(soup.find_all("p"))
def test_reparented_markup_ends_with_whitespace(self):
markup = "<p><em>foo</p>\n<p>bar<a></a></em></p>\n"
soup = self.soup(markup)
assert (
"<body><p><em>foo</em></p><em>\n</em><p><em>bar<a></a></em></p>\n</body>"
== soup.body.decode()
)
assert 2 == len(soup.find_all("p"))
def test_reparented_markup_containing_identical_whitespace_nodes(self):
"""Verify that we keep the two whitespace nodes in this
document distinct when reparenting the adjacent <tbody> tags.
"""
markup = "<table> <tbody><tbody><ims></tbody> </table>"
soup = self.soup(markup)
space1, space2 = soup.find_all(string=" ")
tbody1, tbody2 = soup.find_all("tbody")
assert space1.next_element is tbody1
assert tbody2.next_element is space2
def test_reparented_markup_containing_children(self):
markup = (
"<div><a>aftermath<p><noscript>target</noscript>aftermath</a></p></div>"
)
soup = self.soup(markup)
noscript = soup.noscript
assert "target" == noscript.next_element
target = soup.find(string="target")
# The 'aftermath' string was duplicated; we want the second one.
final_aftermath = soup.find_all(string="aftermath")[-1]
# The <noscript> tag was moved beneath a copy of the <a> tag,
# but the 'target' string within is still connected to the
# (second) 'aftermath' string.
assert final_aftermath == target.next_element
assert target == final_aftermath.previous_element
def test_processing_instruction(self):
"""Processing instructions become comments."""
markup = b"""<?PITarget PIContent?>"""
soup = self.soup(markup)
assert str(soup).startswith("<!--?PITarget PIContent?-->")
def test_cloned_multivalue_node(self):
markup = b"""<a class="my_class"><p></a>"""
soup = self.soup(markup)
a1, a2 = soup.find_all("a")
assert a1 == a2
assert a1 is not a2
def test_foster_parenting(self):
markup = b"""<table><td></tbody>A"""
soup = self.soup(markup)
assert (
"<body>A<table><tbody><tr><td></td></tr></tbody></table></body>"
== soup.body.decode()
)
def test_extraction(self):
"""
Test that extraction does not destroy the tree.
https://bugs.launchpad.net/beautifulsoup/+bug/1782928
"""
markup = """
<html><head></head>
<style>
</style><script></script><body><p>hello</p></body></html>
"""
soup = self.soup(markup)
[s.extract() for s in soup("script")]
[s.extract() for s in soup("style")]
assert len(soup.find_all("p")) == 1
def test_empty_comment(self):
"""
Test that empty comment does not break structure.
https://bugs.launchpad.net/beautifulsoup/+bug/1806598
"""
markup = """
<html>
<body>
<form>
<!----><input type="text">
</form>
</body>
</html>
"""
soup = self.soup(markup)
inputs = []
for form in soup.find_all("form"):
inputs.extend(form.find_all("input"))
assert len(inputs) == 1
def test_tracking_line_numbers(self):
# The html5lib TreeBuilder keeps track of line number and
# position of each element.
markup = "\n <p>\n\n<sourceline>\n<b>text</b></sourceline><sourcepos></p>"
soup = self.soup(markup)
assert 2 == soup.p.sourceline
assert 5 == soup.p.sourcepos
assert "sourceline" == soup.p.find("sourceline").name
# You can deactivate this behavior.
soup = self.soup(markup, store_line_numbers=False)
assert None is soup.p.sourceline
assert None is soup.p.sourcepos
def test_special_string_containers(self):
# The html5lib tree builder doesn't support this standard feature,
# because there's no way of knowing, when a string is created,
# where in the tree it will eventually end up.
pass
def test_html5_attributes(self):
# The html5lib TreeBuilder can convert any entity named in
# the HTML5 spec to a sequence of Unicode characters, and
# convert those Unicode characters to a (potentially
# different) named entity on the way out.
#
# This is a copy of the same test from
# HTMLParserTreeBuilderSmokeTest. It's not in the superclass
# because the lxml HTML TreeBuilder _doesn't_ work this way.
for input_element, output_unicode, output_element in (
("&RightArrowLeftArrow;", "\u21c4", b"&rlarr;"),
("&models;", "\u22a7", b"&models;"),
("&Nfr;", "\U0001d511", b"&Nfr;"),
("&ngeqq;", "\u2267\u0338", b"&ngeqq;"),
("&not;", "\xac", b"&not;"),
("&Not;", "\u2aec", b"&Not;"),
("&quot;", '"', b'"'),
("&there4;", "\u2234", b"&there4;"),
("&Therefore;", "\u2234", b"&there4;"),
("&therefore;", "\u2234", b"&there4;"),
("&fjlig;", "fj", b"fj"),
("&sqcup;", "\u2294", b"&sqcup;"),
("&sqcups;", "\u2294\ufe00", b"&sqcups;"),
("&apos;", "'", b"'"),
("&verbar;", "|", b"|"),
):
markup = "<div>%s</div>" % input_element
div = self.soup(markup).div
without_element = div.encode()
expect = b"<div>%s</div>" % output_unicode.encode("utf8")
assert without_element == expect
with_element = div.encode(formatter="html")
expect = b"<div>%s</div>" % output_element
assert with_element == expect
@pytest.mark.parametrize(
"name,value",
[("document_declared_encoding", "utf8"), ("exclude_encodings", ["utf8"])],
)
def test_prepare_markup_warnings(self, name, value):
# html5lib doesn't support a couple of the common arguments to
# prepare_markup.
builder = self.default_builder()
kwargs = {name: value}
with warnings.catch_warnings(record=True) as w:
list(builder.prepare_markup("a", **kwargs))
[warning] = w
msg = str(warning.message)
assert (
msg
== f"You provided a value for {name}, but the html5lib tree builder doesn't support {name}."
)
def test_doctype_filtered(self):
# Since the html5lib parser doesn't support parse_only, this standard
# smoke-test test can't be run.
pass

View File

@ -0,0 +1,161 @@
"""Tests to ensure that the html.parser tree builder generates good
trees."""
import pickle
import pytest
from bs4.builder._htmlparser import (
_DuplicateAttributeHandler,
BeautifulSoupHTMLParser,
HTMLParserTreeBuilder,
)
from bs4.exceptions import ParserRejectedMarkup
from typing import Any
from . import HTMLTreeBuilderSmokeTest
class TestHTMLParserTreeBuilder(HTMLTreeBuilderSmokeTest):
default_builder = HTMLParserTreeBuilder
def test_rejected_input(self):
# Python's html.parser will occasionally reject markup,
# especially when there is a problem with the initial DOCTYPE
# declaration. Different versions of Python sound the alarm in
# different ways, but Beautiful Soup consistently raises
# errors as ParserRejectedMarkup exceptions.
bad_markup = [
# https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=28873
# https://github.com/guidovranken/python-library-fuzzers/blob/master/corp-html/519e5b4269a01185a0d5e76295251921da2f0700
# https://github.com/python/cpython/issues/81928
b"\n<![\xff\xfe\xfe\xcd\x00",
# https://github.com/guidovranken/python-library-fuzzers/blob/master/corp-html/de32aa55785be29bbc72a1a8e06b00611fb3d9f8
# https://github.com/python/cpython/issues/78661
#
b"<![n\x00",
b"<![UNKNOWN[]]>",
]
for markup in bad_markup:
with pytest.raises(ParserRejectedMarkup):
self.soup(markup)
def test_namespaced_system_doctype(self):
# html.parser can't handle namespaced doctypes, so skip this one.
pass
def test_namespaced_public_doctype(self):
# html.parser can't handle namespaced doctypes, so skip this one.
pass
def test_builder_is_pickled(self):
"""Unlike most tree builders, HTMLParserTreeBuilder and will
be restored after pickling.
"""
tree = self.soup("<a><b>foo</a>")
dumped = pickle.dumps(tree, 2)
loaded = pickle.loads(dumped)
assert isinstance(loaded.builder, type(tree.builder))
def test_redundant_empty_element_closing_tags(self):
self.assert_soup("<br></br><br></br><br></br>", "<br/><br/><br/>")
self.assert_soup("</br></br></br>", "")
def test_empty_element(self):
# This verifies that any buffered data present when the parser
# finishes working is handled.
self.assert_soup("foo &# bar", "foo &amp;# bar")
def test_tracking_line_numbers(self):
# The html.parser TreeBuilder keeps track of line number and
# position of each element.
markup = "\n <p>\n\n<sourceline>\n<b>text</b></sourceline><sourcepos></p>"
soup = self.soup(markup)
assert 2 == soup.p.sourceline
assert 3 == soup.p.sourcepos
assert "sourceline" == soup.p.find("sourceline").name
# You can deactivate this behavior.
soup = self.soup(markup, store_line_numbers=False)
assert None is soup.p.sourceline
assert None is soup.p.sourcepos
def test_on_duplicate_attribute(self):
# The html.parser tree builder has a variety of ways of
# handling a tag that contains the same attribute multiple times.
markup = '<a class="cls" href="url1" href="url2" href="url3" id="id">'
# If you don't provide any particular value for
# on_duplicate_attribute, later values replace earlier values.
soup = self.soup(markup)
assert "url3" == soup.a["href"]
assert ["cls"] == soup.a["class"]
assert "id" == soup.a["id"]
# You can also get this behavior explicitly.
def assert_attribute(
on_duplicate_attribute: _DuplicateAttributeHandler, expected: Any
) -> None:
soup = self.soup(markup, on_duplicate_attribute=on_duplicate_attribute)
assert soup.a is not None
assert expected == soup.a["href"]
# Verify that non-duplicate attributes are treated normally.
assert ["cls"] == soup.a["class"]
assert "id" == soup.a["id"]
assert_attribute(None, "url3")
assert_attribute(BeautifulSoupHTMLParser.REPLACE, "url3")
# You can ignore subsequent values in favor of the first.
assert_attribute(BeautifulSoupHTMLParser.IGNORE, "url1")
# And you can pass in a callable that does whatever you want.
def accumulate(attrs, key, value):
if not isinstance(attrs[key], list):
attrs[key] = [attrs[key]]
attrs[key].append(value)
assert_attribute(accumulate, ["url1", "url2", "url3"])
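# Editorial sketch, not part of the upstream suite: REPLACE and IGNORE
# are plain strings under the hood, so the same policies can be
# requested without importing BeautifulSoupHTMLParser.
def test_on_duplicate_attribute_as_string_sketch(self):
    markup = '<a href="url1" href="url2">'
    soup = self.soup(markup, on_duplicate_attribute="ignore")
    assert "url1" == soup.a["href"]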
def test_html5_attributes(self):
# The html.parser TreeBuilder can convert any entity named in
# the HTML5 spec to a sequence of Unicode characters, and
# convert those Unicode characters to a (potentially
# different) named entity on the way out.
for input_element, output_unicode, output_element in (
("&RightArrowLeftArrow;", "\u21c4", b"&rlarr;"),
("&models;", "\u22a7", b"&models;"),
("&Nfr;", "\U0001d511", b"&Nfr;"),
("&ngeqq;", "\u2267\u0338", b"&ngeqq;"),
("&not;", "\xac", b"&not;"),
("&Not;", "\u2aec", b"&Not;"),
("&quot;", '"', b'"'),
("&there4;", "\u2234", b"&there4;"),
("&Therefore;", "\u2234", b"&there4;"),
("&therefore;", "\u2234", b"&there4;"),
("&fjlig;", "fj", b"fj"),
("&sqcup;", "\u2294", b"&sqcup;"),
("&sqcups;", "\u2294\ufe00", b"&sqcups;"),
("&apos;", "'", b"'"),
("&verbar;", "|", b"|"),
):
markup = "<div>%s</div>" % input_element
div = self.soup(markup).div
without_element = div.encode()
expect = b"<div>%s</div>" % output_unicode.encode("utf8")
assert without_element == expect
with_element = div.encode(formatter="html")
expect = b"<div>%s</div>" % output_element
assert with_element == expect
def test_invalid_html_entity(self):
# The html.parser treebuilder can't distinguish between an invalid
# HTML entity with a semicolon and an invalid HTML entity with no
# semicolon.
markup = "<p>a &nosuchentity b</p>"
soup = self.soup(markup)
assert "<p>a &amp;nosuchentity b</p>" == soup.p.decode()
markup = "<p>a &nosuchentity; b</p>"
soup = self.soup(markup)
assert "<p>a &amp;nosuchentity b</p>" == soup.p.decode()

View File

@ -0,0 +1,196 @@
"""Tests to ensure that the lxml tree builder generates good trees."""
import pickle
import pytest
import warnings
from . import LXML_PRESENT, LXML_VERSION
if LXML_PRESENT:
from bs4.builder._lxml import LXMLTreeBuilder, LXMLTreeBuilderForXML
from bs4 import (
BeautifulStoneSoup,
)
from . import (
HTMLTreeBuilderSmokeTest,
XMLTreeBuilderSmokeTest,
SOUP_SIEVE_PRESENT,
)
@pytest.mark.skipif(
not LXML_PRESENT,
reason="lxml seems not to be present, not testing its tree builder.",
)
class TestLXMLTreeBuilder(HTMLTreeBuilderSmokeTest):
"""See ``HTMLTreeBuilderSmokeTest``."""
@property
def default_builder(self):
return LXMLTreeBuilder
def test_out_of_range_entity(self):
self.assert_soup("<p>foo&#10000000000000;bar</p>", "<p>foobar</p>")
self.assert_soup("<p>foo&#x10000000000000;bar</p>", "<p>foobar</p>")
self.assert_soup("<p>foo&#1000000000;bar</p>", "<p>foobar</p>")
def test_entities_in_foreign_document_encoding(self):
# We can't implement this case correctly because by the time we
# hear about markup like "&#147;", it's been (incorrectly) converted into
# a string like u'\x93'
pass
# In lxml < 2.3.5, an empty doctype causes a segfault. Skip this
# test if an old version of lxml is installed.
@pytest.mark.skipif(
not LXML_PRESENT or LXML_VERSION < (2, 3, 5, 0),
reason="Skipping doctype test for old version of lxml to avoid segfault.",
)
def test_empty_doctype(self):
soup = self.soup("<!DOCTYPE>")
doctype = soup.contents[0]
assert "" == doctype.strip()
def test_beautifulstonesoup_is_xml_parser(self):
# Make sure that the deprecated BSS class uses an xml builder
# if one is installed.
with warnings.catch_warnings(record=True) as w:
soup = BeautifulStoneSoup("<b />")
assert "<b/>" == str(soup.b)
[warning] = w
assert warning.filename == __file__
assert "The BeautifulStoneSoup class was deprecated" in str(warning.message)
def test_tracking_line_numbers(self):
# The lxml TreeBuilder cannot keep track of line numbers from
# the original markup. Even if you ask for line numbers, we
# don't have 'em.
#
# However, for consistency with other parsers, Tag.sourceline
# and Tag.sourcepos are always set to None, rather than letting
# attribute access fall through to find("sourceline") and
# find("sourcepos") and return those tags.
soup = self.soup(
"\n <p>\n\n<sourceline>\n<b>text</b></sourceline><sourcepos></p>",
store_line_numbers=True,
)
assert None is soup.p.sourceline
assert None is soup.p.sourcepos
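# Editor's sketch: contrast with html.parser, which does record
# positions (assuming a bs4 version that stores them by default):
#
#   >>> from bs4 import BeautifulSoup
#   >>> p = BeautifulSoup("\n<p>text</p>", "html.parser").p
#   >>> (p.sourceline, p.sourcepos)
#   (2, 0)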
@pytest.mark.skipif(
not LXML_PRESENT,
reason="lxml seems not to be present, not testing its XML tree builder.",
)
class TestLXMLXMLTreeBuilder(XMLTreeBuilderSmokeTest):
"""See ``HTMLTreeBuilderSmokeTest``."""
@property
def default_builder(self):
return LXMLTreeBuilderForXML
def test_namespace_indexing(self):
soup = self.soup(
'<?xml version="1.1"?>\n'
"<root>"
'<tag xmlns="http://unprefixed-namespace.com">content</tag>'
'<prefix:tag2 xmlns:prefix="http://prefixed-namespace.com">content</prefix:tag2>'
'<prefix2:tag3 xmlns:prefix2="http://another-namespace.com">'
'<subtag xmlns="http://another-unprefixed-namespace.com">'
'<subsubtag xmlns="http://yet-another-unprefixed-namespace.com">'
"</prefix2:tag3>"
"</root>"
)
# The BeautifulSoup object includes every namespace prefix
# defined in the entire document. This is the default set of
# namespaces used by soupsieve.
#
# Un-prefixed namespaces are not included, and if a given
# prefix is defined twice, only the first prefix encountered
# in the document shows up here.
assert soup._namespaces == {
"xml": "http://www.w3.org/XML/1998/namespace",
"prefix": "http://prefixed-namespace.com",
"prefix2": "http://another-namespace.com",
}
# A Tag object includes only the namespace prefixes
# that were in scope when it was parsed.
# We do not track un-prefixed namespaces: we could only hold
# one (the first one encountered), and soupsieve would treat it
# as the default namespace even when operating from a tag whose
# own un-prefixed namespace is different.
assert soup.tag._namespaces == {
"xml": "http://www.w3.org/XML/1998/namespace",
}
assert soup.tag2._namespaces == {
"prefix": "http://prefixed-namespace.com",
"xml": "http://www.w3.org/XML/1998/namespace",
}
assert soup.subtag._namespaces == {
"prefix2": "http://another-namespace.com",
"xml": "http://www.w3.org/XML/1998/namespace",
}
assert soup.subsubtag._namespaces == {
"prefix2": "http://another-namespace.com",
"xml": "http://www.w3.org/XML/1998/namespace",
}
@pytest.mark.skipif(not SOUP_SIEVE_PRESENT, reason="Soup Sieve not installed")
def test_namespace_interaction_with_select_and_find(self):
# Demonstrate how namespaces interact with select* and
# find* methods.
soup = self.soup(
'<?xml version="1.1"?>\n'
"<root>"
'<tag xmlns="http://unprefixed-namespace.com">content</tag>'
'<prefix:tag2 xmlns:prefix="http://prefixed-namespace.com">content</tag>'
'<subtag xmlns:prefix="http://another-namespace-same-prefix.com">'
"<prefix:tag3>"
"</subtag>"
"</root>"
)
# soupsieve uses namespace URIs.
assert soup.select_one("tag").name == "tag"
assert soup.select_one("prefix|tag2").name == "tag2"
# If a prefix is declared more than once, only the first usage
# is registered with the BeautifulSoup object.
assert soup.select_one("prefix|tag3") is None
# But you can always explicitly specify a namespace dictionary.
assert (
soup.select_one("prefix|tag3", namespaces=soup.subtag._namespaces).name
== "tag3"
)
# And a Tag (as opposed to the BeautifulSoup object) will
# have a set of default namespaces scoped to that Tag.
assert soup.subtag.select_one("prefix|tag3").name == "tag3"
# The find() methods aren't fully namespace-aware; they just
# look at prefixes.
assert soup.find("tag").name == "tag"
assert soup.find("prefix:tag2").name == "tag2"
assert soup.find("prefix:tag3").name == "tag3"
assert soup.subtag.find("prefix:tag3").name == "tag3"
def test_pickle_restores_builder(self):
# The lxml TreeBuilder is not picklable, so when unpickling
# a document created with it, a new TreeBuilder of the
# appropriate class is created.
soup = self.soup("<a>some markup</a>")
assert isinstance(soup.builder, self.default_builder)
pickled = pickle.dumps(soup)
unpickled = pickle.loads(pickled)
assert "some markup" == unpickled.a.string
assert unpickled.builder != soup.builder
assert isinstance(unpickled.builder, self.default_builder)
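# Editor's sketch of the same round-trip outside the test harness
# (assumes lxml is installed):
#
#   >>> import pickle
#   >>> from bs4 import BeautifulSoup
#   >>> soup = BeautifulSoup("<a>some markup</a>", "lxml")
#   >>> pickle.loads(pickle.dumps(soup)).a.string
#   'some markup'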
