Base code

256  backend/test/BILLING_SYSTEM_INTEGRATION.md  Normal file
@@ -0,0 +1,256 @@

# ALwrity Billing & Subscription System Integration

## Overview

The ALwrity backend now includes a comprehensive billing and subscription system that automatically tracks API usage, calculates costs, and manages subscription limits. This system is fully integrated into the startup process and provides real-time monitoring capabilities.

## 🚀 Quick Start

### 1. Start the Backend with Billing System

```bash
# From the backend directory
python start_alwrity_backend.py
```

The startup script will automatically:

- ✅ Create billing and subscription database tables
- ✅ Initialize default pricing and subscription plans
- ✅ Set up usage tracking middleware
- ✅ Verify all billing components are working
- ✅ Start the server with billing endpoints enabled

### 2. Verify Installation

```bash
# Run the comprehensive verification script
python verify_billing_setup.py
```

### 3. Test API Endpoints

```bash
# Get subscription plans
curl http://localhost:8000/api/subscription/plans

# Get user usage (replace 'demo' with an actual user ID)
curl http://localhost:8000/api/subscription/usage/demo

# Get billing dashboard data
curl http://localhost:8000/api/subscription/dashboard/demo

# Get API pricing information
curl http://localhost:8000/api/subscription/pricing
```

## 📊 Database Tables

The billing system creates the following tables:

| Table Name | Purpose |
|------------|---------|
| `subscription_plans` | Available subscription tiers and pricing |
| `user_subscriptions` | User subscription assignments |
| `api_usage_logs` | Detailed API usage tracking |
| `usage_summaries` | Aggregated usage statistics |
| `api_provider_pricing` | Cost per token for each AI provider |
| `usage_alerts` | Usage limit warnings and notifications |
| `billing_history` | Historical billing records |

## 🔧 System Components

### 1. Database Models (`models/subscription_models.py`)

- **SubscriptionPlan**: Subscription tiers and pricing
- **UserSubscription**: User subscription assignments
- **APIUsageLog**: Detailed usage tracking
- **UsageSummary**: Aggregated statistics
- **APIProviderPricing**: Cost calculations
- **UsageAlert**: Limit notifications

### 2. Services

- **PricingService** (`services/pricing_service.py`): Cost calculations and plan management
- **UsageTrackingService** (`services/usage_tracking_service.py`): Usage monitoring and limits
- **SubscriptionExceptionHandler** (`services/subscription_exception_handler.py`): Error handling

### 3. API Endpoints (`api/subscription_api.py`)

- `GET /api/subscription/plans` - Available subscription plans
- `GET /api/subscription/usage/{user_id}` - User usage statistics
- `GET /api/subscription/dashboard/{user_id}` - Dashboard data
- `GET /api/subscription/pricing` - API pricing information
- `GET /api/subscription/trends/{user_id}` - Usage trends

### 4. Middleware Integration

- **Monitoring Middleware** (`middleware/monitoring_middleware.py`): Automatic usage tracking
- **Exception Handling**: Graceful error handling for billing issues
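
As a rough sketch, middleware like this wraps each request and hands timing and status data to the tracking service. The snippet below is illustrative only — the `usage_tracking_service.log_request` hook is an assumption, not the actual ALwrity API:

```python
import time

from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request


class UsageTrackingMiddleware(BaseHTTPMiddleware):
    """Illustrative usage tracker; see middleware/monitoring_middleware.py for the real one."""

    async def dispatch(self, request: Request, call_next):
        start = time.monotonic()
        response = await call_next(request)
        elapsed = time.monotonic() - start
        # Hypothetical service hook: record endpoint, status, and latency.
        # usage_tracking_service.log_request(
        #     path=request.url.path,
        #     status_code=response.status_code,
        #     response_time=elapsed,
        # )
        return response
```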

## 🎯 Frontend Integration

The billing system is fully integrated with the frontend dashboard:

### CompactBillingDashboard

- Real-time usage metrics
- Cost tracking
- System health monitoring
- Interactive tooltips and help text

### EnhancedBillingDashboard

- Detailed usage breakdowns
- Provider-specific costs
- Usage trends and analytics
- Alert management

## 📈 Usage Tracking

The system automatically tracks:

- **API Calls**: Number of requests to each provider
- **Token Usage**: Input and output tokens for each request
- **Costs**: Real-time cost calculations
- **Response Times**: Performance monitoring
- **Error Rates**: Failed request tracking
- **User Activity**: Per-user usage patterns
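
A single tracked request ends up as one row in `api_usage_logs`. Illustratively, such a record might carry fields like these — the exact schema lives in the `APIUsageLog` model, so the field names here are assumptions:

```python
# Hypothetical shape of one usage-log record; field names are assumptions.
usage_log = {
    "user_id": "demo",
    "provider": "openai",
    "model": "gpt-4",
    "input_tokens": 1000,
    "output_tokens": 500,
    "cost_usd": 0.06,
    "response_time_ms": 840,
    "status": "success",
}
```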

## 💰 Pricing Configuration

### Default AI Provider Pricing (per token)

| Provider | Model | Input Cost | Output Cost |
|----------|-------|------------|-------------|
| OpenAI | GPT-4 | $0.00003 | $0.00006 |
| OpenAI | GPT-3.5-turbo | $0.0000015 | $0.000002 |
| Gemini | Gemini Pro | $0.0000005 | $0.0000015 |
| Anthropic | Claude-3 | $0.000008 | $0.000024 |
| Mistral | Mistral-7B | $0.0000002 | $0.0000006 |
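
Per-request cost is simply tokens times the per-token rate for each direction. With the GPT-4 rates above, 1,000 input tokens and 500 output tokens cost 1,000 × $0.00003 + 500 × $0.00006 = $0.06. A minimal sketch of that calculation (the helper name is illustrative, not the `PricingService` API):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float, output_rate: float) -> float:
    """Per-request cost from per-token rates (illustrative helper)."""
    return input_tokens * input_rate + output_tokens * output_rate


# GPT-4 example: 1,000 input tokens and 500 output tokens -> $0.06.
assert abs(estimate_cost(1000, 500, 0.00003, 0.00006) - 0.06) < 1e-9
```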

### Subscription Plans

| Plan | Monthly Price | Yearly Price | API Limits |
|------|---------------|--------------|------------|
| Free | $0 | $0 | 1,000 calls/month |
| Starter | $29 | $290 | 10,000 calls/month |
| Professional | $99 | $990 | 100,000 calls/month |
| Enterprise | $299 | $2,990 | Unlimited |
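
Enforcement reduces to comparing a user's monthly call count against their plan's limit. A sketch under the assumption that `None` encodes "unlimited" (the real logic lives in `UsageTrackingService`):

```python
from typing import Optional

# Illustrative plan limits mirroring the table above.
PLAN_LIMITS: dict[str, Optional[int]] = {
    "free": 1_000,
    "starter": 10_000,
    "professional": 100_000,
    "enterprise": None,  # unlimited
}


def within_limit(plan: str, calls_this_month: int) -> bool:
    """True if the user may make another API call this month."""
    limit = PLAN_LIMITS[plan]
    return limit is None or calls_this_month < limit
```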

## 🔍 Monitoring & Alerts

### Real-time Monitoring

- Usage tracking for all API calls
- Cost calculations in real-time
- Performance metrics
- Error rate monitoring

### Alert System

- Usage approaching limits (80% threshold)
- Cost overruns
- System health issues
- Provider-specific problems
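
The 80% threshold check is straightforward; a sketch (the function name and default are illustrative):

```python
def should_alert(calls_used: int, monthly_limit: int,
                 threshold: float = 0.8) -> bool:
    """True once usage crosses the warning threshold of the monthly limit."""
    return monthly_limit > 0 and calls_used / monthly_limit >= threshold
```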

## 🛠️ Development Mode

For development with auto-reload:

```bash
# Development mode with auto-reload
python start_alwrity_backend.py --dev

# Or with explicit reload flag
python start_alwrity_backend.py --reload
```

## 📝 Configuration

### Environment Variables

The system uses the following environment variables:

```bash
# Database
DATABASE_URL=sqlite:///./alwrity.db

# API Keys (configured through onboarding)
OPENAI_API_KEY=your_key_here
GEMINI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
MISTRAL_API_KEY=your_key_here

# Server Configuration
HOST=0.0.0.0
PORT=8000
DEBUG=true
```

### Custom Pricing

To modify pricing, update the `PricingService.initialize_default_pricing()` method in `services/pricing_service.py`.
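
Illustratively, an entry in that method might look like the following — the `_upsert_pricing` helper and its field names are assumptions modeled on the `api_provider_pricing` table, not the actual method body:

```python
# Hypothetical sketch; check services/pricing_service.py for the real code.
def initialize_default_pricing(self) -> None:
    self._upsert_pricing(
        provider="openai",
        model="gpt-4",
        input_cost_per_token=0.00003,
        output_cost_per_token=0.00006,
    )
```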

## 🧪 Testing

### Run Verification Script

```bash
python verify_billing_setup.py
```

### Test Individual Components

```bash
# Test subscription system
python test_subscription_system.py

# Test billing tables creation
python scripts/create_billing_tables.py
```

## 🚨 Troubleshooting

### Common Issues

1. **Tables not created**: Run `python scripts/create_billing_tables.py`
2. **Missing dependencies**: Run `pip install -r requirements.txt`
3. **Database errors**: Check `DATABASE_URL` in your environment
4. **API key issues**: Verify that API keys are configured

### Debug Mode

Enable debug logging by setting `DEBUG=true` in your environment.

## 📚 API Documentation

Once the server is running, access the interactive API documentation:

- **Swagger UI**: http://localhost:8000/api/docs
- **ReDoc**: http://localhost:8000/api/redoc

## 🔄 Updates and Maintenance

### Adding New Providers

1. Add the provider to the `APIProvider` enum in `models/subscription_models.py` (see the sketch below)
2. Update pricing in `PricingService.initialize_default_pricing()`
3. Add provider detection in the middleware
4. Update the frontend provider chips
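
A sketch of step 1, assuming a string-valued `Enum`; the existing member names are guesses based on the providers listed above:

```python
from enum import Enum


class APIProvider(str, Enum):
    OPENAI = "openai"
    GEMINI = "gemini"
    ANTHROPIC = "anthropic"
    MISTRAL = "mistral"
    COHERE = "cohere"  # newly added provider
```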

### Modifying Plans

1. Update `PricingService.initialize_default_plans()`
2. Modify plan limits and pricing
3. Test with the verification script

## 📞 Support

For issues or questions:

1. Check the verification script output
2. Review the startup logs
3. Test individual components
4. Check database table creation

## 🎉 Success Indicators

You'll know the billing system is working when:

- ✅ The startup script shows "Billing and subscription tables created successfully"
- ✅ The verification script passes all checks
- ✅ API endpoints return data
- ✅ The frontend dashboard shows usage metrics
- ✅ The usage tracking middleware is active

The billing system is now fully integrated and ready for production use!

159  backend/test/check_db.py  Normal file
@@ -0,0 +1,159 @@
#!/usr/bin/env python3
"""
Database check and sample data creation script
"""

from services.database import get_db_session
from models.content_planning import ContentStrategy, ContentGapAnalysis, AIAnalysisResult


def check_database():
    """Check what data exists in the database"""
    db = get_db_session()

    try:
        # Check strategies
        strategies = db.query(ContentStrategy).all()
        print(f"Found {len(strategies)} strategies")
        for strategy in strategies:
            print(f"  Strategy {strategy.id}: {strategy.name} - {strategy.industry}")

        # Check gap analyses
        gap_analyses = db.query(ContentGapAnalysis).all()
        print(f"Found {len(gap_analyses)} gap analyses")

        # Check AI analytics
        ai_analytics = db.query(AIAnalysisResult).all()
        print(f"Found {len(ai_analytics)} AI analytics")

    except Exception as e:
        print(f"Error checking database: {e}")
    finally:
        db.close()


def create_sample_data():
    """Create sample data for Strategic Intelligence and Keyword Research tabs"""
    db = get_db_session()

    try:
        # Create a sample strategy if none exists
        existing_strategies = db.query(ContentStrategy).all()
        if not existing_strategies:
            sample_strategy = ContentStrategy(
                name="Sample Content Strategy",
                industry="Digital Marketing",
                target_audience={"demographics": "Small to medium businesses", "interests": ["marketing", "technology"]},
                content_pillars=["Educational Content", "Thought Leadership", "Case Studies"],
                ai_recommendations={
                    "market_positioning": {
                        "score": 75,
                        "strengths": ["Strong brand voice", "Consistent content quality"],
                        "weaknesses": ["Limited video content", "Slow content production"]
                    },
                    "competitive_advantages": [
                        {"advantage": "AI-powered content creation", "impact": "High", "implementation": "In Progress"},
                        {"advantage": "Data-driven strategy", "impact": "Medium", "implementation": "Complete"}
                    ],
                    "strategic_risks": [
                        {"risk": "Content saturation in market", "probability": "Medium", "impact": "High"},
                        {"risk": "Algorithm changes affecting reach", "probability": "High", "impact": "Medium"}
                    ]
                },
                user_id=1
            )
            db.add(sample_strategy)
            db.commit()
            print("Created sample strategy")

        # Create sample gap analysis
        existing_gaps = db.query(ContentGapAnalysis).all()
        if not existing_gaps:
            sample_gap = ContentGapAnalysis(
                website_url="https://example.com",
                competitor_urls=["competitor1.com", "competitor2.com"],
                target_keywords=["content marketing", "digital strategy", "SEO"],
                analysis_results={
                    "gaps": ["Video content gap", "Local SEO opportunities"],
                    "opportunities": [
                        {"keyword": "AI content tools", "search_volume": "5K-10K", "competition": "Low", "cpc": "$2.50"},
                        {"keyword": "content marketing ROI", "search_volume": "1K-5K", "competition": "Medium", "cpc": "$4.20"}
                    ]
                },
                recommendations=[
                    {
                        "type": "content",
                        "title": "Create video tutorials",
                        "description": "Address the video content gap",
                        "priority": "high"
                    },
                    {
                        "type": "seo",
                        "title": "Optimize for local search",
                        "description": "Target local keywords",
                        "priority": "medium"
                    }
                ],
                user_id=1
            )
            db.add(sample_gap)
            db.commit()
            print("Created sample gap analysis")

        # Create sample AI analytics
        existing_ai = db.query(AIAnalysisResult).all()
        if not existing_ai:
            sample_ai = AIAnalysisResult(
                analysis_type="strategic_intelligence",
                insights=[
                    "Focus on video content to address market gap",
                    "Leverage AI tools for competitive advantage",
                    "Monitor algorithm changes closely"
                ],
                recommendations=[
                    {
                        "type": "content",
                        "title": "Increase video content production",
                        "description": "Address the video content gap identified in analysis",
                        "priority": "high"
                    },
                    {
                        "type": "strategy",
                        "title": "Implement AI-powered content creation",
                        "description": "Leverage AI tools for competitive advantage",
                        "priority": "medium"
                    }
                ],
                performance_metrics={
                    "content_engagement": 78.5,
                    "traffic_growth": 25.3,
                    "conversion_rate": 2.1
                },
                personalized_data_used={
                    "onboarding_data": True,
                    "user_preferences": True,
                    "historical_performance": True
                },
                processing_time=15.2,
                ai_service_status="operational",
                user_id=1
            )
            db.add(sample_ai)
            db.commit()
            print("Created sample AI analytics")

    except Exception as e:
        print(f"Error creating sample data: {e}")
        db.rollback()
    finally:
        db.close()


if __name__ == "__main__":
    print("Checking database...")
    check_database()

    print("\nCreating sample data...")
    create_sample_data()

    print("\nFinal database state:")
    check_database()

101  backend/test/debug_database_data.py  Normal file
@@ -0,0 +1,101 @@
#!/usr/bin/env python3
"""
Debug Database Data

This script checks what data is actually in the database for debugging.
"""

import asyncio
import sys
import os
from loguru import logger

# Add the backend directory to the path
backend_dir = os.path.dirname(os.path.abspath(__file__))
if backend_dir not in sys.path:
    sys.path.insert(0, backend_dir)

# Add the services directory to the path
services_dir = os.path.join(backend_dir, "services")
if services_dir not in sys.path:
    sys.path.insert(0, services_dir)


async def debug_database_data():
    """Debug what data is in the database."""

    try:
        logger.info("🔍 Debugging database data")

        # Initialize database
        from services.database import init_database, get_db_session

        try:
            init_database()
            logger.info("✅ Database initialized successfully")
        except Exception as e:
            logger.error(f"❌ Database initialization failed: {str(e)}")
            return False

        # Get database session
        db_session = get_db_session()
        if not db_session:
            logger.error("❌ Failed to get database session")
            return False

        from services.content_planning_db import ContentPlanningDBService

        db_service = ContentPlanningDBService(db_session)

        # Check content strategies
        logger.info("📋 Checking content strategies...")
        strategies = await db_service.get_user_content_strategies(1)
        logger.info(f"Found {len(strategies)} strategies for user 1")

        for strategy in strategies:
            logger.info(f"Strategy ID: {strategy.id}, Name: {strategy.name}")
            logger.info(f"  Content Pillars: {strategy.content_pillars}")
            logger.info(f"  Target Audience: {strategy.target_audience}")

        # Check gap analyses
        logger.info("📋 Checking gap analyses...")
        gap_analyses = await db_service.get_user_content_gap_analyses(1)
        logger.info(f"Found {len(gap_analyses)} gap analyses for user 1")

        for gap_analysis in gap_analyses:
            logger.info(f"Gap Analysis ID: {gap_analysis.id}")
            logger.info(f"  Website URL: {gap_analysis.website_url}")
            logger.info(f"  Analysis Results: {gap_analysis.analysis_results}")
            logger.info(f"  Recommendations: {gap_analysis.recommendations}")
            logger.info(f"  Opportunities: {gap_analysis.opportunities}")

            # Check if analysis_results has content_gaps
            if gap_analysis.analysis_results:
                content_gaps = gap_analysis.analysis_results.get("content_gaps", [])
                logger.info(f"  Content Gaps in analysis_results: {len(content_gaps)} items")
                for gap in content_gaps:
                    logger.info(f"    - {gap}")
            else:
                logger.info("  Analysis Results is None or empty")

        db_session.close()
        logger.info("✅ Database debugging completed")
        return True

    except Exception as e:
        logger.error(f"❌ Debug failed with error: {str(e)}")
        return False


if __name__ == "__main__":
    # Configure logging
    logger.remove()
    logger.add(sys.stderr, level="INFO", format="<green>{time:HH:mm:ss}</green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>")

    # Run the debug
    success = asyncio.run(debug_database_data())

    if success:
        logger.info("✅ Debug completed successfully!")
        sys.exit(0)
    else:
        logger.error("❌ Debug failed!")
        sys.exit(1)

175  backend/test/debug_step8.py  Normal file
@@ -0,0 +1,175 @@
#!/usr/bin/env python3
"""
Debug script for Step 8 (Daily Content Planning) to isolate data type issues.
"""

import asyncio
import logging
import sys
import os

# Add the project root to the Python path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase3.step8_daily_content_planning.daily_schedule_generator import DailyScheduleGenerator

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s | %(levelname)-8s | %(name)s:%(funcName)s:%(lineno)d - %(message)s',
    datefmt='%H:%M:%S'
)
logger = logging.getLogger(__name__)


async def debug_step8():
    """Debug Step 8 with controlled test data."""

    logger.info("🔍 Starting Step 8 Debug Session")

    # Create test data with known types
    test_weekly_themes = [
        {
            "title": "Week 1 Theme: AI Implementation",
            "description": "Focus on AI tools and implementation",
            "primary_pillar": "AI and Machine Learning",
            "content_angles": ["AI tools", "Implementation guide", "Best practices"],
            "target_platforms": ["LinkedIn", "Blog", "Twitter"],
            "strategic_alignment": "High alignment with business goals",
            "gap_addressal": "Addresses AI implementation gap",
            "priority": "high",
            "estimated_impact": "High",
            "ai_confidence": 0.9,
            "week_number": 1
        },
        {
            "title": "Week 2 Theme: Digital Transformation",
            "description": "Digital transformation strategies",
            "primary_pillar": "Digital Transformation",
            "content_angles": ["Strategy", "Case studies", "ROI"],
            "target_platforms": ["LinkedIn", "Blog", "YouTube"],
            "strategic_alignment": "Medium alignment with business goals",
            "gap_addressal": "Addresses transformation gap",
            "priority": "medium",
            "estimated_impact": "Medium",
            "ai_confidence": 0.8,
            "week_number": 2
        }
    ]

    test_platform_strategies = {
        "LinkedIn": {
            "content_type": "professional",
            "posting_frequency": "daily",
            "engagement_strategy": "thought_leadership"
        },
        "Blog": {
            "content_type": "educational",
            "posting_frequency": "weekly",
            "engagement_strategy": "seo_optimized"
        },
        "Twitter": {
            "content_type": "conversational",
            "posting_frequency": "daily",
            "engagement_strategy": "community_building"
        }
    }

    test_content_pillars = [
        {
            "name": "AI and Machine Learning",
            "weight": 0.4,
            "description": "AI tools and implementation"
        },
        {
            "name": "Digital Transformation",
            "weight": 0.3,
            "description": "Digital strategy and transformation"
        },
        {
            "name": "Business Strategy",
            "weight": 0.3,
            "description": "Strategic business insights"
        }
    ]

    test_calendar_framework = {
        "type": "monthly",
        "total_weeks": 4,
        "posting_frequency": "daily",
        "posting_days": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
        "industry": "technology",
        "business_size": "sme"
    }

    test_posting_preferences = {
        "preferred_times": ["09:00", "12:00", "15:00"],
        "posting_frequency": "daily",
        "content_count_per_day": 2
    }

    test_business_goals = [
        "Increase brand awareness by 40%",
        "Generate 500 qualified leads per month",
        "Establish thought leadership"
    ]

    test_target_audience = {
        "primary": "Tech professionals",
        "secondary": "Business leaders",
        "demographics": {
            "age_range": "25-45",
            "location": "Global"
        }
    }

    # Test data type validation
    logger.info("🔍 Validating test data types:")
    logger.info(f"  weekly_themes: {type(test_weekly_themes)} (length: {len(test_weekly_themes)})")
    logger.info(f"  platform_strategies: {type(test_platform_strategies)}")
    logger.info(f"  content_pillars: {type(test_content_pillars)}")
    logger.info(f"  calendar_framework: {type(test_calendar_framework)}")
    logger.info(f"  posting_preferences: {type(test_posting_preferences)}")
    logger.info(f"  business_goals: {type(test_business_goals)}")
    logger.info(f"  target_audience: {type(test_target_audience)}")

    # Validate weekly themes structure
    for i, theme in enumerate(test_weekly_themes):
        logger.info(f"  Theme {i+1}: {type(theme)} - keys: {list(theme.keys())}")
        if not isinstance(theme, dict):
            logger.error(f"❌ Theme {i+1} is not a dictionary: {type(theme)}")
            return

    try:
        # Initialize the daily schedule generator
        generator = DailyScheduleGenerator()
        logger.info("✅ DailyScheduleGenerator initialized successfully")

        # Test the generate_daily_schedules method
        logger.info("🚀 Testing generate_daily_schedules method...")

        daily_schedules = await generator.generate_daily_schedules(
            weekly_themes=test_weekly_themes,
            platform_strategies=test_platform_strategies,
            business_goals=test_business_goals,
            target_audience=test_target_audience,
            posting_preferences=test_posting_preferences,
            calendar_duration=28  # 4 weeks * 7 days
        )

        logger.info(f"✅ Successfully generated {len(daily_schedules)} daily schedules")

        # Log first few schedules for inspection
        for i, schedule in enumerate(daily_schedules[:3]):
            logger.info(f"  Schedule {i+1}: {type(schedule)} - keys: {list(schedule.keys())}")

    except Exception as e:
        logger.error(f"❌ Error in Step 8 debug: {str(e)}")
        logger.error(f"📋 Error type: {type(e)}")
        import traceback
        logger.error(f"📋 Traceback: {traceback.format_exc()}")
        return

    logger.info("🎉 Step 8 debug completed successfully!")


if __name__ == "__main__":
    asyncio.run(debug_step8())

118  backend/test/debug_step8_ai_response.py  Normal file
@@ -0,0 +1,118 @@
#!/usr/bin/env python3
"""
Debug script to test AI response parsing in Step 8.
"""

import asyncio
import logging
import sys
import os

# Add the project root to the Python path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase3.step8_daily_content_planning.daily_schedule_generator import DailyScheduleGenerator

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s | %(levelname)-8s | %(name)s:%(funcName)s:%(lineno)d - %(message)s',
    datefmt='%H:%M:%S'
)
logger = logging.getLogger(__name__)


async def debug_ai_response_parsing():
    """Debug AI response parsing in Step 8."""

    logger.info("🔍 Starting AI Response Parsing Debug")

    # Create test data
    test_posting_day = {
        "day_number": 1,
        "date": "2025-09-01",
        "day_name": "monday",
        "posting_times": ["09:00", "12:00"],
        "content_count": 2,
        "week_number": 1
    }

    test_weekly_theme = {
        "title": "Week 1 Theme: AI Implementation",
        "description": "Focus on AI tools and implementation",
        "content_angles": ["AI tools", "Implementation guide", "Best practices"]
    }

    test_platform_strategies = {
        "LinkedIn": {"approach": "professional"},
        "Blog": {"approach": "educational"}
    }

    # Test different AI response formats
    test_responses = [
        # Format 1: List of recommendations (correct format)
        [
            {
                "type": "Content Creation Opportunity",
                "title": "AI Implementation Guide",
                "description": "A comprehensive guide to AI implementation"
            },
            {
                "type": "Content Creation Opportunity",
                "title": "AI Tools Overview",
                "description": "Overview of AI tools for business"
            }
        ],

        # Format 2: Dictionary with recommendations key
        {
            "recommendations": [
                {
                    "type": "Content Creation Opportunity",
                    "title": "AI Implementation Guide",
                    "description": "A comprehensive guide to AI implementation"
                },
                {
                    "type": "Content Creation Opportunity",
                    "title": "AI Tools Overview",
                    "description": "Overview of AI tools for business"
                }
            ]
        },

        # Format 3: Float (the problematic case)
        0.95,

        # Format 4: String
        "AI Implementation Guide",

        # Format 5: None
        None
    ]

    generator = DailyScheduleGenerator()

    for i, test_response in enumerate(test_responses):
        logger.info(f"🔍 Testing AI response format {i+1}: {type(test_response)} = {test_response}")

        try:
            content_pieces = generator._parse_content_response(
                ai_response=test_response,
                posting_day=test_posting_day,
                weekly_theme=test_weekly_theme,
                platform_strategies=test_platform_strategies
            )

            logger.info(f"✅ Format {i+1} parsed successfully: {len(content_pieces)} content pieces")
            for j, piece in enumerate(content_pieces):
                logger.info(f"  Piece {j+1}: {piece.get('title', 'No title')}")

        except Exception as e:
            logger.error(f"❌ Format {i+1} failed: {str(e)}")
            logger.error(f"📋 Error type: {type(e)}")
            import traceback
            logger.error(f"📋 Traceback: {traceback.format_exc()}")

    logger.info("🎉 AI Response Parsing Debug completed!")


if __name__ == "__main__":
    asyncio.run(debug_ai_response_parsing())

402  backend/test/debug_step8_isolated.py  Normal file
@@ -0,0 +1,402 @@
#!/usr/bin/env python3
"""
Step 8 Debug Script - Isolated Testing
======================================

This script tests Step 8 (Daily Content Planning) in isolation with controlled inputs
to identify which specific parameter is causing the 'float' object has no attribute 'get' error.

The script will:
1. Set up Step 8 with fixed, known dictionary inputs
2. Test the daily content generation in isolation
3. Identify which specific parameter is coming through as a float
4. Help pinpoint whether the issue is in weekly_theme, posting_day, platform_strategies, or other data
"""

import asyncio
import sys
import os
import logging
from typing import Dict, List, Any

# Add the project root to the path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# Import Step 8 components
from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase3.step8_daily_content_planning.step8_main import DailyContentPlanningStep
from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase3.step8_daily_content_planning.daily_schedule_generator import DailyScheduleGenerator


def create_controlled_test_data():
    """Create controlled test data with known types for Step 8 testing."""

    # 1. Weekly themes from Step 7 (should be list of dictionaries)
    weekly_themes = [
        {
            "title": "Week 1 Theme: AI Implementation Guide",
            "description": "Comprehensive guide on AI implementation for businesses",
            "primary_pillar": "AI and Machine Learning",
            "secondary_pillars": ["Digital Transformation", "Innovation"],
            "strategic_alignment": "high",
            "audience_alignment": "high",
            "week_number": 1,
            "content_count": 5,
            "priority": "high",
            "estimated_impact": "High",
            "ai_confidence": 0.9
        },
        {
            "title": "Week 2 Theme: Digital Transformation Strategies",
            "description": "Strategic approaches to digital transformation",
            "primary_pillar": "Digital Transformation",
            "secondary_pillars": ["Business Strategy", "Innovation"],
            "strategic_alignment": "high",
            "audience_alignment": "medium",
            "week_number": 2,
            "content_count": 4,
            "priority": "medium",
            "estimated_impact": "Medium",
            "ai_confidence": 0.8
        }
    ]

    # 2. Platform strategies from Step 6 (should be dictionary)
    platform_strategies = {
        "linkedin": {
            "content_types": ["articles", "posts", "videos"],
            "posting_times": ["09:00", "12:00", "15:00"],
            "content_adaptation": "professional tone, industry insights",
            "engagement_strategy": "thought leadership content"
        },
        "twitter": {
            "content_types": ["tweets", "threads", "images"],
            "posting_times": ["08:00", "11:00", "14:00", "17:00"],
            "content_adaptation": "concise, engaging, hashtag optimization",
            "engagement_strategy": "conversation starters"
        }
    }

    # 3. Content pillars from Step 5 (should be list)
    content_pillars = [
        "AI and Machine Learning",
        "Digital Transformation",
        "Innovation and Technology Trends",
        "Business Strategy and Growth"
    ]

    # 4. Calendar framework from Step 4 (should be dictionary)
    calendar_framework = {
        "duration_weeks": 4,
        "content_frequency": "daily",
        "posting_schedule": {
            "monday": ["09:00", "12:00", "15:00"],
            "tuesday": ["09:00", "12:00", "15:00"],
            "wednesday": ["09:00", "12:00", "15:00"],
            "thursday": ["09:00", "12:00", "15:00"],
            "friday": ["09:00", "12:00", "15:00"]
        },
        "theme_structure": "weekly_themes",
        "content_mix": {
            "blog_posts": 0.4,
            "social_media": 0.3,
            "videos": 0.2,
            "infographics": 0.1
        }
    }

    # 5. Business goals from Step 1 (should be list)
    business_goals = [
        "Increase brand awareness by 40%",
        "Generate 500 qualified leads per month",
        "Establish thought leadership"
    ]

    # 6. Target audience from Step 1 (should be dictionary)
    target_audience = {
        "primary": "Tech professionals",
        "secondary": "Business leaders",
        "demographics": {
            "age_range": "25-45",
            "location": "Global",
            "interests": ["technology", "innovation", "business growth"]
        }
    }

    # 7. Keywords from Step 2 (should be list)
    keywords = [
        "AI implementation",
        "digital transformation",
        "machine learning",
        "business automation",
        "technology trends"
    ]

    return {
        "weekly_themes": weekly_themes,
        "platform_strategies": platform_strategies,
        "content_pillars": content_pillars,
        "calendar_framework": calendar_framework,
        "business_goals": business_goals,
        "target_audience": target_audience,
        "keywords": keywords
    }


def validate_data_types(data: Dict[str, Any], test_name: str):
    """Validate that all data has the expected types."""
    logger.info(f"🔍 Validating data types for {test_name}")

    expected_types = {
        "weekly_themes": list,
        "platform_strategies": dict,
        "content_pillars": list,
        "calendar_framework": dict,
        "business_goals": list,
        "target_audience": dict,
        "keywords": list
    }

    for key, expected_type in expected_types.items():
        if key in data:
            actual_type = type(data[key])
            if actual_type != expected_type:
                logger.error(f"❌ Type mismatch for {key}: expected {expected_type.__name__}, got {actual_type.__name__}")
                logger.error(f"   Value: {data[key]}")
                return False
            else:
                logger.info(f"✅ {key}: {actual_type.__name__} (correct)")
        else:
            logger.warning(f"⚠️ Missing key: {key}")

    return True


async def test_daily_schedule_generator_isolated():
    """Test the DailyScheduleGenerator in isolation with controlled inputs."""
    logger.info("🧪 Testing DailyScheduleGenerator in isolation")

    # Create controlled test data
    test_data = create_controlled_test_data()

    # Validate data types
    if not validate_data_types(test_data, "DailyScheduleGenerator"):
        logger.error("❌ Data type validation failed")
        return False

    try:
        # Create DailyScheduleGenerator instance
        generator = DailyScheduleGenerator()

        # Test the generate_daily_schedules method
        logger.info("📅 Testing generate_daily_schedules method")

        # Get posting preferences and calendar duration
        posting_preferences = {
            "preferred_times": ["09:00", "12:00", "15:00"],
            "posting_frequency": "daily"
        }
        calendar_duration = test_data["calendar_framework"]["duration_weeks"] * 7

        # Call the method with controlled inputs
        daily_schedules = await generator.generate_daily_schedules(
            test_data["weekly_themes"],
            test_data["platform_strategies"],
            test_data["content_pillars"],
            test_data["calendar_framework"],
            posting_preferences,
            calendar_duration
        )

        logger.info(f"✅ DailyScheduleGenerator test successful")
        logger.info(f"   Generated {len(daily_schedules)} daily schedules")

        # Validate the output
        if isinstance(daily_schedules, list):
            logger.info("✅ Output is a list (correct)")
            for i, schedule in enumerate(daily_schedules[:3]):  # Show first 3
                logger.info(f"   Schedule {i+1}: {type(schedule)} - {schedule.get('day_number', 'N/A')}")
        else:
            logger.error(f"❌ Output is not a list: {type(daily_schedules)}")
            return False

        return True

    except Exception as e:
        logger.error(f"❌ DailyScheduleGenerator test failed: {str(e)}")
        logger.error(f"   Exception type: {type(e).__name__}")
        import traceback
        logger.error(f"   Traceback: {traceback.format_exc()}")
        return False


async def test_step8_execute_method():
    """Test Step 8's execute method with controlled inputs."""
    logger.info("🧪 Testing Step 8 execute method")

    # Create controlled test data
    test_data = create_controlled_test_data()

    # Validate data types
    if not validate_data_types(test_data, "Step 8 Execute"):
        logger.error("❌ Data type validation failed")
        return False

    try:
        # Create Step 8 instance
        step8 = DailyContentPlanningStep()

        # Create context with controlled data
        context = {
            "step_results": {
                "step_07": {
                    "result": {
                        "weekly_themes": test_data["weekly_themes"]
                    }
                },
                "step_06": {
                    "result": {
                        "platform_strategies": test_data["platform_strategies"]
                    }
                },
                "step_05": {
                    "result": {
                        "content_pillars": test_data["content_pillars"]
                    }
                },
                "step_04": {
                    "result": {
                        "calendar_framework": test_data["calendar_framework"]
                    }
                },
                "step_01": {
                    "result": {
                        "business_goals": test_data["business_goals"],
                        "target_audience": test_data["target_audience"]
                    }
                },
                "step_02": {
                    "result": {
                        "keywords": test_data["keywords"]
                    }
                }
            },
            "user_data": {
                "business_goals": test_data["business_goals"],
                "target_audience": test_data["target_audience"],
                "keywords": test_data["keywords"]
            }
        }

        # Test the execute method
        logger.info("📅 Testing Step 8 execute method")
        result = await step8.execute(context)

        logger.info(f"✅ Step 8 execute test successful")
        logger.info(f"   Result type: {type(result)}")
        logger.info(f"   Result keys: {list(result.keys()) if isinstance(result, dict) else 'N/A'}")

        return True

    except Exception as e:
        logger.error(f"❌ Step 8 execute test failed: {str(e)}")
        logger.error(f"   Exception type: {type(e).__name__}")
        import traceback
        logger.error(f"   Traceback: {traceback.format_exc()}")
        return False


async def test_specific_methods_with_debugging():
    """Test specific methods with detailed debugging to identify the float issue."""
    logger.info("🔍 Testing specific methods with detailed debugging")

    # Create controlled test data
    test_data = create_controlled_test_data()

    try:
        # Create DailyScheduleGenerator instance
        generator = DailyScheduleGenerator()

        # Test _get_weekly_theme method specifically
        logger.info("🔍 Testing _get_weekly_theme method")
        for week_num in [1, 2]:
            theme = generator._get_weekly_theme(test_data["weekly_themes"], week_num)
            logger.info(f"   Week {week_num} theme type: {type(theme)}")
            logger.info(f"   Week {week_num} theme: {theme}")

            if not isinstance(theme, dict):
                logger.error(f"❌ Week {week_num} theme is not a dictionary!")
                return False

        # Test _generate_daily_content method with controlled inputs
        logger.info("🔍 Testing _generate_daily_content method")

        # Create a controlled posting_day
        posting_day = {
            "day_number": 1,
            "week_number": 1,
            "content_count": 3,
            "platforms": ["linkedin", "twitter"]
        }

        # Test with controlled weekly theme
        weekly_theme = test_data["weekly_themes"][0]  # First theme

        # Test the method
        content = await generator._generate_daily_content(
            posting_day,
            weekly_theme,
            test_data["platform_strategies"],
            test_data["content_pillars"],
            test_data["calendar_framework"]
        )

        logger.info(f"✅ _generate_daily_content test successful")
        logger.info(f"   Content type: {type(content)}")
        logger.info(f"   Content: {content}")

        return True

    except Exception as e:
        logger.error(f"❌ Specific method test failed: {str(e)}")
        logger.error(f"   Exception type: {type(e).__name__}")
        import traceback
        logger.error(f"   Traceback: {traceback.format_exc()}")
        return False


async def main():
    """Main debug function."""
    logger.info("🚀 Starting Step 8 Debug Script")
    logger.info("=" * 50)

    # Test 1: DailyScheduleGenerator in isolation
    logger.info("\n🧪 Test 1: DailyScheduleGenerator in isolation")
    success1 = await test_daily_schedule_generator_isolated()

    # Test 2: Step 8 execute method
    logger.info("\n🧪 Test 2: Step 8 execute method")
    success2 = await test_step8_execute_method()

    # Test 3: Specific methods with debugging
    logger.info("\n🧪 Test 3: Specific methods with debugging")
    success3 = await test_specific_methods_with_debugging()

    # Summary
    logger.info("\n" + "=" * 50)
    logger.info("📊 Debug Results Summary")
    logger.info("=" * 50)
    logger.info(f"✅ Test 1 (DailyScheduleGenerator): {'PASSED' if success1 else 'FAILED'}")
    logger.info(f"✅ Test 2 (Step 8 Execute): {'PASSED' if success2 else 'FAILED'}")
    logger.info(f"✅ Test 3 (Specific Methods): {'PASSED' if success3 else 'FAILED'}")

    if success1 and success2 and success3:
        logger.info("🎉 All tests passed! Step 8 is working correctly with controlled inputs.")
        logger.info("💡 The issue might be in the data flow from previous steps.")
    else:
        logger.error("❌ Some tests failed. Check the logs above for specific issues.")

    logger.info("=" * 50)


if __name__ == "__main__":
    asyncio.run(main())

197  backend/test/deploy_persona_system.py  Normal file
@@ -0,0 +1,197 @@
#!/usr/bin/env python3
"""
Deployment script for the Persona System.
Sets up database tables and validates the complete system.
"""

import sys
import os

# Add the backend directory to the Python path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from loguru import logger


def deploy_persona_system():
    """Deploy the complete persona system."""

    logger.info("🚀 Deploying Persona System")

    try:
        # Step 1: Create database tables
        logger.info("📊 Step 1: Creating database tables...")
        from scripts.create_persona_tables import create_persona_tables
        create_persona_tables()
        logger.info("✅ Database tables created")

        # Step 2: Validate Gemini integration
        logger.info("🤖 Step 2: Validating Gemini integration...")
        from services.llm_providers.gemini_provider import gemini_structured_json_response

        test_schema = {
            "type": "object",
            "properties": {
                "status": {"type": "string"},
                "timestamp": {"type": "string"}
            },
            "required": ["status"]
        }

        test_response = gemini_structured_json_response(
            prompt="Return status='ready' and current timestamp",
            schema=test_schema,
            temperature=0.1,
            max_tokens=1024
        )

        if "error" in test_response:
            logger.warning(f"⚠️ Gemini test warning: {test_response['error']}")
        else:
            logger.info("✅ Gemini integration validated")

        # Step 3: Test persona service
        logger.info("🧠 Step 3: Testing persona service...")
        from services.persona_analysis_service import PersonaAnalysisService
        persona_service = PersonaAnalysisService()
        logger.info("✅ Persona service initialized")

        # Step 4: Test replication engine
        logger.info("⚙️ Step 4: Testing replication engine...")
        from services.persona_replication_engine import PersonaReplicationEngine
        replication_engine = PersonaReplicationEngine()
        logger.info("✅ Replication engine initialized")

        # Step 5: Validate API endpoints
        logger.info("🌐 Step 5: Validating API endpoints...")
        from api.persona_routes import router
        logger.info(f"✅ Persona router configured with {len(router.routes)} routes")

        logger.info("🎉 Persona System deployed successfully!")

        # Print deployment summary
        print_deployment_summary()

        return True

    except Exception as e:
        logger.error(f"❌ Deployment failed: {str(e)}")
        return False


def print_deployment_summary():
    """Print deployment summary and next steps."""

    logger.info("📋 PERSONA SYSTEM DEPLOYMENT SUMMARY")
    logger.info("=" * 50)

    logger.info("✅ Database Tables:")
    logger.info("   - writing_personas")
    logger.info("   - platform_personas")
    logger.info("   - persona_analysis_results")
    logger.info("   - persona_validation_results")

    logger.info("✅ Services:")
    logger.info("   - PersonaAnalysisService")
    logger.info("   - PersonaReplicationEngine")

    logger.info("✅ API Endpoints:")
    logger.info("   - POST /api/personas/generate")
    logger.info("   - GET /api/personas/user/{user_id}")
    logger.info("   - GET /api/personas/platform/{platform}")
    logger.info("   - GET /api/personas/export/{platform}")

    logger.info("✅ Platform Support:")
    logger.info("   - Twitter/X, LinkedIn, Instagram, Facebook")
    logger.info("   - Blog, Medium, Substack")

    logger.info("🔧 NEXT STEPS:")
    logger.info("1. Complete onboarding with website analysis (Step 2)")
    logger.info("2. Set research preferences (Step 3)")
    logger.info("3. Generate persona in Final Step (Step 6)")
    logger.info("4. Export hardened prompts for external AI systems")
    logger.info("5. Use persona for consistent content generation")

    logger.info("=" * 50)


def validate_deployment():
    """Validate that all components are working correctly."""

    logger.info("🔍 Validating deployment...")

    validation_results = {
        "database": False,
        "gemini": False,
        "persona_service": False,
        "replication_engine": False,
        "api_routes": False
    }

    try:
        # Test database
        from services.database import get_db_session
        session = get_db_session()
        if session:
            session.close()
            validation_results["database"] = True
            logger.info("✅ Database connection validated")

        # Test Gemini
        from services.llm_providers.gemini_provider import get_gemini_api_key
        api_key = get_gemini_api_key()
        if api_key and api_key != "your_gemini_api_key_here":
            validation_results["gemini"] = True
            logger.info("✅ Gemini API key configured")
        else:
            logger.warning("⚠️ Gemini API key not configured")

        # Test services
        from services.persona_analysis_service import PersonaAnalysisService
        from services.persona_replication_engine import PersonaReplicationEngine

        PersonaAnalysisService()
        PersonaReplicationEngine()
        validation_results["persona_service"] = True
        validation_results["replication_engine"] = True
        logger.info("✅ Services validated")

        # Test API routes
        from api.persona_routes import router
        if len(router.routes) > 0:
            validation_results["api_routes"] = True
            logger.info("✅ API routes validated")

    except Exception as e:
        logger.error(f"❌ Validation error: {str(e)}")

    # Summary
    passed = sum(validation_results.values())
    total = len(validation_results)

    logger.info(f"📊 Validation Results: {passed}/{total} components validated")

    if passed == total:
        logger.info("🎉 All components validated successfully!")
        return True
    else:
        logger.warning("⚠️ Some components failed validation")
        for component, status in validation_results.items():
            status_icon = "✅" if status else "❌"
            logger.info(f"   {status_icon} {component}")
        return False


if __name__ == "__main__":
    # Deploy system
    deployment_success = deploy_persona_system()

    if deployment_success:
        # Validate deployment
        validation_success = validate_deployment()

        if validation_success:
            logger.info("🎉 Persona System ready for production!")
            sys.exit(0)
        else:
            logger.error("❌ Deployment validation failed")
            sys.exit(1)
    else:
        logger.error("❌ Deployment failed")
        sys.exit(1)

48  backend/test/fix_imports.py  Normal file
@@ -0,0 +1,48 @@
#!/usr/bin/env python3
"""
Script to fix import paths in step files.
"""

import os
import re

# Step subdirectories whose files sit one level deeper, so their
# relative import of base_step needs an extra leading dot.
DEEP_STEP_DIRS = (
    '/step9_content_recommendations/',
    '/step10_performance_optimization/',
    '/step11_strategy_alignment_validation/',
    '/step12_final_calendar_assembly/',
)


def fix_imports_in_file(file_path):
    """Fix import paths in a file."""
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            content = f.read()

        # Fix the base_step import path:
        # change ..base_step to ...base_step for the deeper subdirectories.
        if any(step_dir in file_path for step_dir in DEEP_STEP_DIRS):
            content = re.sub(r'from \.\.base_step import', 'from ...base_step import', content)

        with open(file_path, 'w', encoding='utf-8') as f:
            f.write(content)

        print(f"✅ Fixed imports in {file_path}")
        return True
    except Exception as e:
        print(f"❌ Error fixing {file_path}: {e}")
        return False


def main():
    """Fix import paths in all known step files."""
    base_path = "services/calendar_generation_datasource_framework/prompt_chaining/steps"

    # Files that need fixing
    files_to_fix = [
        f"{base_path}/phase3/step9_content_recommendations/step9_main.py",
        f"{base_path}/phase4/step10_performance_optimization/step10_main.py",
        f"{base_path}/phase4/step11_strategy_alignment_validation/step11_main.py",
        f"{base_path}/phase4/step12_final_calendar_assembly/step12_main.py",
    ]

    for file_path in files_to_fix:
        if os.path.exists(file_path):
            fix_imports_in_file(file_path)
        else:
            print(f"⚠️ File not found: {file_path}")


if __name__ == "__main__":
    main()

104  backend/test/linkedin_keyword_test_report_20250914_223930.json  Normal file
@@ -0,0 +1,104 @@
{
  "test_summary": {
    "total_duration": 52.56023073196411,
    "total_tests": 4,
    "successful_tests": 4,
    "failed_tests": 0,
    "total_api_calls": 4
  },
  "test_results": [
    {
      "test_name": "Single Phrase Test (Should be preserved as-is)",
      "keyword_phrase": "ALwrity content generation",
      "success": true,
      "duration": 8.364419937133789,
      "api_calls": 1,
      "error": null,
      "content_length": 44,
      "sources_count": 0,
      "citations_count": 0,
      "grounding_status": {
        "status": "success",
        "sources_used": 0,
        "citation_coverage": 0,
        "quality_score": 0.0
      },
      "generation_metadata": {
        "model_used": "gemini-2.0-flash-001",
        "generation_time": 0.002626,
        "research_time": 0.000537,
        "grounding_enabled": true
      }
    },
    {
      "test_name": "Comma-Separated Test (Should be split by commas)",
      "keyword_phrase": "AI tools, content creation, marketing automation",
      "success": true,
      "duration": 12.616755723953247,
      "api_calls": 1,
      "error": null,
      "content_length": 44,
      "sources_count": 5,
      "citations_count": 3,
      "grounding_status": {
        "status": "success",
        "sources_used": 5,
        "citation_coverage": 0.6,
        "quality_score": 0.359
      },
      "generation_metadata": {
        "model_used": "gemini-2.0-flash-001",
        "generation_time": 0.009273,
        "research_time": 0.000285,
        "grounding_enabled": true
      }
    },
    {
      "test_name": "Another Single Phrase Test",
      "keyword_phrase": "LinkedIn content strategy",
      "success": true,
      "duration": 11.366000652313232,
      "api_calls": 1,
      "error": null,
      "content_length": 44,
      "sources_count": 4,
      "citations_count": 3,
      "grounding_status": {
        "status": "success",
        "sources_used": 4,
        "citation_coverage": 0.75,
        "quality_score": 0.359
      },
      "generation_metadata": {
        "model_used": "gemini-2.0-flash-001",
        "generation_time": 0.008166,
        "research_time": 0.000473,
        "grounding_enabled": true
      }
    },
    {
      "test_name": "Another Comma-Separated Test",
      "keyword_phrase": "social media, digital marketing, brand awareness",
      "success": true,
      "duration": 12.107932806015015,
      "api_calls": 1,
      "error": null,
      "content_length": 44,
      "sources_count": 0,
      "citations_count": 0,
      "grounding_status": {
        "status": "success",
        "sources_used": 0,
        "citation_coverage": 0,
        "quality_score": 0.0
      },
      "generation_metadata": {
        "model_used": "gemini-2.0-flash-001",
        "generation_time": 0.004575,
        "research_time": 0.000323,
        "grounding_enabled": true
      }
    }
  ],
  "timestamp": "2025-09-14T22:39:30.220518"
}

2  backend/test/temp_import.txt  Normal file
@@ -0,0 +1,2 @@
# Import content planning endpoints
from api.content_planning import router as content_planning_router

177  backend/test/test_backend.py  Normal file
@@ -0,0 +1,177 @@
"""Test script for ALwrity backend."""

import requests
import json
import time


def test_backend():
    """Test the backend endpoints."""
    base_url = "http://localhost:8000"

    print("🧪 Testing ALwrity Backend API...")

    # Test 1: Health check
    print("\n1. Testing health check...")
    try:
        response = requests.get(f"{base_url}/health")
        if response.status_code == 200:
            print("✅ Health check passed")
            print(f"   Response: {response.json()}")
        else:
            print(f"❌ Health check failed: {response.status_code}")
    except Exception as e:
        print(f"❌ Health check error: {e}")
        return False

    # Test 2: Get onboarding status
    print("\n2. Testing onboarding status...")
    try:
        response = requests.get(f"{base_url}/api/onboarding/status")
        if response.status_code == 200:
            print("✅ Onboarding status passed")
            data = response.json()
            print(f"   Current step: {data.get('current_step')}")
            print(f"   Completion: {data.get('completion_percentage')}%")
        else:
            print(f"❌ Onboarding status failed: {response.status_code}")
    except Exception as e:
        print(f"❌ Onboarding status error: {e}")
        return False

    # Test 3: Get onboarding config
    print("\n3. Testing onboarding config...")
    try:
        response = requests.get(f"{base_url}/api/onboarding/config")
        if response.status_code == 200:
            print("✅ Onboarding config passed")
            data = response.json()
            print(f"   Total steps: {data.get('total_steps')}")
        else:
            print(f"❌ Onboarding config failed: {response.status_code}")
    except Exception as e:
        print(f"❌ Onboarding config error: {e}")
        return False

    # Test 4: Get API keys
    print("\n4. Testing API keys endpoint...")
    try:
        response = requests.get(f"{base_url}/api/onboarding/api-keys")
        if response.status_code == 200:
            print("✅ API keys endpoint passed")
            data = response.json()
            print(f"   Configured keys: {data.get('total_configured')}")
        else:
            print(f"❌ API keys endpoint failed: {response.status_code}")
    except Exception as e:
        print(f"❌ API keys endpoint error: {e}")
        return False

    # Test 5: Save API key
    print("\n5. Testing save API key...")
    try:
        test_key_data = {
            "provider": "openai",
            "api_key": "sk-test1234567890abcdef",
            "description": "Test API key"
        }
        response = requests.post(
            f"{base_url}/api/onboarding/api-keys",
            json=test_key_data,
            headers={"Content-Type": "application/json"}
        )
        if response.status_code == 200:
            print("✅ Save API key passed")
            data = response.json()
            print(f"   Message: {data.get('message')}")
        else:
            print(f"❌ Save API key failed: {response.status_code}")
            print(f"   Response: {response.text}")
    except Exception as e:
        print(f"❌ Save API key error: {e}")
        return False

    # Test 6: Complete step
    print("\n6. Testing complete step...")
    try:
        step_data = {
            "data": {"api_keys": ["openai"]},
            "validation_errors": []
        }
        response = requests.post(
            f"{base_url}/api/onboarding/step/1/complete",
            json=step_data,
            headers={"Content-Type": "application/json"}
        )
        if response.status_code == 200:
            print("✅ Complete step passed")
            data = response.json()
            print(f"   Message: {data.get('message')}")
        else:
            print(f"❌ Complete step failed: {response.status_code}")
            print(f"   Response: {response.text}")
    except Exception as e:
        print(f"❌ Complete step error: {e}")
        return False

    # Test 7: Get updated status
    print("\n7. Testing updated status...")
    try:
        response = requests.get(f"{base_url}/api/onboarding/status")
        if response.status_code == 200:
            print("✅ Updated status passed")
            data = response.json()
            print(f"   Current step: {data.get('current_step')}")
            print(f"   Completion: {data.get('completion_percentage')}%")
        else:
            print(f"❌ Updated status failed: {response.status_code}")
    except Exception as e:
        print(f"❌ Updated status error: {e}")
        return False

    print("\n🎉 All tests completed!")
    return True


def test_api_docs():
    """Test if API documentation is accessible."""
    base_url = "http://localhost:8000"
|
||||
|
||||
print("\n📚 Testing API documentation...")
|
||||
|
||||
try:
|
||||
# Test Swagger docs
|
||||
response = requests.get(f"{base_url}/api/docs")
|
||||
if response.status_code == 200:
|
||||
print("✅ Swagger docs accessible")
|
||||
else:
|
||||
print(f"❌ Swagger docs failed: {response.status_code}")
|
||||
|
||||
# Test ReDoc
|
||||
response = requests.get(f"{base_url}/api/redoc")
|
||||
if response.status_code == 200:
|
||||
print("✅ ReDoc accessible")
|
||||
else:
|
||||
print(f"❌ ReDoc failed: {response.status_code}")
|
||||
|
||||
except Exception as e:
|
||||
print(f"❌ API docs error: {e}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
print("🚀 Starting ALwrity Backend Tests")
|
||||
print("=" * 50)
|
||||
|
||||
# Wait a moment for server to start
|
||||
print("⏳ Waiting for server to be ready...")
|
||||
time.sleep(2)
|
||||
|
||||
# Run tests
|
||||
success = test_backend()
|
||||
test_api_docs()
|
||||
|
||||
if success:
|
||||
print("\n✅ All tests passed! Backend is working correctly.")
|
||||
print("\n📖 You can now:")
|
||||
print(" - View API docs at: http://localhost:8000/api/docs")
|
||||
print(" - Test endpoints manually")
|
||||
print(" - Integrate with React frontend")
|
||||
else:
|
||||
print("\n❌ Some tests failed. Please check the backend logs.")
|
||||
66
backend/test/test_calendar_generation.py
Normal file
@@ -0,0 +1,66 @@
#!/usr/bin/env python3
"""
Test script for calendar generation API
"""

import asyncio
import aiohttp
import json

async def test_calendar_generation():
    """Test the calendar generation API."""

    url = "http://localhost:8000/api/content-planning/calendar-generation/start"

    payload = {
        "user_id": 1,
        "strategy_id": 1,
        "calendar_type": "monthly",
        "industry": "technology",
        "business_size": "sme"
    }

    async with aiohttp.ClientSession() as session:
        try:
            async with session.post(url, json=payload) as response:
                if response.status == 200:
                    result = await response.json()
                    print("✅ Calendar generation started successfully!")
                    print(f"Session ID: {result.get('session_id')}")

                    # Test progress endpoint
                    session_id = result.get('session_id')
                    if session_id:
                        print(f"\n🔄 Testing progress for session: {session_id}")
                        progress_url = f"http://localhost:8000/api/content-planning/calendar-generation/progress/{session_id}"

                        async with session.get(progress_url) as progress_response:
                            if progress_response.status == 200:
                                progress_data = await progress_response.json()
                                print("✅ Progress endpoint working!")
                                print(f"Status: {progress_data.get('status')}")
                                print(f"Current Step: {progress_data.get('current_step')}")
                                print(f"Overall Progress: {progress_data.get('overall_progress')}%")

                                # Check for Step 4 specifically
                                step_results = progress_data.get('step_results', {})
                                if 'step_04' in step_results:
                                    step4_result = step_results['step_04']
                                    print(f"\n📊 Step 4 Status: {step4_result.get('status')}")
                                    print(f"Step 4 Quality: {step4_result.get('quality_score')}")
                                    if step4_result.get('status') == 'error':
                                        print(f"Step 4 Error: {step4_result.get('error_message')}")
                                else:
                                    print("⚠️ Step 4 results not yet available")
                            else:
                                print(f"❌ Progress endpoint failed: {progress_response.status}")
                else:
                    print(f"❌ Calendar generation failed: {response.status}")
                    error_text = await response.text()
                    print(f"Error: {error_text}")

        except Exception as e:
            print(f"❌ Error testing calendar generation: {e}")

if __name__ == "__main__":
    asyncio.run(test_calendar_generation())
383
backend/test/test_calendar_generation_datasource_framework.py
Normal file
@@ -0,0 +1,383 @@
#!/usr/bin/env python3
"""
Test Script for Calendar Generation Data Source Framework

Demonstrates the functionality of the scalable framework for evolving data sources
in calendar generation without architectural changes.
"""

import asyncio
import sys
import os
from pathlib import Path

# Add the backend directory to the Python path
backend_dir = Path(__file__).parent
sys.path.insert(0, str(backend_dir))

from services.calendar_generation_datasource_framework import (
    DataSourceRegistry,
    StrategyAwarePromptBuilder,
    QualityGateManager,
    DataSourceEvolutionManager,
    ContentStrategyDataSource,
    GapAnalysisDataSource,
    KeywordsDataSource,
    ContentPillarsDataSource,
    PerformanceDataSource,
    AIAnalysisDataSource
)


async def test_framework_initialization():
    """Test framework initialization and component setup."""
    print("🧪 Testing Framework Initialization...")

    try:
        # Initialize registry
        registry = DataSourceRegistry()
        print("✅ DataSourceRegistry initialized successfully")

        # Initialize data sources
        content_strategy = ContentStrategyDataSource()
        gap_analysis = GapAnalysisDataSource()
        keywords = KeywordsDataSource()
        content_pillars = ContentPillarsDataSource()
        performance_data = PerformanceDataSource()
        ai_analysis = AIAnalysisDataSource()

        print("✅ All data sources initialized successfully")

        # Register data sources
        registry.register_source(content_strategy)
        registry.register_source(gap_analysis)
        registry.register_source(keywords)
        registry.register_source(content_pillars)
        registry.register_source(performance_data)
        registry.register_source(ai_analysis)

        print("✅ All data sources registered successfully")

        # Initialize framework components
        prompt_builder = StrategyAwarePromptBuilder(registry)
        quality_manager = QualityGateManager()
        evolution_manager = DataSourceEvolutionManager(registry)

        print("✅ Framework components initialized successfully")

        return registry, prompt_builder, quality_manager, evolution_manager

    except Exception as e:
        print(f"❌ Framework initialization failed: {e}")
        return None, None, None, None


async def test_data_source_registry(registry):
    """Test data source registry functionality."""
    print("\n🧪 Testing Data Source Registry...")

    try:
        # Test registry status
        status = registry.get_registry_status()
        print(f"✅ Registry status: {status['total_sources']} sources, {status['active_sources']} active")

        # Test source retrieval
        content_strategy = registry.get_source("content_strategy")
        if content_strategy:
            print(f"✅ Content strategy source retrieved: {content_strategy}")

        # Test active sources
        active_sources = registry.get_active_sources()
        print(f"✅ Active sources: {len(active_sources)}")

        # Test source types
        strategy_sources = registry.get_sources_by_type("strategy")
        print(f"✅ Strategy sources: {len(strategy_sources)}")

        # Test priorities
        critical_sources = registry.get_sources_by_priority(1)
        print(f"✅ Critical priority sources: {len(critical_sources)}")

        return True

    except Exception as e:
        print(f"❌ Registry test failed: {e}")
        return False


async def test_data_source_validation(registry):
    """Test data source validation functionality."""
    print("\n🧪 Testing Data Source Validation...")

    try:
        # Validate all sources
        validation_results = await registry.validate_all_sources()
        print(f"✅ Validation completed for {len(validation_results)} sources")

        # Check validation results
        for source_id, result in validation_results.items():
            if hasattr(result, 'quality_score'):
                print(f" - {source_id}: {result.quality_score:.2f} quality score")
            else:
                print(f" - {source_id}: {result.get('quality_score', 0):.2f} quality score")

        return True

    except Exception as e:
        print(f"❌ Validation test failed: {e}")
        return False


async def test_prompt_builder(prompt_builder):
    """Test strategy-aware prompt builder functionality."""
    print("\n🧪 Testing Strategy-Aware Prompt Builder...")

    try:
        # Test available steps
        available_steps = prompt_builder.get_available_steps()
        print(f"✅ Available steps: {len(available_steps)}")

        # Test step dependencies
        step_1_deps = prompt_builder.get_step_dependencies("step_1_content_strategy_analysis")
        print(f"✅ Step 1 dependencies: {step_1_deps}")

        # Test step requirements validation
        step_validation = prompt_builder.validate_step_requirements("step_1_content_strategy_analysis")
        print(f"✅ Step 1 validation: {step_validation['is_ready']}")

        # Test prompt building (simplified)
        try:
            prompt = await prompt_builder.build_prompt("step_1_content_strategy_analysis", 1, 1)
            print(f"✅ Prompt built successfully (length: {len(prompt)} characters)")
        except Exception as e:
            print(f"⚠️ Prompt building failed (expected for test): {e}")

        return True

    except Exception as e:
        print(f"❌ Prompt builder test failed: {e}")
        return False


async def test_quality_gates(quality_manager):
    """Test quality gate functionality."""
    print("\n🧪 Testing Quality Gates...")

    try:
        # Test quality gate info
        gate_info = quality_manager.get_gate_info()
        print(f"✅ Quality gates: {len(gate_info)} gates available")

        # Sample calendar data used for gate validation
        sample_calendar_data = {
            "content_items": [
                {"title": "Sample Content 1", "type": "blog", "theme": "technology"},
                {"title": "Sample Content 2", "type": "video", "theme": "marketing"}
            ]
        }

        # Test all gates validation
        validation_results = await quality_manager.validate_all_gates(sample_calendar_data, "test_step")
        print(f"✅ All gates validation: {len(validation_results)} gates validated")

        # Test specific gate validation
        content_uniqueness_result = await quality_manager.validate_specific_gate("content_uniqueness", sample_calendar_data, "test_step")
        print(f"✅ Content uniqueness validation: {content_uniqueness_result['passed']}")

        return True

    except Exception as e:
        print(f"❌ Quality gates test failed: {e}")
        return False


async def test_evolution_manager(evolution_manager):
    """Test evolution manager functionality."""
    print("\n🧪 Testing Evolution Manager...")

    try:
        # Test evolution status
        status = evolution_manager.get_evolution_status()
        print(f"✅ Evolution status for {len(status)} sources")

        # Test evolution summary
        summary = evolution_manager.get_evolution_summary()
        print(f"✅ Evolution summary: {summary['sources_needing_evolution']} need evolution")

        # Test evolution plan
        plan = evolution_manager.get_evolution_plan("content_strategy")
        print(f"✅ Content strategy evolution plan: {plan['is_ready_for_evolution']}")

        # Test evolution (simplified)
        try:
            success = await evolution_manager.evolve_data_source("content_strategy", "2.5.0")
            print(f"✅ Evolution test: {'Success' if success else 'Failed'}")
        except Exception as e:
            print(f"⚠️ Evolution test failed (expected for test): {e}")

        return True

    except Exception as e:
        print(f"❌ Evolution manager test failed: {e}")
        return False


async def test_framework_integration(registry, prompt_builder, quality_manager, evolution_manager):
    """Test framework integration and end-to-end functionality."""
    print("\n🧪 Testing Framework Integration...")

    try:
        # Test comprehensive workflow
        print("📊 Testing comprehensive workflow...")

        # 1. Get data from sources
        print(" 1. Retrieving data from sources...")
        for source_id in ["content_strategy", "gap_analysis", "keywords"]:
            try:
                data = await registry.get_data_with_dependencies(source_id, 1, 1)
                print(f" ✅ {source_id}: Data retrieved")
            except Exception as e:
                print(f" ⚠️ {source_id}: Data retrieval failed (expected)")

        # 2. Build enhanced prompts
        print(" 2. Building enhanced prompts...")
        for step in ["step_1_content_strategy_analysis", "step_2_gap_analysis"]:
            try:
                base_prompt = await prompt_builder.build_prompt(step, 1, 1)
                print(f" ✅ {step}: Prompt built")
            except Exception as e:
                print(f" ⚠️ {step}: Prompt building failed (expected)")

        # 3. Check evolution readiness
        print(" 3. Checking evolution readiness...")
        for source_id in ["content_strategy", "gap_analysis", "keywords"]:
            plan = evolution_manager.get_evolution_plan(source_id)
            print(f" ✅ {source_id}: Ready for evolution: {plan['is_ready_for_evolution']}")

        print("✅ Framework integration test completed")
        return True

    except Exception as e:
        print(f"❌ Framework integration test failed: {e}")
        return False


async def test_scalability_features(registry, evolution_manager):
    """Test scalability features of the framework."""
    print("\n🧪 Testing Scalability Features...")

    try:
        # Test adding a custom data source
        print("📈 Testing custom data source addition...")

        # Create a custom data source (simplified)
        from services.calendar_generation_datasource_framework.interfaces import DataSourceInterface, DataSourceType, DataSourcePriority

        class CustomDataSource(DataSourceInterface):
            def __init__(self):
                super().__init__("custom_source", DataSourceType.CUSTOM, DataSourcePriority.LOW)

            async def get_data(self, user_id: int, strategy_id: int):
                return {"custom_data": "test"}

            async def validate_data(self, data):
                return {"is_valid": True, "quality_score": 0.8}

            async def enhance_data(self, data):
                return {**data, "enhanced": True}

        # Register custom source
        custom_source = CustomDataSource()
        registry.register_source(custom_source)
        print("✅ Custom data source registered successfully")

        # Test evolution config addition
        custom_config = {
            "current_version": "1.0.0",
            "target_version": "1.5.0",
            "enhancement_plan": ["Custom enhancement"],
            "implementation_steps": ["Implement custom enhancement"],
            "priority": "low",
            "estimated_effort": "low"
        }

        evolution_manager.add_evolution_config("custom_source", custom_config)
        print("✅ Custom evolution config added successfully")

        # Test framework status with new source
        status = registry.get_registry_status()
        print(f"✅ Framework now has {status['total_sources']} sources")

        return True

    except Exception as e:
        print(f"❌ Scalability test failed: {e}")
        return False


async def main():
    """Run all framework tests."""
    print("🚀 Starting Calendar Generation Data Source Framework Tests...")
    print("=" * 80)

    # Initialize framework
    registry, prompt_builder, quality_manager, evolution_manager = await test_framework_initialization()

    if not all([registry, prompt_builder, quality_manager, evolution_manager]):
        print("❌ Framework initialization failed. Exiting.")
        return False

    # Run individual component tests
    tests = [
        ("Data Source Registry", test_data_source_registry, registry),
        ("Data Source Validation", test_data_source_validation, registry),
        ("Prompt Builder", test_prompt_builder, prompt_builder),
        ("Quality Gates", test_quality_gates, quality_manager),
        ("Evolution Manager", test_evolution_manager, evolution_manager),
        ("Framework Integration", test_framework_integration, registry, prompt_builder, quality_manager, evolution_manager),
        ("Scalability Features", test_scalability_features, registry, evolution_manager)
    ]

    results = []
    for test_name, test_func, *args in tests:
        try:
            result = await test_func(*args)
            results.append((test_name, result))
        except Exception as e:
            print(f"❌ {test_name} test failed with exception: {e}")
            results.append((test_name, False))

    # Print test summary
    print("\n" + "=" * 80)
    print("📋 Test Results Summary:")

    passed = 0
    total = len(results)

    for test_name, result in results:
        status = "✅ PASSED" if result else "❌ FAILED"
        print(f" {status} - {test_name}")
        if result:
            passed += 1

    print(f"\n🎯 Overall Results: {passed}/{total} tests passed")

    if passed == total:
        print("🎉 All tests passed! Framework is working correctly.")
        print("\n✅ Framework Features Verified:")
        print(" - Scalable data source management")
        print(" - Strategy-aware prompt building")
        print(" - Quality gate integration")
        print(" - Evolution management")
        print(" - Framework integration")
        print(" - Scalability and extensibility")
        return True
    else:
        print("⚠️ Some tests failed. Please check the implementation.")
        return False


if __name__ == "__main__":
    # Run the tests
    success = asyncio.run(main())
    sys.exit(0 if success else 1)
264
backend/test/test_content_planning_services.py
Normal file
@@ -0,0 +1,264 @@
#!/usr/bin/env python3
"""Test script for content planning services."""

import asyncio
from loguru import logger

# Import all content planning services
from services.content_gap_analyzer import ContentGapAnalyzer
from services.competitor_analyzer import CompetitorAnalyzer
from services.keyword_researcher import KeywordResearcher
from services.ai_engine_service import AIEngineService
from services.website_analyzer import WebsiteAnalyzer

async def test_content_planning_services():
    """Test all content planning services."""
    logger.info("🧪 Testing Content Planning Services")

    try:
        # Test 1: Initialize all services
        logger.info("1. Initializing services...")
        content_gap_analyzer = ContentGapAnalyzer()
        competitor_analyzer = CompetitorAnalyzer()
        keyword_researcher = KeywordResearcher()
        ai_engine = AIEngineService()
        website_analyzer = WebsiteAnalyzer()
        logger.info("✅ All services initialized successfully")

        # Test 2: Test content gap analysis
        logger.info("2. Testing content gap analysis...")
        target_url = "https://alwrity.com"
        competitor_urls = ["https://competitor1.com", "https://competitor2.com"]
        target_keywords = ["content planning", "digital marketing", "seo strategy"]

        gap_analysis = await content_gap_analyzer.analyze_comprehensive_gap(
            target_url=target_url,
            competitor_urls=competitor_urls,
            target_keywords=target_keywords,
            industry="technology"
        )

        if gap_analysis:
            logger.info(f"✅ Content gap analysis completed: {len(gap_analysis.get('recommendations', []))} recommendations")
        else:
            logger.warning("⚠️ Content gap analysis returned empty results")

        # Test 3: Test competitor analysis
        logger.info("3. Testing competitor analysis...")
        competitor_analysis = await competitor_analyzer.analyze_competitors(
            competitor_urls=competitor_urls,
            industry="technology"
        )

        if competitor_analysis:
            logger.info(f"✅ Competitor analysis completed: {len(competitor_analysis.get('competitors', []))} competitors analyzed")
        else:
            logger.warning("⚠️ Competitor analysis returned empty results")

        # Test 4: Test keyword research
        logger.info("4. Testing keyword research...")
        keyword_analysis = await keyword_researcher.analyze_keywords(
            industry="technology",
            url=target_url,
            target_keywords=target_keywords
        )

        if keyword_analysis:
            logger.info(f"✅ Keyword analysis completed: {len(keyword_analysis.get('opportunities', []))} opportunities found")
        else:
            logger.warning("⚠️ Keyword analysis returned empty results")

        # Test 5: Test website analysis
        logger.info("5. Testing website analysis...")
        website_analysis = await website_analyzer.analyze_website(
            url=target_url,
            industry="technology"
        )

        if website_analysis:
            logger.info(f"✅ Website analysis completed: {website_analysis.get('content_analysis', {}).get('total_pages', 0)} pages analyzed")
        else:
            logger.warning("⚠️ Website analysis returned empty results")

        # Test 6: Test AI engine
        logger.info("6. Testing AI engine...")
        analysis_summary = {
            'target_url': target_url,
            'industry': 'technology',
            'serp_opportunities': 5,
            'expanded_keywords_count': 25,
            'competitors_analyzed': 2,
            'dominant_themes': ['content strategy', 'digital marketing', 'seo']
        }

        ai_insights = await ai_engine.analyze_content_gaps(analysis_summary)

        if ai_insights:
            logger.info(f"✅ AI insights generated: {len(ai_insights.get('strategic_insights', []))} insights")
        else:
            logger.warning("⚠️ AI insights returned empty results")

        # Test 7: Test content quality analysis
        logger.info("7. Testing content quality analysis...")
        content_quality = await website_analyzer.analyze_content_quality(target_url)

        if content_quality:
            logger.info(f"✅ Content quality analysis completed: Score {content_quality.get('overall_quality_score', 0)}/10")
        else:
            logger.warning("⚠️ Content quality analysis returned empty results")

        # Test 8: Test user experience analysis
        logger.info("8. Testing user experience analysis...")
        ux_analysis = await website_analyzer.analyze_user_experience(target_url)

        if ux_analysis:
            logger.info(f"✅ UX analysis completed: Score {ux_analysis.get('overall_ux_score', 0)}/10")
        else:
            logger.warning("⚠️ UX analysis returned empty results")

        # Test 9: Test keyword expansion
        logger.info("9. Testing keyword expansion...")
        seed_keywords = ["content planning", "digital marketing"]
        expanded_keywords = await keyword_researcher.expand_keywords(
            seed_keywords=seed_keywords,
            industry="technology"
        )

        if expanded_keywords:
            logger.info(f"✅ Keyword expansion completed: {len(expanded_keywords.get('expanded_keywords', []))} keywords generated")
        else:
            logger.warning("⚠️ Keyword expansion returned empty results")

        # Test 10: Test search intent analysis
        logger.info("10. Testing search intent analysis...")
        keywords = ["content planning guide", "digital marketing tips", "seo best practices"]
        intent_analysis = await keyword_researcher.analyze_search_intent(keywords)

        if intent_analysis:
            logger.info(f"✅ Search intent analysis completed: {len(intent_analysis.get('keyword_intents', {}))} keywords analyzed")
        else:
            logger.warning("⚠️ Search intent analysis returned empty results")

        logger.info("🎉 All content planning services tested successfully!")
        return True

    except Exception as e:
        logger.error(f"❌ Error testing content planning services: {str(e)}")
        return False

async def test_ai_engine_features():
    """Test specific AI engine features."""
    logger.info("🤖 Testing AI Engine Features")

    try:
        ai_engine = AIEngineService()

        # Test market position analysis
        market_data = {
            'competitors_analyzed': 3,
            'avg_content_count': 150,
            'avg_quality_score': 8.5,
            'frequency_distribution': {'3x/week': 2, '2x/week': 1},
            'industry': 'technology'
        }

        market_position = await ai_engine.analyze_market_position(market_data)
        if market_position:
            logger.info("✅ Market position analysis completed")
        else:
            logger.warning("⚠️ Market position analysis failed")

        # Test content recommendations
        analysis_data = {
            'target_url': 'https://alwrity.com',
            'industry': 'technology',
            'keywords': ['content planning', 'digital marketing'],
            'competitors': ['competitor1.com', 'competitor2.com']
        }

        recommendations = await ai_engine.generate_content_recommendations(analysis_data)
        if recommendations:
            logger.info(f"✅ Content recommendations generated: {len(recommendations)} recommendations")
        else:
            logger.warning("⚠️ Content recommendations failed")

        # Test performance predictions
        content_data = {
            'content_type': 'blog_post',
            'target_keywords': ['content planning'],
            'industry': 'technology',
            'content_length': 1500
        }

        predictions = await ai_engine.predict_content_performance(content_data)
        if predictions:
            logger.info("✅ Performance predictions generated")
        else:
            logger.warning("⚠️ Performance predictions failed")

        # Test competitive intelligence
        competitor_data = {
            'competitors': ['competitor1.com', 'competitor2.com'],
            'industry': 'technology',
            'analysis_depth': 'comprehensive'
        }

        competitive_intelligence = await ai_engine.analyze_competitive_intelligence(competitor_data)
        if competitive_intelligence:
            logger.info("✅ Competitive intelligence analysis completed")
        else:
            logger.warning("⚠️ Competitive intelligence analysis failed")

        # Test strategic insights
        analysis_data = {
            'industry': 'technology',
            'target_audience': 'marketing professionals',
            'business_goals': ['increase traffic', 'improve conversions'],
            'current_performance': 'moderate'
        }

        strategic_insights = await ai_engine.generate_strategic_insights(analysis_data)
        if strategic_insights:
            logger.info(f"✅ Strategic insights generated: {len(strategic_insights)} insights")
        else:
            logger.warning("⚠️ Strategic insights failed")

        # Test content quality analysis
        content_data = {
            'content_text': 'Sample content for analysis',
            'target_keywords': ['content planning'],
            'industry': 'technology'
        }

        quality_analysis = await ai_engine.analyze_content_quality(content_data)
        if quality_analysis:
            logger.info(f"✅ Content quality analysis completed: Score {quality_analysis.get('overall_quality_score', 0)}/10")
        else:
            logger.warning("⚠️ Content quality analysis failed")

        logger.info("🎉 All AI engine features tested successfully!")
        return True

    except Exception as e:
        logger.error(f"❌ Error testing AI engine features: {str(e)}")
        return False

async def main():
    """Main test function."""
    logger.info("🚀 Starting Content Planning Services Test Suite")

    # Test 1: Basic services
    services_result = await test_content_planning_services()

    # Test 2: AI engine features
    ai_result = await test_ai_engine_features()

    if services_result and ai_result:
        logger.info("🎉 All tests passed! Content Planning Services are ready for Phase 1 implementation.")
    else:
        logger.error("❌ Some tests failed. Please check the logs above.")

    logger.info("🏁 Test suite completed")

if __name__ == "__main__":
    asyncio.run(main())
139
backend/test/test_database.py
Normal file
@@ -0,0 +1,139 @@
"""
Test script for database functionality.
"""

import sys
import os
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from services.database import init_database, get_db_session, close_database
from services.website_analysis_service import WebsiteAnalysisService
from models.onboarding import WebsiteAnalysis, OnboardingSession

def test_database_functionality():
    """Test database initialization and basic operations."""
    try:
        print("Testing database functionality...")

        # Initialize database
        init_database()
        print("✅ Database initialized successfully")

        # Get database session
        db_session = get_db_session()
        if not db_session:
            print("❌ Failed to get database session")
            return False

        print("✅ Database session created successfully")

        # Test website analysis service
        analysis_service = WebsiteAnalysisService(db_session)
        print("✅ Website analysis service created successfully")

        # Test creating a session
        session = OnboardingSession(user_id=1, current_step=2, progress=25.0)
        db_session.add(session)
        db_session.commit()
        print(f"✅ Created onboarding session with ID: {session.id}")

        # Test saving analysis
        test_analysis_data = {
            'style_analysis': {
                'writing_style': {
                    'tone': 'professional',
                    'voice': 'active',
                    'complexity': 'moderate',
                    'engagement_level': 'high'
                },
                'target_audience': {
                    'demographics': ['professionals', 'business owners'],
                    'expertise_level': 'intermediate',
                    'industry_focus': 'technology',
                    'geographic_focus': 'global'
                },
                'content_type': {
                    'primary_type': 'blog',
                    'secondary_types': ['article', 'guide'],
                    'purpose': 'informational',
                    'call_to_action': 'subscribe'
                },
                'recommended_settings': {
                    'writing_tone': 'professional',
                    'target_audience': 'business professionals',
                    'content_type': 'blog posts',
                    'creativity_level': 'balanced',
                    'geographic_location': 'global'
                }
            },
            'crawl_result': {
                'content': 'Sample website content...',
                'word_count': 1500
            },
            'style_patterns': {
                'sentence_length': 'medium',
                'paragraph_structure': 'well-organized'
            },
            'style_guidelines': {
                'tone_guidelines': 'Maintain professional tone',
                'structure_guidelines': 'Use clear headings'
            }
        }

        analysis_id = analysis_service.save_analysis(
            session_id=session.id,
            website_url='https://example.com',
            analysis_data=test_analysis_data
        )

        if analysis_id:
            print(f"✅ Saved analysis with ID: {analysis_id}")
        else:
            print("❌ Failed to save analysis")
            return False

        # Test retrieving analysis
        analysis = analysis_service.get_analysis(analysis_id)
        if analysis:
            print("✅ Retrieved analysis successfully")
            print(f" Website URL: {analysis['website_url']}")
            print(f" Writing Style: {analysis['writing_style']['tone']}")
        else:
            print("❌ Failed to retrieve analysis")
            return False

        # Test checking existing analysis
        existing_check = analysis_service.check_existing_analysis(
            session_id=session.id,
            website_url='https://example.com'
        )

        if existing_check and existing_check.get('exists'):
            print("✅ Existing analysis check works")
        else:
            print("❌ Existing analysis check failed")
            return False

        # Clean up
        if analysis_id:
            analysis_service.delete_analysis(analysis_id)
        db_session.delete(session)
        db_session.commit()
        print("✅ Cleanup completed")

        print("\n🎉 All database tests passed!")
        return True

    except Exception as e:
        print(f"❌ Database test failed: {str(e)}")
        return False
    finally:
        close_database()

if __name__ == "__main__":
    success = test_database_functionality()
    if success:
        print("\n✅ Database functionality is working correctly!")
    else:
        print("\n❌ Database functionality has issues!")
        sys.exit(1)
43
backend/test/test_detailed.py
Normal file
@@ -0,0 +1,43 @@
import requests
import json

# Test the research endpoint with more detailed output
url = "http://localhost:8000/api/blog/research"
payload = {
    "keywords": ["AI content generation", "blog writing"],
    "topic": "ALwrity content generation",
    "industry": "Technology",
    "target_audience": "content creators"
}

try:
    print("Sending request to research endpoint...")
    response = requests.post(url, json=payload, timeout=60)
    print(f"Status Code: {response.status_code}")

    if response.status_code == 200:
        data = response.json()
        print("\n=== FULL RESPONSE ===")
        print(json.dumps(data, indent=2))

        # Check if we got the expected fields
        expected_fields = ['success', 'sources', 'keyword_analysis', 'competitor_analysis', 'suggested_angles', 'search_widget', 'search_queries']
        print(f"\n=== FIELD ANALYSIS ===")
        for field in expected_fields:
            value = data.get(field)
            if field == 'sources':
                print(f"{field}: {len(value) if value else 0} items")
            elif field == 'search_queries':
                print(f"{field}: {len(value) if value else 0} items")
            elif field == 'search_widget':
                print(f"{field}: {'Present' if value else 'Missing'}")
            else:
                print(f"{field}: {type(value).__name__} - {str(value)[:100]}...")

    else:
        print(f"Error Response: {response.text}")

except Exception as e:
    print(f"Request failed: {e}")
    import traceback
    traceback.print_exc()
228
backend/test/test_enhanced_prompt_generation.py
Normal file
@@ -0,0 +1,228 @@
#!/usr/bin/env python3
"""
Test Script for Enhanced LinkedIn Prompt Generation

This script demonstrates how the enhanced LinkedIn prompt generator analyzes
generated content and creates context-aware image prompts.
"""

import asyncio
import sys
import os
from pathlib import Path

# Add the backend directory to the Python path
backend_path = Path(__file__).parent
sys.path.insert(0, str(backend_path))

from loguru import logger

# Configure logging
logger.remove()
logger.add(sys.stdout, colorize=True, format="<level>{level}</level>| {message}")


async def test_enhanced_prompt_generation():
    """Test the enhanced LinkedIn prompt generation with content analysis."""

    logger.info("🧪 Testing Enhanced LinkedIn Prompt Generation")
    logger.info("=" * 70)

    try:
        # Import the enhanced prompt generator
        from services.linkedin.image_prompts import LinkedInPromptGenerator

        # Initialize the service
        prompt_generator = LinkedInPromptGenerator()
        logger.success("✅ LinkedIn Prompt Generator initialized successfully")

        # Test cases with different types of LinkedIn content
        test_cases = [
            {
                'name': 'AI Marketing Post',
                'content': {
                    'topic': 'AI in Marketing',
                    'industry': 'Technology',
                    'content_type': 'post',
                    'content': """🚀 Exciting news! Artificial Intelligence is revolutionizing how we approach marketing strategies.

Here are 3 game-changing ways AI is transforming the industry:

1️⃣ **Predictive Analytics**: AI algorithms can now predict customer behavior with 95% accuracy, allowing marketers to create hyper-personalized campaigns.

2️⃣ **Content Optimization**: Machine learning models analyze engagement patterns to optimize content timing, format, and messaging for maximum impact.

3️⃣ **Automated Personalization**: AI-powered tools automatically adjust marketing messages based on individual user preferences and behavior.

The future of marketing is here, and it's powered by AI! 🎯

What's your experience with AI in marketing? Share your thoughts below! 👇

#AIMarketing #DigitalTransformation #MarketingInnovation #TechTrends #FutureOfMarketing"""
                }
            },
            {
                'name': 'Leadership Article',
                'content': {
                    'topic': 'Building High-Performance Teams',
                    'industry': 'Business',
                    'content_type': 'article',
                    'content': """Building High-Performance Teams: A Comprehensive Guide

In today's competitive business landscape, the ability to build and lead high-performance teams is not just a skill—it's a strategic imperative. After 15 years of leading teams across various industries, I've identified the key principles that consistently drive exceptional results.

**The Foundation: Clear Vision and Purpose**
Every high-performance team starts with a crystal-clear understanding of their mission. Team members need to know not just what they're doing, but why it matters. This creates intrinsic motivation that external rewards simply cannot match.

**Communication: The Lifeblood of Success**
Effective communication in high-performance teams goes beyond regular meetings. It involves creating an environment where feedback flows freely, ideas are shared without fear, and every voice is heard and valued.

**Trust and Psychological Safety**
High-performance teams operate in environments where team members feel safe to take risks, make mistakes, and learn from failures. This psychological safety is the bedrock of innovation and continuous improvement.

**Continuous Learning and Adaptation**
The best teams never rest on their laurels. They continuously seek new knowledge, adapt to changing circumstances, and evolve their approaches based on results and feedback.

**Results and Accountability**
While process matters, high-performance teams are ultimately measured by their results. Clear metrics, regular check-ins, and a culture of accountability ensure that the team stays focused on delivering value.

Building high-performance teams is both an art and a science. It requires patience, persistence, and a genuine commitment to developing people. The investment pays dividends not just in results, but in the satisfaction of seeing individuals grow and teams achieve what once seemed impossible.

What strategies have you found most effective in building high-performance teams? Share your insights in the comments below."""
                }
            },
            {
                'name': 'Data Analytics Carousel',
                'content': {
                    'topic': 'Data-Driven Decision Making',
                    'industry': 'Finance',
                    'content_type': 'carousel',
                    'content': """📊 Data-Driven Decision Making: Your Competitive Advantage

Slide 1: The Power of Data
• 73% of companies using data-driven decision making report improved performance
• Data-driven organizations are 23x more likely to acquire customers
• 58% of executives say data analytics has improved their decision-making process

Slide 2: Key Metrics to Track
• Customer Acquisition Cost (CAC)
• Customer Lifetime Value (CLV)
• Conversion Rates
• Churn Rate
• Revenue Growth

Slide 3: Implementation Steps
1. Define clear objectives
2. Identify relevant data sources
3. Establish data quality standards
4. Build analytical capabilities
5. Create feedback loops

Slide 4: Common Pitfalls
• Analysis paralysis
• Ignoring qualitative insights
• Not validating assumptions
• Over-relying on historical data
• Poor data visualization

Slide 5: Success Stories
• Netflix: 75% of viewing decisions influenced by data
• Amazon: Dynamic pricing increases revenue by 25%
• Spotify: Personalized recommendations drive 40% of listening time

Slide 6: Getting Started
• Start small with key metrics
• Invest in data literacy training
• Use visualization tools
• Establish regular review cycles
• Celebrate data-driven wins

Ready to transform your decision-making process? Let's discuss your data strategy! 💬

#DataDriven #Analytics #BusinessIntelligence #DecisionMaking #Finance #Strategy"""
                }
            }
        ]

        # Test each case
        for i, test_case in enumerate(test_cases, 1):
            logger.info(f"\n📝 Test Case {i}: {test_case['name']}")
            logger.info("-" * 50)

            # Generate prompts using the enhanced generator
            prompts = await prompt_generator.generate_three_prompts(
                test_case['content'],
                aspect_ratio="1:1"
            )

            if prompts and len(prompts) >= 3:
                logger.success(f"✅ Generated {len(prompts)} context-aware prompts")

                # Display each prompt
                for j, prompt in enumerate(prompts, 1):
                    logger.info(f"\n🎨 Prompt {j}: {prompt['style']}")
                    logger.info(f" Description: {prompt['description']}")
                    logger.info(f" Content Context: {prompt.get('content_context', 'N/A')}")

                    # Show a preview of the prompt
                    prompt_text = prompt['prompt']
                    if len(prompt_text) > 200:
                        prompt_text = prompt_text[:200] + "..."
                    logger.info(f" Prompt Preview: {prompt_text}")

                    # Validate prompt quality
                    quality_result = await prompt_generator.validate_prompt_quality(prompt)
                    if quality_result.get('valid'):
                        logger.success(f" ✅ Quality Score: {quality_result['overall_score']}/100")
                    else:
                        logger.warning(f" ⚠️ Quality Score: {quality_result.get('overall_score', 'N/A')}/100")
            else:
                logger.error(f"❌ Failed to generate prompts for {test_case['name']}")

        # Test content analysis functionality directly
        logger.info(f"\n🔍 Testing Content Analysis Functionality")
        logger.info("-" * 50)

        test_content = test_cases[0]['content']['content']
        content_analysis = prompt_generator._analyze_content_for_image_context(
            test_content,
            test_cases[0]['content']['content_type']
        )

        logger.info("Content Analysis Results:")
        for key, value in content_analysis.items():
            logger.info(f" {key}: {value}")

        logger.info("=" * 70)
        logger.success("🎉 Enhanced LinkedIn Prompt Generation Test Completed Successfully!")

        return True

    except ImportError as e:
        logger.error(f"❌ Import Error: {e}")
        return False

    except Exception as e:
        logger.error(f"❌ Test Failed: {e}")
        import traceback
        logger.error(f"Traceback: {traceback.format_exc()}")
        return False


async def main():
    """Main test function."""
    logger.info("🚀 Starting Enhanced LinkedIn Prompt Generation Tests")

    success = await test_enhanced_prompt_generation()

    if success:
        logger.success("✅ All tests passed! The enhanced prompt generation is working correctly.")
        sys.exit(0)
    else:
        logger.error("❌ Some tests failed. Please check the errors above.")
        sys.exit(1)


if __name__ == "__main__":
    # Run the async test
    asyncio.run(main())
201
backend/test/test_enhanced_strategy_processing.py
Normal file
@@ -0,0 +1,201 @@
#!/usr/bin/env python3
"""
Test script for Enhanced Strategy Data Processing.
Verifies that the enhanced strategy data processing is working correctly.
"""

import asyncio
import sys
import os
from pathlib import Path

# Add the backend directory to the Python path
backend_dir = Path(__file__).parent
sys.path.insert(0, str(backend_dir))

from services.content_planning_db import ContentPlanningDBService

async def test_enhanced_strategy_processing():
    """Test the enhanced strategy data processing functionality."""
    print("🧪 Testing Enhanced Strategy Data Processing...")

    try:
        # Initialize the database service
        db_service = ContentPlanningDBService()

        # Test with a sample strategy ID
        strategy_id = 1  # You can change this to test with different strategies

        print(f"📊 Testing strategy data retrieval for strategy ID: {strategy_id}")

        # Test the enhanced strategy data retrieval
        strategy_data = await db_service.get_strategy_data(strategy_id)

        if strategy_data:
            print("✅ Strategy data retrieved successfully!")
            print(f"📈 Strategy data contains {len(strategy_data)} fields")

            # Check for enhanced fields
            enhanced_fields = [
                "strategy_analysis",
                "quality_indicators",
                "data_completeness",
                "strategic_alignment",
                "quality_gate_data",
                "prompt_chain_data"
            ]

            print("\n🔍 Checking for enhanced strategy fields:")
            for field in enhanced_fields:
                if field in strategy_data:
                    print(f" ✅ {field}: Present")
                    if isinstance(strategy_data[field], dict):
                        print(f" Contains {len(strategy_data[field])} sub-fields")
                else:
                    print(f" ❌ {field}: Missing")

            # Check strategy analysis
            if "strategy_analysis" in strategy_data:
                analysis = strategy_data["strategy_analysis"]
                print(f"\n📊 Strategy Analysis:")
                print(f" - Completion Percentage: {analysis.get('completion_percentage', 0)}%")
                print(f" - Filled Fields: {analysis.get('filled_fields', 0)}/{analysis.get('total_fields', 30)}")
                print(f" - Data Quality Score: {analysis.get('data_quality_score', 0)}%")
                print(f" - Strategy Coherence: {analysis.get('strategy_coherence', {}).get('overall_coherence', 0)}%")

            # Check quality indicators
            if "quality_indicators" in strategy_data:
                quality = strategy_data["quality_indicators"]
                print(f"\n🎯 Quality Indicators:")
                print(f" - Data Completeness: {quality.get('data_completeness', 0)}%")
                print(f" - Strategic Alignment: {quality.get('strategic_alignment', 0)}%")
                print(f" - Market Relevance: {quality.get('market_relevance', 0)}%")
                print(f" - Audience Alignment: {quality.get('audience_alignment', 0)}%")
                print(f" - Content Strategy Coherence: {quality.get('content_strategy_coherence', 0)}%")
                print(f" - Overall Quality Score: {quality.get('overall_quality_score', 0)}%")

            # Check quality gate data
            if "quality_gate_data" in strategy_data:
                quality_gates = strategy_data["quality_gate_data"]
                print(f"\n🚪 Quality Gate Data:")
                for gate_name, gate_data in quality_gates.items():
                    if isinstance(gate_data, dict):
                        print(f" - {gate_name}: {len(gate_data)} fields")
                    else:
                        print(f" - {gate_name}: {type(gate_data).__name__}")

            # Check prompt chain data
            if "prompt_chain_data" in strategy_data:
                prompt_chain = strategy_data["prompt_chain_data"]
                print(f"\n🔗 Prompt Chain Data:")
                for step_name, step_data in prompt_chain.items():
                    if isinstance(step_data, dict):
                        print(f" - {step_name}: {len(step_data)} sub-sections")
                    else:
                        print(f" - {step_name}: {type(step_data).__name__}")

            print(f"\n✅ Enhanced Strategy Data Processing Test PASSED!")
            return True

        else:
            print("❌ No strategy data retrieved")
            return False

    except Exception as e:
        print(f"❌ Error during enhanced strategy data processing test: {str(e)}")
        import traceback
        traceback.print_exc()
        return False

async def test_comprehensive_user_data():
    """Test the comprehensive user data retrieval with enhanced strategy data."""
    print("\n🧪 Testing Comprehensive User Data with Enhanced Strategy...")

    try:
        # Initialize the database service
        db_service = ContentPlanningDBService()

        # Test with a sample user ID and strategy ID
        user_id = 1
        strategy_id = 1

        print(f"📊 Testing comprehensive user data for user {user_id} with strategy {strategy_id}")

        # Test the comprehensive user data retrieval.
        # NOTE: `calendar_service` is referenced here but never initialized in
        # this script; it is assumed to be a calendar generation service that
        # exposes _get_comprehensive_user_data.
        user_data = await calendar_service._get_comprehensive_user_data(user_id, strategy_id)

        if user_data:
            print("✅ Comprehensive user data retrieved successfully!")
            print(f"📈 User data contains {len(user_data)} fields")

            # Check for enhanced strategy fields in user data
            enhanced_fields = [
                "strategy_analysis",
                "quality_indicators",
                "data_completeness",
                "strategic_alignment",
                "quality_gate_data",
                "prompt_chain_data"
            ]

            print("\n🔍 Checking for enhanced strategy fields in user data:")
            for field in enhanced_fields:
                if field in user_data:
                    print(f" ✅ {field}: Present")
                    if isinstance(user_data[field], dict):
                        print(f" Contains {len(user_data[field])} sub-fields")
                else:
                    print(f" ❌ {field}: Missing")

            # Check strategy data quality
            if "strategy_data" in user_data:
                strategy_data = user_data["strategy_data"]
                print(f"\n📊 Strategy Data Quality:")
                print(f" - Strategy ID: {strategy_data.get('strategy_id', 'N/A')}")
                print(f" - Strategy Name: {strategy_data.get('strategy_name', 'N/A')}")
                print(f" - Industry: {strategy_data.get('industry', 'N/A')}")
                print(f" - Content Pillars: {len(strategy_data.get('content_pillars', []))} pillars")
                print(f" - Target Audience: {len(strategy_data.get('target_audience', {}))} audience fields")

            print(f"\n✅ Comprehensive User Data Test PASSED!")
            return True

        else:
            print("❌ No comprehensive user data retrieved")
            return False

    except Exception as e:
        print(f"❌ Error during comprehensive user data test: {str(e)}")
        import traceback
        traceback.print_exc()
        return False

async def main():
    """Run all tests for enhanced strategy data processing."""
    print("🚀 Starting Enhanced Strategy Data Processing Tests...")
    print("=" * 60)

    # Test 1: Enhanced Strategy Data Processing
    test1_passed = await test_enhanced_strategy_processing()

    # Test 2: Comprehensive User Data
    test2_passed = await test_comprehensive_user_data()

    print("\n" + "=" * 60)
    print("📋 Test Results Summary:")
    print(f" ✅ Enhanced Strategy Data Processing: {'PASSED' if test1_passed else 'FAILED'}")
    print(f" ✅ Comprehensive User Data: {'PASSED' if test2_passed else 'FAILED'}")

    if test1_passed and test2_passed:
        print("\n🎉 All Enhanced Strategy Data Processing Tests PASSED!")
        print("✅ The enhanced strategy data processing is working correctly.")
        print("✅ Ready for 12-step prompt chaining and quality gates integration.")
        return True
    else:
        print("\n❌ Some tests failed. Please check the implementation.")
        return False

if __name__ == "__main__":
    # Run the tests
    success = asyncio.run(main())
    sys.exit(0 if success else 1)
232
backend/test/test_facebook_writer.py
Normal file
@@ -0,0 +1,232 @@
"""Test script for Facebook Writer API endpoints."""

import requests
import json
from typing import Dict, Any

# Base URL for the API
BASE_URL = "http://localhost:8000"
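# NOTE: `requests` has no default timeout, so a hung server would block this
# run indefinitely. A hedged variant of the calls below (the 10-second value
# is an arbitrary choice, not part of the original script):
#
#   response = requests.get(f"{BASE_URL}/api/facebook-writer/health", timeout=10)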

def test_health_check():
    """Test the health check endpoint."""
    try:
        response = requests.get(f"{BASE_URL}/api/facebook-writer/health")
        print(f"Health Check: {response.status_code}")
        if response.status_code == 200:
            print(f"Response: {response.json()}")
        return response.status_code == 200
    except Exception as e:
        print(f"Health check failed: {e}")
        return False


def test_get_tools():
    """Test getting available tools."""
    try:
        response = requests.get(f"{BASE_URL}/api/facebook-writer/tools")
        print(f"Get Tools: {response.status_code}")
        if response.status_code == 200:
            data = response.json()
            print(f"Available tools: {data['total_count']}")
            for tool in data['tools'][:3]:  # Show first 3 tools
                print(f"  - {tool['name']}: {tool['description']}")
        return response.status_code == 200
    except Exception as e:
        print(f"Get tools failed: {e}")
        return False


def test_generate_post():
    """Test Facebook post generation."""
    payload = {
        "business_type": "Fitness coach",
        "target_audience": "Fitness enthusiasts aged 25-35",
        "post_goal": "Increase engagement",
        "post_tone": "Inspirational",
        "include": "Success story, workout tips",
        "avoid": "Generic advice",
        "media_type": "Image",
        "advanced_options": {
            "use_hook": True,
            "use_story": True,
            "use_cta": True,
            "use_question": True,
            "use_emoji": True,
            "use_hashtags": True
        }
    }

    try:
        response = requests.post(
            f"{BASE_URL}/api/facebook-writer/post/generate",
            json=payload,
            headers={"Content-Type": "application/json"}
        )
        print(f"Generate Post: {response.status_code}")
        if response.status_code == 200:
            data = response.json()
            if data['success']:
                print(f"Post generated successfully!")
                print(f"Content preview: {data['content'][:100]}...")
                if data.get('analytics'):
                    print(f"Expected reach: {data['analytics']['expected_reach']}")
            else:
                print(f"Generation failed: {data.get('error', 'Unknown error')}")
        else:
            print(f"Request failed: {response.text}")
        return response.status_code == 200
    except Exception as e:
        print(f"Generate post failed: {e}")
        return False


def test_generate_story():
    """Test Facebook story generation."""
    payload = {
        "business_type": "Fashion brand",
        "target_audience": "Fashion enthusiasts aged 18-30",
        "story_type": "Product showcase",
        "story_tone": "Fun",
        "include": "Behind the scenes",
        "avoid": "Too much text",
        "visual_options": {
            "background_type": "Gradient",
            "text_overlay": True,
            "stickers": True,
            "interactive_elements": True
        }
    }

    try:
        response = requests.post(
            f"{BASE_URL}/api/facebook-writer/story/generate",
            json=payload,
            headers={"Content-Type": "application/json"}
        )
        print(f"Generate Story: {response.status_code}")
        if response.status_code == 200:
            data = response.json()
            if data['success']:
                print(f"Story generated successfully!")
                print(f"Content preview: {data['content'][:100]}...")
                if data.get('visual_suggestions'):
                    print(f"Visual suggestions: {len(data['visual_suggestions'])} items")
            else:
                print(f"Generation failed: {data.get('error', 'Unknown error')}")
        return response.status_code == 200
    except Exception as e:
        print(f"Generate story failed: {e}")
        return False


def test_generate_ad_copy():
    """Test Facebook ad copy generation."""
    payload = {
        "business_type": "E-commerce store",
        "product_service": "Wireless headphones",
        "ad_objective": "Conversions",
        "ad_format": "Single image",
        "target_audience": "Tech enthusiasts and music lovers",
        "targeting_options": {
            "age_group": "25-34",
            "interests": "Technology, Music, Audio equipment",
            "location": "United States"
        },
        "unique_selling_proposition": "Premium sound quality at affordable prices",
        "offer_details": "20% off for first-time buyers",
        "budget_range": "Medium"
    }

    try:
        response = requests.post(
            f"{BASE_URL}/api/facebook-writer/ad-copy/generate",
            json=payload,
            headers={"Content-Type": "application/json"}
        )
        print(f"Generate Ad Copy: {response.status_code}")
        if response.status_code == 200:
            data = response.json()
            if data['success']:
                print(f"Ad copy generated successfully!")
                if data.get('primary_ad_copy'):
                    print(f"Headline: {data['primary_ad_copy'].get('headline', 'N/A')}")
                if data.get('performance_predictions'):
                    print(f"Estimated reach: {data['performance_predictions']['estimated_reach']}")
            else:
                print(f"Generation failed: {data.get('error', 'Unknown error')}")
        return response.status_code == 200
    except Exception as e:
        print(f"Generate ad copy failed: {e}")
        return False


def test_analyze_engagement():
    """Test engagement analysis."""
    payload = {
        "content": "🚀 Ready to transform your fitness journey? Our new 30-day challenge is here! Join thousands who've already seen amazing results. What's your biggest fitness goal? 💪 #FitnessMotivation #Challenge #Transformation",
        "content_type": "Post",
        "analysis_type": "Performance prediction",
        "business_type": "Fitness coach",
        "target_audience": "Fitness enthusiasts"
    }

    try:
        response = requests.post(
            f"{BASE_URL}/api/facebook-writer/engagement/analyze",
            json=payload,
            headers={"Content-Type": "application/json"}
        )
        print(f"Analyze Engagement: {response.status_code}")
        if response.status_code == 200:
            data = response.json()
            if data['success']:
                print(f"Analysis completed successfully!")
                print(f"Content score: {data.get('content_score', 'N/A')}/100")
                if data.get('engagement_metrics'):
                    print(f"Predicted engagement: {data['engagement_metrics']['predicted_engagement_rate']}")
            else:
                print(f"Analysis failed: {data.get('error', 'Unknown error')}")
        return response.status_code == 200
    except Exception as e:
        print(f"Analyze engagement failed: {e}")
        return False


def main():
    """Run all tests."""
    print("🧪 Testing Facebook Writer API Endpoints")
    print("=" * 50)

    tests = [
        ("Health Check", test_health_check),
        ("Get Tools", test_get_tools),
        ("Generate Post", test_generate_post),
        ("Generate Story", test_generate_story),
        ("Generate Ad Copy", test_generate_ad_copy),
        ("Analyze Engagement", test_analyze_engagement)
    ]

    results = []
    for test_name, test_func in tests:
        print(f"\n🔍 Running {test_name}...")
        try:
            success = test_func()
            results.append((test_name, success))
            status = "✅ PASS" if success else "❌ FAIL"
            print(f"{status}")
        except Exception as e:
            print(f"❌ FAIL - {e}")
            results.append((test_name, False))

    print(f"\n📊 Test Results Summary")
    print("=" * 50)
    passed = sum(1 for _, success in results if success)
    total = len(results)

    for test_name, success in results:
        status = "✅ PASS" if success else "❌ FAIL"
        print(f"{status} {test_name}")

    print(f"\nOverall: {passed}/{total} tests passed ({passed/total*100:.1f}%)")

    if passed == total:
        print("🎉 All tests passed! Facebook Writer API is ready to use.")
    else:
        print("⚠️ Some tests failed. Check the server logs for details.")
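# NOTE: unlike the other test scripts in this directory, this harness does not
# set a process exit code. A hedged tweak for CI use (requires `import sys` at
# the top of the file): have main() end with `return passed == total`, then
# call it as `sys.exit(0 if main() else 1)`.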
if __name__ == "__main__":
    main()
140
backend/test/test_full_flow.py
Normal file
@@ -0,0 +1,140 @@
#!/usr/bin/env python3
"""
Test the full 12-step calendar generation process to verify Step 5 fix.
"""

import asyncio
import time
from loguru import logger
import sys
import os

# Add the backend directory to the path
backend_dir = os.path.dirname(os.path.abspath(__file__))
if backend_dir not in sys.path:
    sys.path.insert(0, backend_dir)
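# NOTE: __file__ lives in backend/test/, so backend_dir above resolves to the
# test directory rather than the backend root. The `services` import below
# therefore assumes the script is launched from the backend directory (or that
# backend/ is already on PYTHONPATH). A hedged alternative that adds the
# backend root explicitly:
#
#   backend_root = os.path.dirname(backend_dir)
#   if backend_root not in sys.path:
#       sys.path.insert(0, backend_root)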

from services.calendar_generation_datasource_framework.prompt_chaining.orchestrator import PromptChainOrchestrator


async def test_full_12_step_process():
    """Test the complete 12-step process to verify Step 5 fix."""
    try:
        logger.info("🧪 Testing full 12-step calendar generation process")

        # Create orchestrator
        logger.info("✅ Creating orchestrator...")
        orchestrator = PromptChainOrchestrator()

        # Test parameters
        user_id = 1
        strategy_id = 1
        calendar_type = "monthly"
        industry = "technology"
        business_size = "sme"

        logger.info(f"🎯 Starting calendar generation for user {user_id}, strategy {strategy_id}")
        logger.info(f"📋 Parameters: {calendar_type}, {industry}, {business_size}")

        # Start the full process
        start_time = time.time()

        # Generate calendar using the orchestrator's main method
        logger.info("🚀 Executing full 12-step process...")
        final_calendar = await orchestrator.generate_calendar(
            user_id=user_id,
            strategy_id=strategy_id,
            calendar_type=calendar_type,
            industry=industry,
            business_size=business_size
        )

        # Extract context from the result for analysis
        context = {
            "step_results": final_calendar.get("step_results", {}),
            "quality_scores": final_calendar.get("quality_scores", {})
        }

        execution_time = time.time() - start_time

        logger.info(f"✅ Full 12-step process completed in {execution_time:.2f} seconds")

        # Analyze results
        step_results = context.get("step_results", {})
        quality_scores = context.get("quality_scores", {})

        logger.info("📊 Step Results Analysis:")
        logger.info(f"   Total steps executed: {len(step_results)}")

        # Check each step
        for step_key in sorted(step_results.keys()):
            step_result = step_results[step_key]
            status = step_result.get("status", "unknown")
            quality_score = step_result.get("quality_score", 0.0)
            validation_passed = step_result.get("validation_passed", False)

            logger.info(f"   {step_key}: status={status}, quality={quality_score:.2f}, validation_passed={validation_passed}")

            if status == "failed" or status == "error":
                logger.error(f"   ❌ {step_key} failed with status: {status}")
                error_message = step_result.get("error_message", "No error message")
                logger.error(f"   Error: {error_message}")

        # Check Step 5 specifically
        step_05_result = step_results.get("step_05", {})
        if step_05_result:
            step_05_status = step_05_result.get("status", "unknown")
            step_05_quality = step_05_result.get("quality_score", 0.0)
            step_05_validation = step_05_result.get("validation_passed", False)

            logger.info(f"🎯 Step 5 Analysis:")
            logger.info(f"   Status: {step_05_status}")
            logger.info(f"   Quality Score: {step_05_quality:.2f}")
            logger.info(f"   Validation Passed: {step_05_validation}")

            if step_05_status == "completed" and step_05_validation:
                logger.info("✅ Step 5 FIX VERIFIED - Working correctly in full process!")
            else:
                logger.error("❌ Step 5 still has issues in full process")
        else:
            logger.error("❌ Step 5 result not found in step_results")

        # Overall quality
        overall_quality = sum(quality_scores.values()) / len(quality_scores) if quality_scores else 0.0
        logger.info(f"📊 Overall Quality Score: {overall_quality:.2f}")

        # Success criteria
        completed_steps = sum(1 for result in step_results.values() if result.get("status") == "completed")
        total_steps = len(step_results)

        logger.info(f"📊 Process Summary:")
        logger.info(f"   Completed Steps: {completed_steps}/{total_steps}")
        logger.info(f"   Success Rate: {(completed_steps/total_steps)*100:.1f}%")
        logger.info(f"   Overall Quality: {overall_quality:.2f}")

        if completed_steps == total_steps and overall_quality > 0.8:
            logger.info("🎉 SUCCESS: Full 12-step process completed successfully!")
            return True
        else:
            logger.error("❌ FAILURE: Full 12-step process had issues")
            return False

    except Exception as e:
        logger.error(f"❌ Error in full 12-step process test: {str(e)}")
        import traceback
        logger.error(f"📋 Traceback: {traceback.format_exc()}")
        return False


if __name__ == "__main__":
    # Configure logging
    logger.remove()
    logger.add(sys.stderr, level="INFO", format="<green>{time:HH:mm:ss}</green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>")

    # Run the test
    success = asyncio.run(test_full_12_step_process())

    if success:
        print("\n🎉 Test completed successfully!")
        sys.exit(0)
    else:
        print("\n❌ Test failed!")
        sys.exit(1)
60
backend/test/test_gemini_direct.py
Normal file
@@ -0,0 +1,60 @@
import asyncio
from services.llm_providers.gemini_grounded_provider import GeminiGroundedProvider
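# NOTE: there is no sys.path bootstrap here, so this import assumes the script
# is run from the backend root (e.g. `python test/test_gemini_direct.py`) or
# that backend/ is already on PYTHONPATH.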

async def test_gemini_direct():
    gemini = GeminiGroundedProvider()

    prompt = """
    Research the topic "AI content generation" in the Technology industry for content creators audience. Provide a comprehensive analysis including:

    1. Current trends and insights (2024-2025)
    2. Key statistics and data points with sources
    3. Industry expert opinions and quotes
    4. Recent developments and news
    5. Market analysis and forecasts
    6. Best practices and case studies
    7. Keyword analysis: primary, secondary, and long-tail opportunities
    8. Competitor analysis: top players and content gaps
    9. Content angle suggestions: 5 compelling angles for blog posts

    Focus on factual, up-to-date information from credible sources.
    Include specific data points, percentages, and recent developments.
    Structure your response with clear sections for each analysis area.
    """

    try:
        result = await gemini.generate_grounded_content(
            prompt=prompt,
            content_type="research",
            max_tokens=2000
        )

        print("=== GEMINI RESULT ===")
        print(f"Type: {type(result)}")
        print(f"Keys: {list(result.keys()) if isinstance(result, dict) else 'Not a dict'}")

        if isinstance(result, dict):
            print(f"Sources count: {len(result.get('sources', []))}")
            print(f"Search queries count: {len(result.get('search_queries', []))}")
            print(f"Has search widget: {bool(result.get('search_widget'))}")
            print(f"Content length: {len(result.get('content', ''))}")

            print("\n=== FIRST SOURCE ===")
            sources = result.get('sources', [])
            if sources:
                print(f"Source: {sources[0]}")

            print("\n=== SEARCH QUERIES (First 3) ===")
            queries = result.get('search_queries', [])
            for i, query in enumerate(queries[:3]):
                print(f"{i+1}. {query}")
        else:
            print(f"Result is not a dict: {result}")

    except Exception as e:
        print(f"Error: {e}")
        import traceback
        traceback.print_exc()


if __name__ == "__main__":
    asyncio.run(test_gemini_direct())
495
backend/test/test_grounding_engine.py
Normal file
@@ -0,0 +1,495 @@
"""
Unit tests for GroundingContextEngine.

Tests the enhanced grounding metadata utilization functionality.
"""

import pytest
from typing import List

from models.blog_models import (
    GroundingMetadata,
    GroundingChunk,
    GroundingSupport,
    Citation,
    BlogOutlineSection,
    BlogResearchResponse,
    ResearchSource,
)
from services.blog_writer.outline.grounding_engine import GroundingContextEngine


class TestGroundingContextEngine:
    """Test cases for GroundingContextEngine."""

    def setup_method(self):
        """Set up test fixtures."""
        self.engine = GroundingContextEngine()

        # Create sample grounding chunks
        self.sample_chunks = [
            GroundingChunk(
                title="AI Research Study 2025: Machine Learning Breakthroughs",
                url="https://research.university.edu/ai-study-2025",
                confidence_score=0.95
            ),
            GroundingChunk(
                title="Enterprise AI Implementation Guide",
                url="https://techcorp.com/enterprise-ai-guide",
                confidence_score=0.88
            ),
            GroundingChunk(
                title="Machine Learning Algorithms Explained",
                url="https://blog.datascience.com/ml-algorithms",
                confidence_score=0.82
            ),
            GroundingChunk(
                title="AI Ethics and Responsible Development",
                url="https://ethics.org/ai-responsible-development",
                confidence_score=0.90
            ),
            GroundingChunk(
                title="Personal Opinion on AI Trends",
                url="https://personal-blog.com/ai-opinion",
                confidence_score=0.65
            )
        ]

        # Create sample grounding supports
        self.sample_supports = [
            GroundingSupport(
                confidence_scores=[0.92, 0.89],
                grounding_chunk_indices=[0, 1],
                segment_text="Recent research shows that artificial intelligence is transforming enterprise operations with significant improvements in efficiency and decision-making capabilities.",
                start_index=0,
                end_index=150
            ),
            GroundingSupport(
                confidence_scores=[0.85, 0.78],
                grounding_chunk_indices=[2, 3],
                segment_text="Machine learning algorithms are becoming more sophisticated, enabling better pattern recognition and predictive analytics in business applications.",
                start_index=151,
                end_index=300
            ),
            GroundingSupport(
                confidence_scores=[0.45, 0.52],
                grounding_chunk_indices=[4],
                segment_text="Some people think AI is overhyped and won't deliver on its promises.",
                start_index=301,
                end_index=400
            )
        ]

        # Create sample citations
        self.sample_citations = [
            Citation(
                citation_type="expert_opinion",
                start_index=0,
                end_index=50,
                text="AI research shows significant improvements in enterprise operations",
                source_indices=[0],
                reference="Source 1"
            ),
            Citation(
                citation_type="statistical_data",
                start_index=51,
                end_index=100,
                text="85% of enterprises report improved efficiency with AI implementation",
                source_indices=[1],
                reference="Source 2"
            ),
            Citation(
                citation_type="research_study",
                start_index=101,
                end_index=150,
                text="University study demonstrates 40% increase in decision-making accuracy",
                source_indices=[0],
                reference="Source 1"
            )
        ]

        # Create sample grounding metadata
        self.sample_grounding_metadata = GroundingMetadata(
            grounding_chunks=self.sample_chunks,
            grounding_supports=self.sample_supports,
            citations=self.sample_citations,
            search_entry_point="AI trends and enterprise implementation",
            web_search_queries=[
                "AI trends 2025 enterprise",
                "machine learning business applications",
                "AI implementation best practices"
            ]
        )

        # Create sample outline section
        self.sample_section = BlogOutlineSection(
            id="s1",
            heading="AI Implementation in Enterprise",
            subheadings=["Benefits of AI", "Implementation Challenges", "Best Practices"],
            key_points=["Improved efficiency", "Cost reduction", "Better decision making"],
            references=[],
            target_words=400,
            keywords=["AI", "enterprise", "implementation", "machine learning"]
        )
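
    # The fixtures above deliberately span a range of source quality: chunk
    # confidence scores run from 0.65 (a personal blog) up to 0.95 (a
    # university study), and the supports mix high- and low-confidence
    # segments, so the authority and confidence tests below exercise both
    # ends of the scale.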

    def test_extract_contextual_insights(self):
        """Test extraction of contextual insights from grounding metadata."""
        insights = self.engine.extract_contextual_insights(self.sample_grounding_metadata)

        # Should have all insight categories
        expected_categories = [
            'confidence_analysis', 'authority_analysis', 'temporal_analysis',
            'content_relationships', 'citation_insights', 'search_intent_insights',
            'quality_indicators'
        ]

        for category in expected_categories:
            assert category in insights

        # Test confidence analysis
        confidence_analysis = insights['confidence_analysis']
        assert 'average_confidence' in confidence_analysis
        assert 'high_confidence_count' in confidence_analysis
        assert confidence_analysis['average_confidence'] > 0.0

        # Test authority analysis
        authority_analysis = insights['authority_analysis']
        assert 'average_authority' in authority_analysis
        assert 'high_authority_sources' in authority_analysis
        assert 'authority_distribution' in authority_analysis

    def test_extract_contextual_insights_empty_metadata(self):
        """Test extraction with empty grounding metadata."""
        insights = self.engine.extract_contextual_insights(None)

        # Should return empty insights structure
        assert insights['confidence_analysis']['average_confidence'] == 0.0
        assert insights['authority_analysis']['high_authority_sources'] == 0
        assert insights['temporal_analysis']['recent_content'] == 0

    def test_analyze_confidence_patterns(self):
        """Test confidence pattern analysis."""
        confidence_analysis = self.engine._analyze_confidence_patterns(self.sample_grounding_metadata)

        assert 'average_confidence' in confidence_analysis
        assert 'high_confidence_count' in confidence_analysis
        assert 'confidence_distribution' in confidence_analysis

        # Should have reasonable confidence values
        assert 0.0 <= confidence_analysis['average_confidence'] <= 1.0
        assert confidence_analysis['high_confidence_count'] >= 0

    def test_analyze_source_authority(self):
        """Test source authority analysis."""
        authority_analysis = self.engine._analyze_source_authority(self.sample_grounding_metadata)

        assert 'average_authority' in authority_analysis
        assert 'high_authority_sources' in authority_analysis
        assert 'authority_distribution' in authority_analysis

        # Should have reasonable authority values
        assert 0.0 <= authority_analysis['average_authority'] <= 1.0
        assert authority_analysis['high_authority_sources'] >= 0

    def test_analyze_temporal_relevance(self):
        """Test temporal relevance analysis."""
        temporal_analysis = self.engine._analyze_temporal_relevance(self.sample_grounding_metadata)

        assert 'recent_content' in temporal_analysis
        assert 'trending_topics' in temporal_analysis
        assert 'evergreen_content' in temporal_analysis
        assert 'temporal_balance' in temporal_analysis

        # Should have reasonable temporal values
        assert temporal_analysis['recent_content'] >= 0
        assert temporal_analysis['evergreen_content'] >= 0
        assert temporal_analysis['temporal_balance'] in ['recent_heavy', 'evergreen_heavy', 'balanced', 'unknown']

    def test_analyze_content_relationships(self):
        """Test content relationship analysis."""
        relationships = self.engine._analyze_content_relationships(self.sample_grounding_metadata)

        assert 'related_concepts' in relationships
        assert 'content_gaps' in relationships
        assert 'concept_coverage' in relationships
        assert 'gap_count' in relationships

        # Should have reasonable relationship values
        assert isinstance(relationships['related_concepts'], list)
        assert isinstance(relationships['content_gaps'], list)
        assert relationships['concept_coverage'] >= 0
        assert relationships['gap_count'] >= 0

    def test_analyze_citation_patterns(self):
        """Test citation pattern analysis."""
        citation_analysis = self.engine._analyze_citation_patterns(self.sample_grounding_metadata)

        assert 'citation_types' in citation_analysis
        assert 'total_citations' in citation_analysis
        assert 'citation_density' in citation_analysis
        assert 'citation_quality' in citation_analysis

        # Should have reasonable citation values
        assert citation_analysis['total_citations'] == len(self.sample_citations)
        assert citation_analysis['citation_density'] >= 0.0
        assert 0.0 <= citation_analysis['citation_quality'] <= 1.0

    def test_analyze_search_intent(self):
        """Test search intent analysis."""
        intent_analysis = self.engine._analyze_search_intent(self.sample_grounding_metadata)

        assert 'intent_signals' in intent_analysis
        assert 'user_questions' in intent_analysis
        assert 'primary_intent' in intent_analysis

        # Should have reasonable intent values
        assert isinstance(intent_analysis['intent_signals'], list)
        assert isinstance(intent_analysis['user_questions'], list)
        assert intent_analysis['primary_intent'] in ['informational', 'comparison', 'transactional']

    def test_assess_quality_indicators(self):
        """Test quality indicator assessment."""
        quality_indicators = self.engine._assess_quality_indicators(self.sample_grounding_metadata)

        assert 'overall_quality' in quality_indicators
        assert 'quality_factors' in quality_indicators
        assert 'quality_grade' in quality_indicators

        # Should have reasonable quality values
        assert 0.0 <= quality_indicators['overall_quality'] <= 1.0
        assert isinstance(quality_indicators['quality_factors'], list)
        assert quality_indicators['quality_grade'] in ['A', 'B', 'C', 'D', 'F']

    def test_calculate_chunk_authority(self):
        """Test chunk authority calculation."""
        # Test high authority chunk
        high_authority_chunk = self.sample_chunks[0]  # Research study
        authority_score = self.engine._calculate_chunk_authority(high_authority_chunk)
        assert 0.0 <= authority_score <= 1.0
        assert authority_score > 0.5  # Should be high authority

        # Test low authority chunk
        low_authority_chunk = self.sample_chunks[4]  # Personal opinion
        authority_score = self.engine._calculate_chunk_authority(low_authority_chunk)
        assert 0.0 <= authority_score <= 1.0
        assert authority_score < 0.7  # Should be lower authority

    def test_get_authority_sources(self):
        """Test getting high-authority sources."""
        authority_sources = self.engine.get_authority_sources(self.sample_grounding_metadata)

        # Should return list of tuples
        assert isinstance(authority_sources, list)

        # Each item should be (chunk, score) tuple
        for chunk, score in authority_sources:
            assert isinstance(chunk, GroundingChunk)
            assert isinstance(score, float)
            assert 0.0 <= score <= 1.0

        # Should be sorted by authority score (descending)
        if len(authority_sources) > 1:
            for i in range(len(authority_sources) - 1):
                assert authority_sources[i][1] >= authority_sources[i + 1][1]

    def test_get_high_confidence_insights(self):
        """Test getting high-confidence insights."""
        insights = self.engine.get_high_confidence_insights(self.sample_grounding_metadata)

        # Should return list of insights
        assert isinstance(insights, list)

        # Each insight should be a string
        for insight in insights:
            assert isinstance(insight, str)
            assert len(insight) > 0

    def test_enhance_sections_with_grounding(self):
        """Test section enhancement with grounding insights."""
        sections = [self.sample_section]
        insights = self.engine.extract_contextual_insights(self.sample_grounding_metadata)

        enhanced_sections = self.engine.enhance_sections_with_grounding(
            sections, self.sample_grounding_metadata, insights
        )

        # Should return same number of sections
        assert len(enhanced_sections) == len(sections)

        # Enhanced section should have same basic structure
        enhanced_section = enhanced_sections[0]
        assert enhanced_section.id == self.sample_section.id
        assert enhanced_section.heading == self.sample_section.heading

        # Should have enhanced content
        assert len(enhanced_section.subheadings) >= len(self.sample_section.subheadings)
        assert len(enhanced_section.key_points) >= len(self.sample_section.key_points)
        assert len(enhanced_section.keywords) >= len(self.sample_section.keywords)

    def test_enhance_sections_with_empty_grounding(self):
        """Test section enhancement with empty grounding metadata."""
        sections = [self.sample_section]

        enhanced_sections = self.engine.enhance_sections_with_grounding(
            sections, None, {}
        )

        # Should return original sections unchanged
        assert len(enhanced_sections) == len(sections)
        assert enhanced_sections[0].subheadings == self.sample_section.subheadings
        assert enhanced_sections[0].key_points == self.sample_section.key_points
        assert enhanced_sections[0].keywords == self.sample_section.keywords

    def test_find_relevant_chunks(self):
        """Test finding relevant chunks for a section."""
        relevant_chunks = self.engine._find_relevant_chunks(
            self.sample_section, self.sample_grounding_metadata
        )

        # Should return list of relevant chunks
        assert isinstance(relevant_chunks, list)

        # Each chunk should be a GroundingChunk
        for chunk in relevant_chunks:
            assert isinstance(chunk, GroundingChunk)

    def test_find_relevant_supports(self):
        """Test finding relevant supports for a section."""
        relevant_supports = self.engine._find_relevant_supports(
            self.sample_section, self.sample_grounding_metadata
        )

        # Should return list of relevant supports
        assert isinstance(relevant_supports, list)

        # Each support should be a GroundingSupport
        for support in relevant_supports:
            assert isinstance(support, GroundingSupport)

    def test_extract_insight_from_segment(self):
        """Test insight extraction from segment text."""
        # Test with valid segment
        segment = "This is a comprehensive analysis of AI trends in enterprise applications."
        insight = self.engine._extract_insight_from_segment(segment)
        assert insight == segment

        # Test with short segment
        short_segment = "Short"
        insight = self.engine._extract_insight_from_segment(short_segment)
        assert insight is None

        # Test with long segment
        long_segment = "This is a very long segment that exceeds the maximum length limit and should be truncated appropriately to ensure it fits within the expected constraints and provides comprehensive coverage of the topic while maintaining readability and clarity for the intended audience."
        insight = self.engine._extract_insight_from_segment(long_segment)
        assert insight is not None
        assert len(insight) <= 203  # 200 + "..."
        assert insight.endswith("...")

    def test_get_confidence_distribution(self):
        """Test confidence distribution calculation."""
        confidences = [0.95, 0.88, 0.82, 0.90, 0.65]
        distribution = self.engine._get_confidence_distribution(confidences)

        assert 'high' in distribution
        assert 'medium' in distribution
        assert 'low' in distribution

        # Should have reasonable distribution
        total = distribution['high'] + distribution['medium'] + distribution['low']
        assert total == len(confidences)

    def test_calculate_temporal_balance(self):
        """Test temporal balance calculation."""
        # Test recent heavy
        balance = self.engine._calculate_temporal_balance(8, 2)
        assert balance == 'recent_heavy'

        # Test evergreen heavy
        balance = self.engine._calculate_temporal_balance(2, 8)
        assert balance == 'evergreen_heavy'

        # Test balanced
        balance = self.engine._calculate_temporal_balance(5, 5)
        assert balance == 'balanced'

        # Test empty
        balance = self.engine._calculate_temporal_balance(0, 0)
        assert balance == 'unknown'

    def test_extract_related_concepts(self):
        """Test related concept extraction."""
        text_list = [
            "Artificial Intelligence is transforming Machine Learning applications",
            "Deep Learning algorithms are improving Neural Network performance",
            "Natural Language Processing is advancing AI capabilities"
        ]

        concepts = self.engine._extract_related_concepts(text_list)

        # Should extract capitalized concepts
        assert isinstance(concepts, list)
        assert len(concepts) > 0

        # Should contain expected concepts
        expected_concepts = ['Artificial', 'Intelligence', 'Machine', 'Learning', 'Deep', 'Neural', 'Network']
        for concept in expected_concepts:
            assert concept in concepts

    def test_identify_content_gaps(self):
        """Test content gap identification."""
        text_list = [
            "The research shows significant improvements in AI applications",
            "However, there is a lack of comprehensive studies on AI ethics",
            "The gap in understanding AI bias remains unexplored",
            "Current research does not cover all aspects of AI implementation"
        ]

        gaps = self.engine._identify_content_gaps(text_list)

        # Should identify gaps
        assert isinstance(gaps, list)
        assert len(gaps) > 0

    def test_assess_citation_quality(self):
        """Test citation quality assessment."""
        quality = self.engine._assess_citation_quality(self.sample_citations)

        # Should have reasonable quality score
        assert 0.0 <= quality <= 1.0
        assert quality > 0.0  # Should have some quality

    def test_determine_primary_intent(self):
        """Test primary intent determination."""
        # Test informational intent
        intent = self.engine._determine_primary_intent(['informational', 'informational', 'comparison'])
        assert intent == 'informational'

        # Test empty signals
        intent = self.engine._determine_primary_intent([])
        assert intent == 'informational'

    def test_get_quality_grade(self):
        """Test quality grade calculation."""
        # Test A grade
        grade = self.engine._get_quality_grade(0.95)
        assert grade == 'A'

        # Test B grade
        grade = self.engine._get_quality_grade(0.85)
        assert grade == 'B'

        # Test C grade
        grade = self.engine._get_quality_grade(0.75)
        assert grade == 'C'

        # Test D grade
        grade = self.engine._get_quality_grade(0.65)
        assert grade == 'D'

        # Test F grade
        grade = self.engine._get_quality_grade(0.45)
        assert grade == 'F'


if __name__ == '__main__':
    pytest.main([__file__])
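
# Usage: run directly (python test_grounding_engine.py) or, from the backend
# directory, via pytest for per-test reporting:
#
#   pytest test/test_grounding_engine.py -v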
62
backend/test/test_grounding_flow.py
Normal file
@@ -0,0 +1,62 @@
#!/usr/bin/env python3
"""
Test script to debug the grounding data flow
"""

import asyncio
import sys
import os

# Add the backend directory to the path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from services.linkedin_service import LinkedInService
from models.linkedin_models import LinkedInPostRequest, GroundingLevel


async def test_grounding_flow():
    """Test the grounding data flow"""
    try:
        print("🔍 Testing grounding data flow...")

        # Initialize the service
        service = LinkedInService()
        print("✅ LinkedInService initialized")

        # Create a test request
        request = LinkedInPostRequest(
            topic="AI in healthcare transformation",
            industry="Healthcare",
            grounding_level=GroundingLevel.ENHANCED,
            include_citations=True,
            research_enabled=True,
            search_engine="google",
            max_length=2000
        )
        print("✅ Test request created")

        # Generate post
        print("🚀 Generating LinkedIn post...")
        response = await service.generate_linkedin_post(request)

        if response.success:
            print("✅ Post generated successfully!")
            print(f"📊 Research sources count: {len(response.research_sources) if response.research_sources else 0}")
            print(f"📝 Citations count: {len(response.data.citations) if response.data.citations else 0}")
            print(f"🔗 Source list: {response.data.source_list[:200] if response.data.source_list else 'None'}")

            if response.research_sources:
                print(f"📚 First research source: {response.research_sources[0]}")
                print(f"📚 Research source types: {[type(s) for s in response.research_sources[:3]]}")

            if response.data.citations:
                print(f"📝 First citation: {response.data.citations[0]}")
        else:
            print(f"❌ Post generation failed: {response.error}")

    except Exception as e:
        print(f"❌ Error during test: {str(e)}")
        import traceback
        traceback.print_exc()


if __name__ == "__main__":
    asyncio.run(test_grounding_flow())
228
backend/test/test_grounding_integration.py
Normal file
@@ -0,0 +1,228 @@
"""
Test script for LinkedIn grounding integration.

This script tests the integration of the new grounding services:
- GoogleSearchService
- GeminiGroundedProvider
- CitationManager
- ContentQualityAnalyzer
- Enhanced LinkedInService
"""

import asyncio
import os
from datetime import datetime
from loguru import logger

# Set up environment variables for testing
os.environ.setdefault('GOOGLE_SEARCH_API_KEY', 'test_key')
os.environ.setdefault('GOOGLE_SEARCH_ENGINE_ID', 'test_engine_id')
os.environ.setdefault('GEMINI_API_KEY', 'test_gemini_key')
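# NOTE: the setdefault() calls above install placeholder credentials so that
# service construction succeeds on machines without real keys; any test that
# actually calls Google Search or Gemini will fail against these dummy values
# unless real keys are exported first.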

from services.linkedin_service import LinkedInService
from models.linkedin_models import (
    LinkedInPostRequest, LinkedInArticleRequest, LinkedInCarouselRequest,
    LinkedInVideoScriptRequest, LinkedInCommentResponseRequest,
    GroundingLevel, SearchEngine, LinkedInTone, LinkedInPostType
)


async def test_grounding_integration():
    """Test the complete grounding integration."""
    logger.info("Starting LinkedIn grounding integration test")

    try:
        # Initialize the enhanced LinkedIn service
        linkedin_service = LinkedInService()
        logger.info("LinkedIn service initialized successfully")

        # Test 1: Basic post generation with grounding disabled
        logger.info("\n=== Test 1: Basic Post Generation (No Grounding) ===")
        basic_request = LinkedInPostRequest(
            topic="AI in Marketing",
            industry="Marketing",
            post_type=LinkedInPostType.PROFESSIONAL,
            tone=LinkedInTone.PROFESSIONAL,
            research_enabled=False,
            grounding_level=GroundingLevel.NONE,
            include_citations=False
        )

        basic_response = await linkedin_service.generate_linkedin_post(basic_request)
        logger.info(f"Basic post generation: {'SUCCESS' if basic_response.success else 'FAILED'}")
        if basic_response.success:
            logger.info(f"Content length: {basic_response.data.character_count}")
            logger.info(f"Grounding enabled: {basic_response.data.grounding_enabled}")

        # Test 2: Enhanced post generation with grounding enabled
        logger.info("\n=== Test 2: Enhanced Post Generation (With Grounding) ===")
        enhanced_request = LinkedInPostRequest(
            topic="Digital Transformation in Healthcare",
            industry="Healthcare",
            post_type=LinkedInPostType.THOUGHT_LEADERSHIP,
            tone=LinkedInTone.AUTHORITATIVE,
            research_enabled=True,
            search_engine=SearchEngine.GOOGLE,
            grounding_level=GroundingLevel.ENHANCED,
            include_citations=True,
            max_length=2000
        )

        enhanced_response = await linkedin_service.generate_linkedin_post(enhanced_request)
        logger.info(f"Enhanced post generation: {'SUCCESS' if enhanced_response.success else 'FAILED'}")
        if enhanced_response.success:
            logger.info(f"Content length: {enhanced_response.data.character_count}")
            logger.info(f"Grounding enabled: {enhanced_response.data.grounding_enabled}")
            logger.info(f"Research sources: {len(enhanced_response.research_sources)}")
            logger.info(f"Citations: {len(enhanced_response.data.citations)}")
            if enhanced_response.data.quality_metrics:
                logger.info(f"Quality score: {enhanced_response.data.quality_metrics.overall_score:.2f}")
            if enhanced_response.grounding_status:
                logger.info(f"Grounding status: {enhanced_response.grounding_status['status']}")

        # Test 3: Article generation with grounding
        logger.info("\n=== Test 3: Article Generation (With Grounding) ===")
        article_request = LinkedInArticleRequest(
            topic="Future of Remote Work",
            industry="Technology",
            tone=LinkedInTone.EDUCATIONAL,
            research_enabled=True,
            search_engine=SearchEngine.GOOGLE,
            grounding_level=GroundingLevel.ENHANCED,
            include_citations=True,
            word_count=1500
        )

        article_response = await linkedin_service.generate_linkedin_article(article_request)
        logger.info(f"Article generation: {'SUCCESS' if article_response.success else 'FAILED'}")
        if article_response.success:
            logger.info(f"Word count: {article_response.data.word_count}")
            logger.info(f"Grounding enabled: {article_response.data.grounding_enabled}")
            logger.info(f"Research sources: {len(article_response.research_sources)}")
            logger.info(f"Citations: {len(article_response.data.citations)}")

        # Test 4: Carousel generation with grounding
        logger.info("\n=== Test 4: Carousel Generation (With Grounding) ===")
        carousel_request = LinkedInCarouselRequest(
            topic="Cybersecurity Best Practices",
            industry="Technology",
            tone=LinkedInTone.EDUCATIONAL,
            research_enabled=True,
            search_engine=SearchEngine.GOOGLE,
            grounding_level=GroundingLevel.ENHANCED,
            include_citations=True,
            number_of_slides=5
        )

        carousel_response = await linkedin_service.generate_linkedin_carousel(carousel_request)
        logger.info(f"Carousel generation: {'SUCCESS' if carousel_response.success else 'FAILED'}")
        if carousel_response.success:
            logger.info(f"Number of slides: {len(carousel_response.data.slides)}")
            logger.info(f"Grounding enabled: {carousel_response.data.grounding_enabled}")
            logger.info(f"Research sources: {len(carousel_response.research_sources)}")

        # Test 5: Video script generation with grounding
        logger.info("\n=== Test 5: Video Script Generation (With Grounding) ===")
        video_request = LinkedInVideoScriptRequest(
            topic="AI Ethics in Business",
            industry="Technology",
            tone=LinkedInTone.EDUCATIONAL,
            research_enabled=True,
            search_engine=SearchEngine.GOOGLE,
            grounding_level=GroundingLevel.ENHANCED,
            include_citations=True,
            video_duration=90
        )

        video_response = await linkedin_service.generate_linkedin_video_script(video_request)
        logger.info(f"Video script generation: {'SUCCESS' if video_response.success else 'FAILED'}")
        if video_response.success:
            logger.info(f"Grounding enabled: {video_response.data.grounding_enabled}")
            logger.info(f"Research sources: {len(video_response.research_sources)}")
            logger.info(f"Citations: {len(video_response.data.citations)}")

        # Test 6: Comment response generation
        logger.info("\n=== Test 6: Comment Response Generation ===")
        comment_request = LinkedInCommentResponseRequest(
            original_comment="Great insights on AI implementation!",
            post_context="Post about AI transformation in healthcare",
            industry="Healthcare",
            tone=LinkedInTone.FRIENDLY,
            response_length="medium",
            include_questions=True,
            research_enabled=False,
            grounding_level=GroundingLevel.BASIC
        )

        comment_response = await linkedin_service.generate_linkedin_comment_response(comment_request)
        logger.info(f"Comment response generation: {'SUCCESS' if comment_response.success else 'FAILED'}")
        if comment_response.success:
            logger.info(f"Response length: {len(comment_response.response) if comment_response.response else 0}")
            logger.info(f"Grounding status: {comment_response.grounding_status['status'] if comment_response.grounding_status else 'N/A'}")

        logger.info("\n=== Integration Test Summary ===")
        logger.info("All tests completed successfully!")

    except Exception as e:
        logger.error(f"Integration test failed: {str(e)}")
        raise


async def test_individual_services():
    """Test individual service components."""
    logger.info("\n=== Testing Individual Service Components ===")

    try:
        # Test Google Search Service
        from services.research import GoogleSearchService
        google_search = GoogleSearchService()
        logger.info("GoogleSearchService initialized successfully")

        # Test Citation Manager
        from services.citation import CitationManager
        citation_manager = CitationManager()
        logger.info("CitationManager initialized successfully")

        # Test Content Quality Analyzer
        from services.quality import ContentQualityAnalyzer
        quality_analyzer = ContentQualityAnalyzer()
        logger.info("ContentQualityAnalyzer initialized successfully")

        # Test Gemini Grounded Provider
        from services.llm_providers.gemini_grounded_provider import GeminiGroundedProvider
        gemini_grounded = GeminiGroundedProvider()
        logger.info("GeminiGroundedProvider initialized successfully")

        logger.info("All individual services initialized successfully!")

    except Exception as e:
        logger.error(f"Service component test failed: {str(e)}")
        raise


async def main():
    """Main test function."""
    logger.info("Starting LinkedIn Grounding Integration Tests")
    logger.info(f"Test timestamp: {datetime.now().isoformat()}")

    try:
        # Test individual services first
        await test_individual_services()

        # Test complete integration
        await test_grounding_integration()

        logger.info("\n🎉 All tests completed successfully!")

    except Exception as e:
        logger.error(f"Test suite failed: {str(e)}")
        logger.error("Please check the error details above and ensure all services are properly configured.")
        return 1

    return 0


if __name__ == "__main__":
    # Run the tests
    exit_code = asyncio.run(main())
    exit(exit_code)
134
backend/test/test_hallucination_detector.py
Normal file
@@ -0,0 +1,134 @@
#!/usr/bin/env python3
"""
Test script for the hallucination detector service.

This script tests the hallucination detector functionality
without requiring the full FastAPI server to be running.
"""

import asyncio
import os
import sys
from pathlib import Path

# Add the backend directory to the Python path
backend_dir = Path(__file__).parent
sys.path.insert(0, str(backend_dir))

from services.hallucination_detector import HallucinationDetector


async def test_hallucination_detector():
    """Test the hallucination detector with sample text."""

    print("🧪 Testing Hallucination Detector")
    print("=" * 50)

    # Initialize detector
    detector = HallucinationDetector()

    # Test text with various types of claims
    test_text = """
    The Eiffel Tower is located in Paris, France. It was built in 1889 and stands 330 meters tall.
    The tower was designed by Gustave Eiffel and is one of the most visited monuments in the world.
    Our company increased sales by 25% last quarter and launched three new products.
    The weather today is sunny with a temperature of 22 degrees Celsius.
    """
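    # The sample deliberately mixes claim types: public, checkable facts
    # (the Eiffel Tower), a private business claim (last quarter's sales),
    # and an ephemeral statement (today's weather) that no static source
    # can verify.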

    print(f"📝 Test Text:\n{test_text.strip()}\n")

    try:
        # Test claim extraction
        print("🔍 Testing claim extraction...")
        claims = await detector._extract_claims(test_text)
        print(f"✅ Extracted {len(claims)} claims:")
        for i, claim in enumerate(claims, 1):
            print(f"   {i}. {claim}")
        print()

        # Test full hallucination detection
        print("🔍 Testing full hallucination detection...")
        result = await detector.detect_hallucinations(test_text)

        print(f"✅ Analysis completed:")
        print(f"   Overall Confidence: {result.overall_confidence:.2f}")
        print(f"   Total Claims: {result.total_claims}")
        print(f"   Supported: {result.supported_claims}")
        print(f"   Refuted: {result.refuted_claims}")
        print(f"   Insufficient: {result.insufficient_claims}")
        print()

        # Display individual claims
        print("📊 Individual Claim Analysis:")
        for i, claim in enumerate(result.claims, 1):
            print(f"\n   Claim {i}: {claim.text}")
            print(f"   Assessment: {claim.assessment}")
            print(f"   Confidence: {claim.confidence:.2f}")
            print(f"   Supporting Sources: {len(claim.supporting_sources)}")
            print(f"   Refuting Sources: {len(claim.refuting_sources)}")

            if claim.supporting_sources:
                print("   Supporting Sources:")
                for j, source in enumerate(claim.supporting_sources[:2], 1):  # Show first 2
                    print(f"      {j}. {source.get('title', 'Untitled')} (Score: {source.get('score', 0):.2f})")

            if claim.refuting_sources:
                print("   Refuting Sources:")
                for j, source in enumerate(claim.refuting_sources[:2], 1):  # Show first 2
                    print(f"      {j}. {source.get('title', 'Untitled')} (Score: {source.get('score', 0):.2f})")

        print("\n✅ Test completed successfully!")

    except Exception as e:
        print(f"❌ Test failed with error: {str(e)}")
        import traceback
        traceback.print_exc()


async def test_health_check():
    """Test the health check functionality."""

    print("\n🏥 Testing Health Check")
    print("=" * 30)

    detector = HallucinationDetector()

    # Check API availability
    exa_available = bool(detector.exa_api_key)
    openai_available = bool(detector.openai_api_key)

    print(f"Exa.ai API Available: {'✅' if exa_available else '❌'}")
    print(f"OpenAI API Available: {'✅' if openai_available else '❌'}")

    if not exa_available:
        print("⚠️ Exa.ai API key not found. Set EXA_API_KEY environment variable.")

    if not openai_available:
        print("⚠️ OpenAI API key not found. Set OPENAI_API_KEY environment variable.")

    if exa_available and openai_available:
        print("✅ All APIs are available for full functionality.")
    elif openai_available:
        print("⚠️ Limited functionality available (claim extraction only).")
    else:
        print("❌ No APIs available. Only fallback functionality will work.")


def main():
    """Main test function."""

    print("🚀 Hallucination Detector Test Suite")
    print("=" * 50)

    # Check environment variables
    print("🔧 Environment Check:")
    exa_key = os.getenv('EXA_API_KEY')
    openai_key = os.getenv('OPENAI_API_KEY')

    print(f"EXA_API_KEY: {'✅ Set' if exa_key else '❌ Not set'}")
    print(f"OPENAI_API_KEY: {'✅ Set' if openai_key else '❌ Not set'}")
    print()

    # Run tests
    asyncio.run(test_health_check())
    asyncio.run(test_hallucination_detector())


if __name__ == "__main__":
    main()
95
backend/test/test_image_api.py
Normal file
@@ -0,0 +1,95 @@
#!/usr/bin/env python3
"""
Test script for LinkedIn Image Generation API endpoints
"""

import asyncio
import aiohttp
import json


async def test_image_generation_api():
    """Test the LinkedIn image generation API endpoints"""

    base_url = "http://localhost:8000"

    print("🧪 Testing LinkedIn Image Generation API...")
    print("=" * 50)

    # Test 1: Health Check
    print("\n1️⃣ Testing Health Check...")
    async with aiohttp.ClientSession() as session:
        async with session.get(f"{base_url}/api/linkedin/image-generation-health") as response:
            if response.status == 200:
                health_data = await response.json()
                print(f"✅ Health Check: {health_data['status']}")
                print(f"   Services: {health_data['services']}")
                print(f"   Test Prompts: {health_data['test_prompts_generated']}")
            else:
                print(f"❌ Health Check Failed: {response.status}")
                return

    # Test 2: Generate Image Prompts
    print("\n2️⃣ Testing Image Prompt Generation...")
    prompt_data = {
        "content_type": "post",
        "topic": "AI in Marketing",
        "industry": "Technology",
        "content": "This is a test LinkedIn post about AI in marketing. It demonstrates the image generation capabilities."
    }

    async with aiohttp.ClientSession() as session:
        async with session.post(
            f"{base_url}/api/linkedin/generate-image-prompts",
            json=prompt_data
        ) as response:
            if response.status == 200:
                prompts = await response.json()
                print(f"✅ Generated {len(prompts)} image prompts:")
                for i, prompt in enumerate(prompts, 1):
                    print(f"   {i}. {prompt['style']}: {prompt['description']}")
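                # Test 3 runs inside Test 2's response handler so it can
                # reuse the same ClientSession and the freshly generated
                # prompts.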
# Test 3: Generate Image from First Prompt
|
||||
print("\n3️⃣ Testing Image Generation...")
|
||||
image_data = {
|
||||
"prompt": prompts[0]['prompt'],
|
||||
"content_context": {
|
||||
"topic": prompt_data["topic"],
|
||||
"industry": prompt_data["industry"],
|
||||
"content_type": prompt_data["content_type"],
|
||||
"content": prompt_data["content"],
|
||||
"style": prompts[0]['style']
|
||||
},
|
||||
"aspect_ratio": "1:1"
|
||||
}
|
||||
|
||||
async with session.post(
|
||||
f"{base_url}/api/linkedin/generate-image",
|
||||
json=image_data
|
||||
) as img_response:
|
||||
if img_response.status == 200:
|
||||
result = await img_response.json()
|
||||
if result.get('success'):
|
||||
print(f"✅ Image Generated Successfully!")
|
||||
print(f" Image ID: {result.get('image_id')}")
|
||||
print(f" Style: {result.get('style')}")
|
||||
print(f" Aspect Ratio: {result.get('aspect_ratio')}")
|
||||
else:
|
||||
print(f"❌ Image Generation Failed: {result.get('error')}")
|
||||
else:
|
||||
print(f"❌ Image Generation Request Failed: {img_response.status}")
|
||||
error_text = await img_response.text()
|
||||
print(f" Error: {error_text}")
|
||||
else:
|
||||
print(f"❌ Prompt Generation Failed: {response.status}")
|
||||
error_text = await response.text()
|
||||
print(f" Error: {error_text}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
print("🚀 Starting LinkedIn Image Generation API Tests...")
|
||||
try:
|
||||
asyncio.run(test_image_generation_api())
|
||||
print("\n🎉 All tests completed!")
|
||||
except Exception as e:
|
||||
print(f"\n💥 Test failed with error: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
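When you only need to confirm the backend is up before running the full async suite, the same health endpoint can be hit synchronously. A minimal sketch, assuming `requests` is installed and the server is listening on localhost:8000:

```python
# Synchronous smoke check against the same health endpoint the test uses.
import requests

resp = requests.get(
    "http://localhost:8000/api/linkedin/image-generation-health", timeout=10
)
print(resp.status_code, resp.json() if resp.ok else resp.text)
```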
139
backend/test/test_imports.py
Normal file
@@ -0,0 +1,139 @@
#!/usr/bin/env python3
"""
Simple test script to verify import issues are fixed.

This script tests that all the required services can be imported and initialized
without import errors.

Usage:
    python test_imports.py
"""

import sys
import os
from pathlib import Path

# Add the backend directory to the Python path
backend_dir = Path(__file__).parent
sys.path.insert(0, str(backend_dir))


def test_imports():
    """Test that all required modules can be imported."""
    print("🧪 Testing Imports...")

    try:
        print("📦 Testing LinkedIn Models...")
        from models.linkedin_models import (
            LinkedInPostRequest, LinkedInPostResponse, PostContent, ResearchSource,
            LinkedInArticleRequest, LinkedInArticleResponse, ArticleContent,
            LinkedInCarouselRequest, LinkedInCarouselResponse, CarouselContent, CarouselSlide,
            LinkedInVideoScriptRequest, LinkedInVideoScriptResponse, VideoScript,
            LinkedInCommentResponseRequest, LinkedInCommentResponseResult,
            HashtagSuggestion, ImageSuggestion, Citation, ContentQualityMetrics,
            GroundingLevel
        )
        print("✅ LinkedIn Models imported successfully")
    except Exception as e:
        print(f"❌ LinkedIn Models import failed: {e}")
        return False

    try:
        print("📦 Testing Research Service...")
        from services.research import GoogleSearchService
        print("✅ Research Service imported successfully")
    except Exception as e:
        print(f"❌ Research Service import failed: {e}")
        return False

    try:
        print("📦 Testing Citation Service...")
        from services.citation import CitationManager
        print("✅ Citation Service imported successfully")
    except Exception as e:
        print(f"❌ Citation Service import failed: {e}")
        return False

    try:
        print("📦 Testing Quality Service...")
        from services.quality import ContentQualityAnalyzer
        print("✅ Quality Service imported successfully")
    except Exception as e:
        print(f"❌ Quality Service import failed: {e}")
        return False

    try:
        print("📦 Testing LLM Providers...")
        from services.llm_providers.gemini_provider import gemini_structured_json_response, gemini_text_response
        print("✅ LLM Providers imported successfully")
    except Exception as e:
        print(f"❌ LLM Providers import failed: {e}")
        return False

    try:
        print("📦 Testing Gemini Grounded Provider...")
        from services.llm_providers.gemini_grounded_provider import GeminiGroundedProvider
        print("✅ Gemini Grounded Provider imported successfully")
    except Exception as e:
        print(f"❌ Gemini Grounded Provider import failed: {e}")
        return False

    try:
        print("📦 Testing LinkedIn Service...")
        from services.linkedin_service import LinkedInService
        print("✅ LinkedIn Service imported successfully")
    except Exception as e:
        print(f"❌ LinkedIn Service import failed: {e}")
        return False

    print("\n🎉 All imports successful!")
    return True


def test_service_initialization():
    """Test that services can be initialized without errors."""
    print("\n🔧 Testing Service Initialization...")

    try:
        print("📦 Initializing LinkedIn Service...")
        from services.linkedin_service import LinkedInService
        service = LinkedInService()
        print("✅ LinkedIn Service initialized successfully")

        # Check which services are available
        print(f"  - Google Search: {'✅' if service.google_search else '❌'}")
        print(f"  - Gemini Grounded: {'✅' if service.gemini_grounded else '❌'}")
        print(f"  - Citation Manager: {'✅' if service.citation_manager else '❌'}")
        print(f"  - Quality Analyzer: {'✅' if service.quality_analyzer else '❌'}")
        print(f"  - Fallback Provider: {'✅' if service.fallback_provider else '❌'}")

        return True
    except Exception as e:
        print(f"❌ LinkedIn Service initialization failed: {e}")
        return False


def main():
    """Main test function."""
    print("🚀 Starting Import Tests")
    print("=" * 50)

    # Test imports
    import_success = test_imports()

    if import_success:
        # Test service initialization
        init_success = test_service_initialization()

        if init_success:
            print("\n🎉 SUCCESS: All tests passed!")
            print("✅ Import issues have been resolved")
            print("✅ Services can be initialized")
            print("✅ Ready for testing native grounding")
        else:
            print("\n⚠️ PARTIAL SUCCESS: Imports work but initialization failed")
            print("💡 This may be due to missing dependencies or configuration")
    else:
        print("\n❌ FAILURE: Import tests failed")
        print("💡 There are still import issues to resolve")
        sys.exit(1)


if __name__ == "__main__":
    main()
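The per-module try/except blocks above can also be collapsed into a data-driven loop, which makes adding a new module a one-line change. A sketch using `importlib`; the module list mirrors the imports the script tests:

```python
import importlib

# Same modules the script checks, expressed as data.
MODULES = [
    "models.linkedin_models",
    "services.research",
    "services.citation",
    "services.quality",
    "services.llm_providers.gemini_provider",
    "services.llm_providers.gemini_grounded_provider",
    "services.linkedin_service",
]

def check_imports(modules=MODULES) -> dict:
    """Return a mapping of module name -> error message for failed imports."""
    failures = {}
    for name in modules:
        try:
            importlib.import_module(name)
        except Exception as exc:
            failures[name] = str(exc)
    return failures
```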
341
backend/test/test_linkedin_endpoints.py
Normal file
@@ -0,0 +1,341 @@
"""
Test script for LinkedIn content generation endpoints.

This script tests the LinkedIn content generation functionality
to ensure proper integration and validation.
"""

import asyncio
import json
import time
from typing import Dict, Any
import sys
import os

# Add the backend directory to Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from models.linkedin_models import (
    LinkedInPostRequest, LinkedInArticleRequest, LinkedInCarouselRequest,
    LinkedInVideoScriptRequest, LinkedInCommentResponseRequest
)
from services.linkedin_service import linkedin_service
from loguru import logger

# Configure logger
logger.remove()
logger.add(sys.stdout, level="INFO", format="<level>{level}</level> | {message}")


async def test_post_generation():
    """Test LinkedIn post generation."""
    logger.info("🧪 Testing LinkedIn Post Generation")

    try:
        request = LinkedInPostRequest(
            topic="Artificial Intelligence in Healthcare",
            industry="Healthcare",
            post_type="thought_leadership",
            tone="professional",
            target_audience="Healthcare executives and AI professionals",
            key_points=["AI diagnostics", "Patient outcomes", "Cost reduction", "Implementation challenges"],
            include_hashtags=True,
            include_call_to_action=True,
            research_enabled=True,
            search_engine="metaphor",
            max_length=2000
        )

        start_time = time.time()
        response = await linkedin_service.generate_post(request)
        duration = time.time() - start_time

        logger.info(f"✅ Post generation completed in {duration:.2f} seconds")
        logger.info(f"Success: {response.success}")

        if response.success and response.data:
            logger.info(f"Content length: {response.data.character_count} characters")
            logger.info(f"Hashtags generated: {len(response.data.hashtags)}")
            logger.info(f"Call-to-action: {response.data.call_to_action is not None}")
            logger.info(f"Research sources: {len(response.research_sources)}")

            # Preview content (first 200 chars)
            content_preview = response.data.content[:200] + "..." if len(response.data.content) > 200 else response.data.content
            logger.info(f"Content preview: {content_preview}")
        else:
            logger.error(f"Post generation failed: {response.error}")

        return response.success

    except Exception as e:
        logger.error(f"❌ Error testing post generation: {str(e)}")
        return False


async def test_article_generation():
    """Test LinkedIn article generation."""
    logger.info("🧪 Testing LinkedIn Article Generation")

    try:
        request = LinkedInArticleRequest(
            topic="Digital Transformation in Manufacturing",
            industry="Manufacturing",
            tone="professional",
            target_audience="Manufacturing leaders and technology professionals",
            key_sections=["Current challenges", "Technology solutions", "Implementation strategies", "Future outlook"],
            include_images=True,
            seo_optimization=True,
            research_enabled=True,
            search_engine="metaphor",
            word_count=1500
        )

        start_time = time.time()
        response = await linkedin_service.generate_article(request)
        duration = time.time() - start_time

        logger.info(f"✅ Article generation completed in {duration:.2f} seconds")
        logger.info(f"Success: {response.success}")

        if response.success and response.data:
            logger.info(f"Word count: {response.data.word_count}")
            logger.info(f"Sections: {len(response.data.sections)}")
            logger.info(f"Reading time: {response.data.reading_time} minutes")
            logger.info(f"Image suggestions: {len(response.data.image_suggestions)}")
            logger.info(f"SEO metadata: {response.data.seo_metadata is not None}")
            logger.info(f"Research sources: {len(response.research_sources)}")

            # Preview title
            logger.info(f"Article title: {response.data.title}")
        else:
            logger.error(f"Article generation failed: {response.error}")

        return response.success

    except Exception as e:
        logger.error(f"❌ Error testing article generation: {str(e)}")
        return False


async def test_carousel_generation():
    """Test LinkedIn carousel generation."""
    logger.info("🧪 Testing LinkedIn Carousel Generation")

    try:
        request = LinkedInCarouselRequest(
            topic="5 Ways to Improve Team Productivity",
            industry="Business Management",
            slide_count=8,
            tone="professional",
            target_audience="Team leaders and managers",
            key_takeaways=["Clear communication", "Goal setting", "Tool optimization", "Regular feedback", "Work-life balance"],
            include_cover_slide=True,
            include_cta_slide=True,
            visual_style="modern"
        )

        start_time = time.time()
        response = await linkedin_service.generate_carousel(request)
        duration = time.time() - start_time

        logger.info(f"✅ Carousel generation completed in {duration:.2f} seconds")
        logger.info(f"Success: {response.success}")

        if response.success and response.data:
            logger.info(f"Slide count: {len(response.data.slides)}")
            logger.info(f"Carousel title: {response.data.title}")
            logger.info(f"Design guidelines: {bool(response.data.design_guidelines)}")

            # Preview first slide
            if response.data.slides:
                first_slide = response.data.slides[0]
                logger.info(f"First slide title: {first_slide.title}")
        else:
            logger.error(f"Carousel generation failed: {response.error}")

        return response.success

    except Exception as e:
        logger.error(f"❌ Error testing carousel generation: {str(e)}")
        return False


async def test_video_script_generation():
    """Test LinkedIn video script generation."""
    logger.info("🧪 Testing LinkedIn Video Script Generation")

    try:
        request = LinkedInVideoScriptRequest(
            topic="Quick tips for remote team management",
            industry="Human Resources",
            video_length=90,
            tone="conversational",
            target_audience="Remote team managers",
            key_messages=["Communication tools", "Regular check-ins", "Team building", "Performance tracking"],
            include_hook=True,
            include_captions=True
        )

        start_time = time.time()
        response = await linkedin_service.generate_video_script(request)
        duration = time.time() - start_time

        logger.info(f"✅ Video script generation completed in {duration:.2f} seconds")
        logger.info(f"Success: {response.success}")

        if response.success and response.data:
            logger.info(f"Hook: {bool(response.data.hook)}")
            logger.info(f"Main content scenes: {len(response.data.main_content)}")
            logger.info(f"Conclusion: {bool(response.data.conclusion)}")
            logger.info(f"Thumbnail suggestions: {len(response.data.thumbnail_suggestions)}")
            logger.info(f"Captions: {bool(response.data.captions)}")

            # Preview hook
            if response.data.hook:
                hook_preview = response.data.hook[:100] + "..." if len(response.data.hook) > 100 else response.data.hook
                logger.info(f"Hook preview: {hook_preview}")
        else:
            logger.error(f"Video script generation failed: {response.error}")

        return response.success

    except Exception as e:
        logger.error(f"❌ Error testing video script generation: {str(e)}")
        return False


async def test_comment_response_generation():
    """Test LinkedIn comment response generation."""
    logger.info("🧪 Testing LinkedIn Comment Response Generation")

    try:
        request = LinkedInCommentResponseRequest(
            original_post="Just published an article about AI transformation in healthcare. The potential for improving patient outcomes while reducing costs is incredible. Healthcare leaders need to start preparing for this shift now.",
            comment="Great insights! How do you see this affecting smaller healthcare providers who might not have the resources for large AI implementations?",
            response_type="value_add",
            tone="professional",
            include_question=True,
            brand_voice="Expert but approachable, data-driven and helpful"
        )

        start_time = time.time()
        response = await linkedin_service.generate_comment_response(request)
        duration = time.time() - start_time

        logger.info(f"✅ Comment response generation completed in {duration:.2f} seconds")
        logger.info(f"Success: {response.success}")

        if response.success and response.response:
            logger.info(f"Primary response length: {len(response.response)} characters")
            logger.info(f"Alternative responses: {len(response.alternative_responses)}")
            logger.info(f"Tone analysis: {bool(response.tone_analysis)}")

            # Preview response
            response_preview = response.response[:150] + "..." if len(response.response) > 150 else response.response
            logger.info(f"Response preview: {response_preview}")

            if response.tone_analysis:
                logger.info(f"Detected sentiment: {response.tone_analysis.get('sentiment', 'unknown')}")
        else:
            logger.error(f"Comment response generation failed: {response.error}")

        return response.success

    except Exception as e:
        logger.error(f"❌ Error testing comment response generation: {str(e)}")
        return False


async def test_error_handling():
    """Test error handling with invalid requests."""
    logger.info("🧪 Testing Error Handling")

    try:
        # Test with empty topic
        request = LinkedInPostRequest(
            topic="",  # Empty topic should trigger validation error
            industry="Technology",
        )

        response = await linkedin_service.generate_post(request)

        # Should still handle gracefully
        if not response.success:
            logger.info("✅ Error handling working correctly for invalid input")
            return True
        else:
            logger.warning("⚠️ Expected error handling but got successful response")
            return False

    except Exception as e:
        logger.error(f"❌ Error in error handling test: {str(e)}")
        return False


async def run_all_tests():
    """Run all LinkedIn content generation tests."""
    logger.info("🚀 Starting LinkedIn Content Generation Tests")
    logger.info("=" * 60)

    test_results = {}

    # Run individual tests
    test_results["post_generation"] = await test_post_generation()
    logger.info("-" * 40)

    test_results["article_generation"] = await test_article_generation()
    logger.info("-" * 40)

    test_results["carousel_generation"] = await test_carousel_generation()
    logger.info("-" * 40)

    test_results["video_script_generation"] = await test_video_script_generation()
    logger.info("-" * 40)

    test_results["comment_response_generation"] = await test_comment_response_generation()
    logger.info("-" * 40)

    test_results["error_handling"] = await test_error_handling()
    logger.info("-" * 40)

    # Summary
    logger.info("📊 Test Results Summary")
    logger.info("=" * 60)

    passed = sum(test_results.values())
    total = len(test_results)

    for test_name, result in test_results.items():
        status = "✅ PASSED" if result else "❌ FAILED"
        logger.info(f"{test_name}: {status}")

    logger.info(f"\nOverall: {passed}/{total} tests passed ({(passed/total)*100:.1f}%)")

    if passed == total:
        logger.info("🎉 All tests passed! LinkedIn content generation is working correctly.")
    else:
        logger.warning(f"⚠️ {total - passed} test(s) failed. Please check the implementation.")

    return passed == total


if __name__ == "__main__":
    # Run the tests
    success = asyncio.run(run_all_tests())

    if success:
        logger.info("\n✅ LinkedIn content generation migration completed successfully!")
        logger.info("The FastAPI endpoints are ready for use.")
    else:
        logger.error("\n❌ Some tests failed. Please review the implementation.")

    # Print API endpoint information
    logger.info("\n📡 Available LinkedIn Content Generation Endpoints:")
    logger.info("- POST /api/linkedin/generate-post")
    logger.info("- POST /api/linkedin/generate-article")
    logger.info("- POST /api/linkedin/generate-carousel")
    logger.info("- POST /api/linkedin/generate-video-script")
    logger.info("- POST /api/linkedin/generate-comment-response")
    logger.info("- GET /api/linkedin/health")
    logger.info("- GET /api/linkedin/content-types")
    logger.info("- GET /api/linkedin/usage-stats")
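The same post-generation flow can also be exercised over HTTP against the first endpoint listed above, which is useful for verifying routing independently of the service layer. A sketch with aiohttp; the JSON field names mirror LinkedInPostRequest and are assumed to match the endpoint's request schema:

```python
import asyncio
import aiohttp

async def smoke_generate_post():
    # Field names follow LinkedInPostRequest; treated here as an assumption
    # about the endpoint's JSON schema.
    payload = {
        "topic": "Artificial Intelligence in Healthcare",
        "industry": "Healthcare",
        "tone": "professional",
        "research_enabled": False,  # keep the smoke test cheap
        "max_length": 500,
    }
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://localhost:8000/api/linkedin/generate-post", json=payload
        ) as resp:
            print(resp.status, await resp.text())

if __name__ == "__main__":
    asyncio.run(smoke_generate_post())
```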
191
backend/test/test_linkedin_image_infrastructure.py
Normal file
@@ -0,0 +1,191 @@
#!/usr/bin/env python3
"""
Test Script for LinkedIn Image Generation Infrastructure

This script tests the basic functionality of the LinkedIn image generation services
to ensure they are properly initialized and can perform basic operations.
"""

import asyncio
import sys
import os
from pathlib import Path

# Add the backend directory to the Python path
backend_path = Path(__file__).parent
sys.path.insert(0, str(backend_path))

from loguru import logger

# Configure logging
logger.remove()
logger.add(sys.stdout, colorize=True, format="<level>{level}</level>| {message}")


async def test_linkedin_image_infrastructure():
    """Test the LinkedIn image generation infrastructure."""

    logger.info("🧪 Testing LinkedIn Image Generation Infrastructure")
    logger.info("=" * 60)

    try:
        # Test 1: Import LinkedIn Image Services
        logger.info("📦 Test 1: Importing LinkedIn Image Services...")

        from services.linkedin.image_generation import (
            LinkedInImageGenerator,
            LinkedInImageEditor,
            LinkedInImageStorage
        )
        from services.linkedin.image_prompts import LinkedInPromptGenerator

        logger.success("✅ All LinkedIn image services imported successfully")

        # Test 2: Initialize Services
        logger.info("🔧 Test 2: Initializing LinkedIn Image Services...")

        # Initialize services (without API keys for testing)
        image_generator = LinkedInImageGenerator()
        image_editor = LinkedInImageEditor()
        image_storage = LinkedInImageStorage()
        prompt_generator = LinkedInPromptGenerator()

        logger.success("✅ All LinkedIn image services initialized successfully")

        # Test 3: Test Prompt Generation (without API calls)
        logger.info("📝 Test 3: Testing Prompt Generation Logic...")

        # Test content context
        test_content = {
            'topic': 'AI in Marketing',
            'industry': 'Technology',
            'content_type': 'post',
            'content': 'Exploring how artificial intelligence is transforming modern marketing strategies.'
        }

        # Test fallback prompt generation
        fallback_prompts = prompt_generator._get_fallback_prompts(test_content, "1:1")

        if len(fallback_prompts) == 3:
            logger.success(f"✅ Fallback prompt generation working: {len(fallback_prompts)} prompts created")

            for i, prompt in enumerate(fallback_prompts):
                logger.info(f"  Prompt {i+1}: {prompt['style']} - {prompt['description']}")
        else:
            logger.error(f"❌ Fallback prompt generation failed: expected 3, got {len(fallback_prompts)}")

        # Test 4: Test Image Storage Directory Creation
        logger.info("📁 Test 4: Testing Image Storage Directory Creation...")

        # Check if storage directories were created
        storage_path = image_storage.base_storage_path
        if storage_path.exists():
            logger.success(f"✅ Storage base directory created: {storage_path}")

            # Check subdirectories
            for subdir in ['images', 'metadata', 'temp']:
                subdir_path = storage_path / subdir
                if subdir_path.exists():
                    logger.info(f"  ✅ {subdir} directory exists: {subdir_path}")
                else:
                    logger.warning(f"  ⚠️ {subdir} directory missing: {subdir_path}")
        else:
            logger.error(f"❌ Storage base directory not created: {storage_path}")

        # Test 5: Test Service Methods
        logger.info("⚙️ Test 5: Testing Service Method Signatures...")

        # Test image generator methods
        if hasattr(image_generator, 'generate_image'):
            logger.success("✅ LinkedInImageGenerator.generate_image method exists")
        else:
            logger.error("❌ LinkedInImageGenerator.generate_image method missing")

        if hasattr(image_editor, 'edit_image_conversationally'):
            logger.success("✅ LinkedInImageEditor.edit_image_conversationally method exists")
        else:
            logger.error("❌ LinkedInImageEditor.edit_image_conversationally method missing")

        if hasattr(image_storage, 'store_image'):
            logger.success("✅ LinkedInImageStorage.store_image method exists")
        else:
            logger.error("❌ LinkedInImageStorage.store_image method missing")

        if hasattr(prompt_generator, 'generate_three_prompts'):
            logger.success("✅ LinkedInPromptGenerator.generate_three_prompts method exists")
        else:
            logger.error("❌ LinkedInPromptGenerator.generate_three_prompts method missing")

        # Test 6: Test Prompt Enhancement
        logger.info("🎨 Test 6: Testing Prompt Enhancement Logic...")

        test_prompt = {
            'style': 'Professional',
            'prompt': 'Create a business image',
            'description': 'Professional style'
        }

        enhanced_prompt = prompt_generator._enhance_prompt_for_linkedin(
            test_prompt, test_content, "1:1", 0
        )

        if enhanced_prompt and 'enhanced_at' in enhanced_prompt:
            logger.success("✅ Prompt enhancement working")
            logger.info(f"  Enhanced prompt length: {len(enhanced_prompt['prompt'])} characters")
        else:
            logger.error("❌ Prompt enhancement failed")

        # Test 7: Test Image Validation Logic
        logger.info("🔍 Test 7: Testing Image Validation Logic...")

        # Test aspect ratio validation
        valid_ratios = [(1024, 1024), (1600, 900), (1200, 1600)]
        invalid_ratios = [(500, 500), (2000, 500)]

        for width, height in valid_ratios:
            if image_generator._is_aspect_ratio_suitable(width, height):
                logger.info(f"  ✅ Valid ratio {width}:{height} correctly identified")
            else:
                logger.warning(f"  ⚠️ Valid ratio {width}:{height} incorrectly rejected")

        for width, height in invalid_ratios:
            if not image_generator._is_aspect_ratio_suitable(width, height):
                logger.info(f"  ✅ Invalid ratio {width}:{height} correctly rejected")
            else:
                logger.warning(f"  ⚠️ Invalid ratio {width}:{height} incorrectly accepted")

        logger.info("=" * 60)
        logger.success("🎉 LinkedIn Image Generation Infrastructure Test Completed Successfully!")

        return True

    except ImportError as e:
        logger.error(f"❌ Import Error: {e}")
        logger.error("This usually means there's an issue with the module structure or dependencies")
        return False

    except Exception as e:
        logger.error(f"❌ Test Failed: {e}")
        logger.error(f"Error type: {type(e).__name__}")
        import traceback
        logger.error(f"Traceback: {traceback.format_exc()}")
        return False


async def main():
    """Main test function."""
    logger.info("🚀 Starting LinkedIn Image Generation Infrastructure Tests")

    success = await test_linkedin_image_infrastructure()

    if success:
        logger.success("✅ All tests passed! The infrastructure is ready for use.")
        sys.exit(0)
    else:
        logger.error("❌ Some tests failed. Please check the errors above.")
        sys.exit(1)


if __name__ == "__main__":
    # Run the async test
    asyncio.run(main())
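For reference, Test 7's aspect-ratio expectations can be captured in a standalone predicate. A sketch only; the thresholds below are assumptions inferred from the test's valid (1:1, 16:9, 3:4) and invalid ((500, 500), (2000, 500)) cases, not the actual LinkedInImageGenerator logic:

```python
def is_aspect_ratio_suitable(width: int, height: int,
                             min_side: int = 900,
                             allowed=((1, 1), (16, 9), (3, 4))) -> bool:
    """Hypothetical re-implementation of the behaviour the test checks."""
    if min(width, height) < min_side:  # rejects (500, 500) and (2000, 500)
        return False
    # Accept if within 5% of any allowed ratio
    # (covers 1024x1024, 1600x900, 1200x1600).
    return any(abs(width * b - height * a) / (height * a) < 0.05
               for a, b in allowed)
```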
271
backend/test/test_linkedin_keyword_fix.py
Normal file
@@ -0,0 +1,271 @@
#!/usr/bin/env python3
"""
Test Script for LinkedIn Content Generation Keyword Fix

This script tests the fixed keyword processing by calling the LinkedIn content generation
endpoint directly and capturing detailed logs to analyze API usage patterns.
"""

import asyncio
import json
import time
import logging
from datetime import datetime
from typing import Dict, Any
import sys
import os

# Add the backend directory to the Python path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

# Configure detailed logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler(f'test_linkedin_keyword_fix_{datetime.now().strftime("%Y%m%d_%H%M%S")}.log'),
        logging.StreamHandler(sys.stdout)
    ]
)

logger = logging.getLogger(__name__)

# Import the LinkedIn service
from services.linkedin_service import LinkedInService
from models.linkedin_models import LinkedInPostRequest, LinkedInPostType, LinkedInTone, GroundingLevel, SearchEngine


class LinkedInKeywordTest:
    """Test class for LinkedIn keyword processing fix."""

    def __init__(self):
        self.linkedin_service = LinkedInService()
        self.test_results = []
        self.api_call_count = 0
        self.start_time = None

    def log_api_call(self, endpoint: str, duration: float, success: bool):
        """Log API call details."""
        self.api_call_count += 1
        logger.info(f"API Call #{self.api_call_count}: {endpoint} - Duration: {duration:.2f}s - Success: {success}")

    async def test_keyword_phrase(self, phrase: str, test_name: str) -> Dict[str, Any]:
        """Test a specific keyword phrase."""
        logger.info(f"\n{'='*60}")
        logger.info(f"TESTING: {test_name}")
        logger.info(f"KEYWORD PHRASE: '{phrase}'")
        logger.info(f"{'='*60}")

        test_start = time.time()

        try:
            # Create the request
            request = LinkedInPostRequest(
                topic=phrase,
                industry="Technology",
                post_type=LinkedInPostType.PROFESSIONAL,
                tone=LinkedInTone.PROFESSIONAL,
                grounding_level=GroundingLevel.ENHANCED,
                search_engine=SearchEngine.GOOGLE,
                research_enabled=True,
                include_citations=True,
                max_length=1000
            )

            logger.info(f"Request created: {request.topic}")
            logger.info(f"Research enabled: {request.research_enabled}")
            logger.info(f"Search engine: {request.search_engine}")
            logger.info(f"Grounding level: {request.grounding_level}")

            # Call the LinkedIn service
            logger.info("Calling LinkedIn service...")
            response = await self.linkedin_service.generate_linkedin_post(request)

            test_duration = time.time() - test_start
            self.log_api_call("LinkedIn Post Generation", test_duration, response.success)

            # Analyze the response
            result = {
                "test_name": test_name,
                "keyword_phrase": phrase,
                "success": response.success,
                "duration": test_duration,
                "api_calls": self.api_call_count,
                "error": response.error if not response.success else None,
                "content_length": len(response.data.content) if response.success and response.data else 0,
                "sources_count": len(response.research_sources) if response.success and response.research_sources else 0,
                "citations_count": len(response.data.citations) if response.success and response.data and response.data.citations else 0,
                "grounding_status": response.grounding_status if response.success else None,
                "generation_metadata": response.generation_metadata if response.success else None
            }

            if response.success:
                logger.info(f"✅ SUCCESS: Generated {result['content_length']} characters")
                logger.info(f"📊 Sources: {result['sources_count']}, Citations: {result['citations_count']}")
                logger.info(f"⏱️ Total duration: {test_duration:.2f}s")
                logger.info(f"🔢 API calls made: {self.api_call_count}")

                # Log content preview
                if response.data and response.data.content:
                    content_preview = response.data.content[:200] + "..." if len(response.data.content) > 200 else response.data.content
                    logger.info(f"📝 Content preview: {content_preview}")

                # Log grounding status
                if response.grounding_status:
                    logger.info(f"🔍 Grounding status: {response.grounding_status}")

            else:
                logger.error(f"❌ FAILED: {response.error}")

            return result

        except Exception as e:
            test_duration = time.time() - test_start
            logger.error(f"❌ EXCEPTION in {test_name}: {str(e)}")
            self.log_api_call("LinkedIn Post Generation", test_duration, False)

            return {
                "test_name": test_name,
                "keyword_phrase": phrase,
                "success": False,
                "duration": test_duration,
                "api_calls": self.api_call_count,
                "error": str(e),
                "content_length": 0,
                "sources_count": 0,
                "citations_count": 0,
                "grounding_status": None,
                "generation_metadata": None
            }

    async def run_comprehensive_test(self):
        """Run comprehensive tests for keyword processing."""
        logger.info("🚀 Starting LinkedIn Keyword Processing Test Suite")
        logger.info(f"Test started at: {datetime.now()}")

        self.start_time = time.time()

        # Test cases
        test_cases = [
            {
                "phrase": "ALwrity content generation",
                "name": "Single Phrase Test (Should be preserved as-is)"
            },
            {
                "phrase": "AI tools, content creation, marketing automation",
                "name": "Comma-Separated Test (Should be split by commas)"
            },
            {
                "phrase": "LinkedIn content strategy",
                "name": "Another Single Phrase Test"
            },
            {
                "phrase": "social media, digital marketing, brand awareness",
                "name": "Another Comma-Separated Test"
            }
        ]

        # Run all tests
        for test_case in test_cases:
            result = await self.test_keyword_phrase(
                test_case["phrase"],
                test_case["name"]
            )
            self.test_results.append(result)

            # Reset API call counter for next test
            self.api_call_count = 0

            # Small delay between tests
            await asyncio.sleep(2)

        # Generate summary report
        self.generate_summary_report()

    def generate_summary_report(self):
        """Generate a comprehensive summary report."""
        total_time = time.time() - self.start_time

        logger.info(f"\n{'='*80}")
        logger.info("📊 COMPREHENSIVE TEST SUMMARY REPORT")
        logger.info(f"{'='*80}")

        logger.info(f"🕐 Total test duration: {total_time:.2f} seconds")
        logger.info(f"🧪 Total tests run: {len(self.test_results)}")

        successful_tests = [r for r in self.test_results if r["success"]]
        failed_tests = [r for r in self.test_results if not r["success"]]

        logger.info(f"✅ Successful tests: {len(successful_tests)}")
        logger.info(f"❌ Failed tests: {len(failed_tests)}")

        if successful_tests:
            avg_duration = sum(r["duration"] for r in successful_tests) / len(successful_tests)
            avg_content_length = sum(r["content_length"] for r in successful_tests) / len(successful_tests)
            avg_sources = sum(r["sources_count"] for r in successful_tests) / len(successful_tests)
            avg_citations = sum(r["citations_count"] for r in successful_tests) / len(successful_tests)

            logger.info(f"📈 Average generation time: {avg_duration:.2f}s")
            logger.info(f"📝 Average content length: {avg_content_length:.0f} characters")
            logger.info(f"🔍 Average sources found: {avg_sources:.1f}")
            logger.info(f"📚 Average citations: {avg_citations:.1f}")

        # Detailed results
        logger.info(f"\n📋 DETAILED TEST RESULTS:")
        for i, result in enumerate(self.test_results, 1):
            status = "✅ PASS" if result["success"] else "❌ FAIL"
            logger.info(f"{i}. {status} - {result['test_name']}")
            logger.info(f"   Phrase: '{result['keyword_phrase']}'")
            logger.info(f"   Duration: {result['duration']:.2f}s")
            if result["success"]:
                logger.info(f"   Content: {result['content_length']} chars, Sources: {result['sources_count']}, Citations: {result['citations_count']}")
            else:
                logger.info(f"   Error: {result['error']}")

        # API Usage Analysis
        logger.info(f"\n🔍 API USAGE ANALYSIS:")
        total_api_calls = sum(r["api_calls"] for r in self.test_results)
        logger.info(f"Total API calls across all tests: {total_api_calls}")

        if successful_tests:
            avg_api_calls = sum(r["api_calls"] for r in successful_tests) / len(successful_tests)
            logger.info(f"Average API calls per successful test: {avg_api_calls:.1f}")

        # Save detailed results to JSON file
        report_data = {
            "test_summary": {
                "total_duration": total_time,
                "total_tests": len(self.test_results),
                "successful_tests": len(successful_tests),
                "failed_tests": len(failed_tests),
                "total_api_calls": total_api_calls
            },
            "test_results": self.test_results,
            "timestamp": datetime.now().isoformat()
        }

        report_filename = f"linkedin_keyword_test_report_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
        with open(report_filename, 'w') as f:
            json.dump(report_data, f, indent=2, default=str)

        logger.info(f"📄 Detailed report saved to: {report_filename}")
        logger.info(f"{'='*80}")


async def main():
    """Main test execution function."""
    try:
        test_suite = LinkedInKeywordTest()
        await test_suite.run_comprehensive_test()

    except Exception as e:
        logger.error(f"❌ Test suite failed: {str(e)}")
        raise


if __name__ == "__main__":
    print("🚀 Starting LinkedIn Keyword Processing Test Suite")
    print("This will test the keyword fix and analyze API usage patterns...")
    print("=" * 60)

    asyncio.run(main())
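The contract this suite verifies can be stated in a few lines: a topic phrase is preserved as-is unless it contains commas, in which case it is split on them. A sketch of that rule (the real implementation lives in the LinkedIn service; this only illustrates the expected behaviour named in the test cases):

```python
def split_keywords(topic: str) -> list:
    """Hypothetical illustration of the keyword rule under test."""
    if "," in topic:
        return [part.strip() for part in topic.split(",") if part.strip()]
    return [topic.strip()]

assert split_keywords("ALwrity content generation") == ["ALwrity content generation"]
assert split_keywords("AI tools, content creation, marketing automation") == [
    "AI tools", "content creation", "marketing automation",
]
```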
105
backend/test/test_linkedin_service.py
Normal file
@@ -0,0 +1,105 @@
#!/usr/bin/env python3
"""
Test script for LinkedIn service functionality.

This script tests that the LinkedIn service can be initialized and
basic functionality works without errors.

Usage:
    python test_linkedin_service.py
"""

import asyncio
import sys
import os
from pathlib import Path

# Add the backend directory to the Python path
backend_dir = Path(__file__).parent
sys.path.insert(0, str(backend_dir))

from loguru import logger
from models.linkedin_models import LinkedInPostRequest, GroundingLevel
from services.linkedin_service import LinkedInService


async def test_linkedin_service():
    """Test the LinkedIn service functionality."""
    try:
        logger.info("🧪 Testing LinkedIn Service Functionality")

        # Initialize the service
        logger.info("📦 Initializing LinkedIn Service...")
        service = LinkedInService()
        logger.info("✅ LinkedIn Service initialized successfully")

        # Create a test request
        test_request = LinkedInPostRequest(
            topic="AI in Marketing",
            industry="Technology",
            tone="professional",
            max_length=500,
            target_audience="Marketing professionals",
            key_points=["AI automation", "Personalization", "ROI improvement"],
            research_enabled=True,
            search_engine="google",
            grounding_level=GroundingLevel.BASIC,
            include_citations=True
        )

        logger.info("📝 Testing LinkedIn Post Generation...")

        # Test post generation
        response = await service.generate_linkedin_post(test_request)

        if response.success:
            logger.info("✅ LinkedIn post generation successful")
            logger.info(f"📊 Content length: {len(response.data.content)} characters")
            logger.info(f"🔗 Sources: {len(response.research_sources)}")
            logger.info(f"📚 Citations: {len(response.data.citations)}")
            logger.info(f"🏆 Quality score: {response.data.quality_metrics.overall_score if response.data.quality_metrics else 'N/A'}")

            # Display a snippet of the generated content
            content_preview = response.data.content[:200] + "..." if len(response.data.content) > 200 else response.data.content
            logger.info(f"📄 Content preview: {content_preview}")

        else:
            logger.error(f"❌ LinkedIn post generation failed: {response.error}")
            return False

        logger.info("🎉 LinkedIn service test completed successfully!")
        return True

    except Exception as e:
        logger.error(f"❌ LinkedIn service test failed: {str(e)}")
        return False


async def main():
    """Main test function."""
    logger.info("🚀 Starting LinkedIn Service Test")
    logger.info("=" * 50)

    success = await test_linkedin_service()

    if success:
        logger.info("\n🎉 SUCCESS: LinkedIn service is working correctly!")
        logger.info("✅ Service initialization successful")
        logger.info("✅ Post generation working")
        logger.info("✅ Ready for production use")
    else:
        logger.error("\n❌ FAILURE: LinkedIn service test failed")
        sys.exit(1)


if __name__ == "__main__":
    # Configure logging
    logger.remove()
    logger.add(
        sys.stderr,
        format="<green>{time:YYYY-MM-DD HH:mm:ss}</green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>",
        level="INFO"
    )

    # Run the test
    asyncio.run(main())
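The `content[:200] + "..."` preview expression recurs in several of these test scripts; if they are ever consolidated, a tiny helper keeps the pattern in one place. A sketch:

```python
def preview(text: str, limit: int = 200) -> str:
    """Truncate text for log output, matching the inline pattern used above."""
    return text if len(text) <= limit else text[:limit] + "..."
```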
239
backend/test/test_native_grounding.py
Normal file
@@ -0,0 +1,239 @@
#!/usr/bin/env python3
"""
Test script for native Google Search grounding implementation.

This script tests the new GeminiGroundedProvider that uses native Google Search
grounding instead of a custom search implementation.

Usage:
    python test_native_grounding.py
"""

import asyncio
import os
import sys
from pathlib import Path

# Add the backend directory to the Python path
backend_dir = Path(__file__).parent
sys.path.insert(0, str(backend_dir))

from loguru import logger
from services.llm_providers.gemini_grounded_provider import GeminiGroundedProvider


async def test_native_grounding():
    """Test the native Google Search grounding functionality."""
    try:
        logger.info("🧪 Testing Native Google Search Grounding")

        # Check if GEMINI_API_KEY is set
        if not os.getenv('GEMINI_API_KEY'):
            logger.error("❌ GEMINI_API_KEY environment variable not set")
            logger.info("Please set GEMINI_API_KEY to test native grounding")
            return False

        # Initialize the grounded provider
        logger.info("🔧 Initializing Gemini Grounded Provider...")
        provider = GeminiGroundedProvider()
        logger.info("✅ Provider initialized successfully")

        # Test 1: Basic grounded content generation
        logger.info("\n📝 Test 1: Basic LinkedIn Post Generation")
        test_prompt = "Write a professional LinkedIn post about the latest AI trends in 2025"

        result = await provider.generate_grounded_content(
            prompt=test_prompt,
            content_type="linkedin_post",
            temperature=0.7,
            max_tokens=500
        )

        if result and 'content' in result:
            logger.info("✅ Content generated successfully")
            logger.info(f"📊 Content length: {len(result['content'])} characters")
            logger.info(f"🔗 Sources found: {len(result.get('sources', []))}")
            logger.info(f"📚 Citations found: {len(result.get('citations', []))}")

            # Display the generated content
            logger.info("\n📄 Generated Content:")
            logger.info("-" * 50)
            logger.info(result['content'][:500] + "..." if len(result['content']) > 500 else result['content'])
            logger.info("-" * 50)

            # Display sources if available
            if result.get('sources'):
                logger.info("\n🔗 Sources:")
                for i, source in enumerate(result['sources']):
                    logger.info(f"  {i+1}. {source.get('title', 'Unknown')}")
                    logger.info(f"     URL: {source.get('url', 'N/A')}")

            # Display search queries if available
            if result.get('search_queries'):
                logger.info(f"\n🔍 Search Queries Used: {result['search_queries']}")

            # Display grounding metadata info
            if result.get('grounding_metadata'):
                logger.info("✅ Grounding metadata found")
            else:
                logger.warning("⚠️ No grounding metadata found")

        else:
            logger.error("❌ Content generation failed")
            # Guard against a None result before checking for an error message
            if result and 'error' in result:
                logger.error(f"Error: {result['error']}")
            return False

        # Test 2: Article generation
        logger.info("\n📝 Test 2: LinkedIn Article Generation")
        article_prompt = "Create a comprehensive article about sustainable business practices in tech companies"

        article_result = await provider.generate_grounded_content(
            prompt=article_prompt,
            content_type="linkedin_article",
            temperature=0.7,
            max_tokens=1000
        )

        if article_result and 'content' in article_result:
            logger.info("✅ Article generated successfully")
            logger.info(f"📊 Article length: {len(article_result['content'])} characters")
            logger.info(f"🔗 Sources: {len(article_result.get('sources', []))}")

            # Check for article-specific processing
            if 'title' in article_result:
                logger.info(f"📰 Article title: {article_result['title']}")
            if 'word_count' in article_result:
                logger.info(f"📊 Word count: {article_result['word_count']}")

        else:
            logger.error("❌ Article generation failed")
            return False

        # Test 3: Content quality assessment
        logger.info("\n📝 Test 3: Content Quality Assessment")
        if result.get('content') and result.get('sources'):
            quality_metrics = provider.assess_content_quality(
                content=result['content'],
                sources=result['sources']
            )

            logger.info("✅ Quality assessment completed")
            logger.info(f"📊 Overall score: {quality_metrics.get('overall_score', 'N/A')}")
            logger.info(f"🔗 Source coverage: {quality_metrics.get('source_coverage', 'N/A')}")
            logger.info(f"🎯 Tone score: {quality_metrics.get('tone_score', 'N/A')}")
            logger.info(f"📝 Word count: {quality_metrics.get('word_count', 'N/A')}")
            logger.info(f"🏆 Quality level: {quality_metrics.get('quality_level', 'N/A')}")

        # Test 4: Citation extraction
        logger.info("\n📝 Test 4: Citation Extraction")
        if result.get('content'):
            citations = provider.extract_citations(result['content'])
            logger.info(f"✅ Extracted {len(citations)} citations")

            for i, citation in enumerate(citations):
                logger.info(f"  Citation {i+1}: {citation.get('reference', 'Unknown')}")

        logger.info("\n🎉 All tests completed successfully!")
        return True

    except ImportError as e:
        logger.error(f"❌ Import error: {str(e)}")
        logger.info("💡 Make sure to install required dependencies:")
        logger.info("   pip install google-genai loguru")
        return False

    except Exception as e:
        logger.error(f"❌ Test failed with error: {str(e)}")
        return False


async def test_individual_components():
    """Test individual components of the native grounding system."""
    try:
        logger.info("🔧 Testing Individual Components")

        # Test 1: Provider initialization
        logger.info("\n📋 Test 1: Provider Initialization")
        if not os.getenv('GEMINI_API_KEY'):
            logger.warning("⚠️ Skipping provider test - no API key")
            return False

        provider = GeminiGroundedProvider()
        logger.info("✅ Provider initialized successfully")

        # Test 2: Prompt building
        logger.info("\n📋 Test 2: Prompt Building")
        test_prompt = "Test prompt for LinkedIn post"
        grounded_prompt = provider._build_grounded_prompt(test_prompt, "linkedin_post")

        if grounded_prompt and len(grounded_prompt) > len(test_prompt):
            logger.info("✅ Grounded prompt built successfully")
            logger.info(f"📊 Original length: {len(test_prompt)}")
            logger.info(f"📊 Enhanced length: {len(grounded_prompt)}")
        else:
            logger.error("❌ Prompt building failed")
            return False

        # Test 3: Content processing methods
        logger.info("\n📋 Test 3: Content Processing Methods")

        # Test post processing
        test_content = "This is a test LinkedIn post #AI #Technology"
        post_processing = provider._process_post_content(test_content)
        if post_processing:
            logger.info("✅ Post processing works")
            logger.info(f"🔖 Hashtags found: {len(post_processing.get('hashtags', []))}")

        # Test article processing
        test_article = "# Test Article\n\nThis is test content for an article."
        article_processing = provider._process_article_content(test_article)
        if article_processing:
            logger.info("✅ Article processing works")
            logger.info(f"📊 Word count: {article_processing.get('word_count', 'N/A')}")

        logger.info("✅ All component tests passed")
        return True

    except Exception as e:
        logger.error(f"❌ Component test failed: {str(e)}")
        return False


async def main():
    """Main test function."""
    logger.info("🚀 Starting Native Grounding Tests")
    logger.info("=" * 60)

    # Test individual components first
    component_success = await test_individual_components()

    if component_success:
        # Test the full integration
        integration_success = await test_native_grounding()

        if integration_success:
            logger.info("\n🎉 SUCCESS: All tests passed!")
            logger.info("✅ Native Google Search grounding is working correctly")
            logger.info("✅ Gemini API integration successful")
            logger.info("✅ Grounding metadata processing working")
            logger.info("✅ Content generation with sources successful")
        else:
            logger.error("\n❌ FAILURE: Integration tests failed")
            sys.exit(1)
    else:
        logger.error("\n❌ FAILURE: Component tests failed")
        sys.exit(1)


if __name__ == "__main__":
    # Configure logging
    logger.remove()
    logger.add(
        sys.stderr,
        format="<green>{time:YYYY-MM-DD HH:mm:ss}</green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>",
        level="INFO"
    )

    # Run the tests
    asyncio.run(main())
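Because the component tests build prompts without issuing grounded generation calls, they can be run on their own while iterating. A minimal sketch, assuming GEMINI_API_KEY is exported (provider initialization still requires it) and that you run it from backend/test so the module is importable:

```python
import asyncio
from test_native_grounding import test_individual_components

asyncio.run(test_individual_components())
```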
106
backend/test/test_progress_endpoint.py
Normal file
@@ -0,0 +1,106 @@
#!/usr/bin/env python3
"""
Test script to verify the calendar generation progress endpoint.
This script tests the progress endpoint to ensure it returns the correct data structure.
"""

import asyncio
import sys
import os
import json
from datetime import datetime

# Add the current directory to the Python path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

def test_progress_endpoint():
    """Test the progress endpoint with a mock session."""
    try:
        from api.content_planning.services.calendar_generation_service import CalendarGenerationService

        print("🧪 Testing Progress Endpoint")
        print("=" * 50)

        # Initialize service
        service = CalendarGenerationService()

        # Create a test session
        session_id = f"test-session-{int(datetime.now().timestamp())}"
        test_request_data = {
            "user_id": 1,
            "strategy_id": 1,
            "calendar_type": "monthly",
            "industry": "technology",
            "business_size": "sme"
        }

        print(f"📋 Creating test session: {session_id}")

        # Initialize session
        success = service.initialize_orchestrator_session(session_id, test_request_data)
        if not success:
            print("❌ Failed to initialize session")
            return False

        print("✅ Session initialized successfully")

        # Test progress retrieval
        print(f"🔍 Testing progress retrieval for session: {session_id}")
        progress = service.get_orchestrator_progress(session_id)

        if not progress:
            print("❌ No progress data returned")
            return False

        print("✅ Progress data retrieved successfully")
        print(f"📊 Progress data structure:")
        print(json.dumps(progress, indent=2, default=str))

        # Verify required fields
        required_fields = [
            "status", "current_step", "step_progress", "overall_progress",
            "step_results", "quality_scores", "errors", "warnings"
        ]

        missing_fields = [field for field in required_fields if field not in progress]
        if missing_fields:
            print(f"❌ Missing required fields: {missing_fields}")
            return False

        print("✅ All required fields present")

        # Test data types
        if not isinstance(progress["current_step"], int):
            print("❌ current_step should be int")
            return False

        if not isinstance(progress["step_progress"], (int, float)):
            print("❌ step_progress should be numeric")
            return False

        if not isinstance(progress["overall_progress"], (int, float)):
            print("❌ overall_progress should be numeric")
            return False

        print("✅ All data types correct")

        # Test quality scores
        quality_scores = progress["quality_scores"]
        if not isinstance(quality_scores, dict):
            print("❌ quality_scores should be dict")
            return False

        print("✅ Quality scores structure correct")

        print("\n🎉 All tests passed! Progress endpoint is working correctly.")
        return True

    except Exception as e:
        print(f"❌ Test failed with error: {str(e)}")
        import traceback
        print(f"📋 Traceback: {traceback.format_exc()}")
        return False

if __name__ == "__main__":
    success = test_progress_endpoint()
    sys.exit(0 if success else 1)
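The field and type checks above can be expressed as a single schema table, which keeps the endpoint contract in one place. A sketch; the types for status, step_results, errors, and warnings are assumptions, since the test only type-checks the numeric fields and quality_scores:

```python
PROGRESS_SCHEMA = {
    "status": str,            # assumed
    "current_step": int,
    "step_progress": (int, float),
    "overall_progress": (int, float),
    "step_results": dict,     # assumed
    "quality_scores": dict,
    "errors": list,           # assumed
    "warnings": list,         # assumed
}

def validate_progress(progress: dict) -> list:
    """Return a list of problems; empty means the payload matches the schema."""
    problems = [f"missing: {k}" for k in PROGRESS_SCHEMA if k not in progress]
    problems += [
        f"wrong type: {k}"
        for k, t in PROGRESS_SCHEMA.items()
        if k in progress and not isinstance(progress[k], t)
    ]
    return problems
```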
304
backend/test/test_real_database_integration.py
Normal file
@@ -0,0 +1,304 @@
#!/usr/bin/env python3
"""
Real Database Integration Test for Steps 1-8

This script tests the calendar generation framework with real database services,
replacing all mock data with actual database operations.
"""

import asyncio
import sys
import os
from typing import Dict, Any
from loguru import logger

# Add the backend directory to the path
backend_dir = os.path.dirname(os.path.abspath(__file__))
if backend_dir not in sys.path:
    sys.path.insert(0, backend_dir)

# Add the services directory to the path
services_dir = os.path.join(backend_dir, "services")
if services_dir not in sys.path:
    sys.path.insert(0, services_dir)

async def test_real_database_integration():
    """Test Steps 1-8 with real database services."""

    try:
        logger.info("🚀 Starting real database integration test")

        # Initialize database
        logger.info("🗄️ Initializing database connection")
        from services.database import init_database, get_db_session

        try:
            init_database()
            logger.info("✅ Database initialized successfully")
        except Exception as e:
            logger.error(f"❌ Database initialization failed: {str(e)}")
            return False

        # Get database session
        db_session = get_db_session()
        if not db_session:
            logger.error("❌ Failed to get database session")
            return False

        logger.info("✅ Database session created successfully")

        # Test data
        test_context = {
            "user_id": 1,
            "strategy_id": 1,
            "calendar_duration": 7,
            "posting_preferences": {
                "posting_frequency": "daily",
                "preferred_days": ["monday", "wednesday", "friday"],
                "preferred_times": ["09:00", "12:00", "15:00"],
                "content_per_day": 2
            }
        }

        # Create test data in database
        logger.info("📝 Creating test data in database")
        await create_test_data(db_session, test_context)

        # Test Step 1: Content Strategy Analysis with Real Database
        logger.info("📋 Testing Step 1: Content Strategy Analysis (Real Database)")
        try:
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase1.phase1_steps import ContentStrategyAnalysisStep
            from services.calendar_generation_datasource_framework.data_processing.strategy_data import StrategyDataProcessor
            from services.content_planning_db import ContentPlanningDBService

            # Create real database service
            content_planning_db = ContentPlanningDBService(db_session)

            # Create strategy processor with real database service
            strategy_processor = StrategyDataProcessor()
            strategy_processor.content_planning_db_service = content_planning_db

            step1 = ContentStrategyAnalysisStep()
            step1.strategy_processor = strategy_processor

            result1 = await step1.execute(test_context)
            logger.info(f"✅ Step 1 completed: {result1.get('status')}")
            logger.info(f"   Quality Score: {result1.get('quality_score')}")

        except Exception as e:
            logger.error(f"❌ Step 1 failed: {str(e)}")
            return False

        # Test Step 2: Gap Analysis with Real Database
        logger.info("📋 Testing Step 2: Gap Analysis (Real Database)")
        try:
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase1.phase1_steps import GapAnalysisStep
            from services.calendar_generation_datasource_framework.data_processing.gap_analysis_data import GapAnalysisDataProcessor

            # Create gap processor with real database service
            gap_processor = GapAnalysisDataProcessor()
            gap_processor.content_planning_db_service = content_planning_db

            step2 = GapAnalysisStep()
            step2.gap_processor = gap_processor

            result2 = await step2.execute(test_context)
            logger.info(f"✅ Step 2 completed: {result2.get('status')}")
            logger.info(f"   Quality Score: {result2.get('quality_score')}")

        except Exception as e:
            logger.error(f"❌ Step 2 failed: {str(e)}")
            return False

        # Test Step 3: Audience & Platform Strategy with Real Database
        logger.info("📋 Testing Step 3: Audience & Platform Strategy (Real Database)")
        try:
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase1.phase1_steps import AudiencePlatformStrategyStep
            from services.calendar_generation_datasource_framework.data_processing.comprehensive_user_data import ComprehensiveUserDataProcessor

            # Create comprehensive processor with real database service
            comprehensive_processor = ComprehensiveUserDataProcessor(db_session)
            comprehensive_processor.content_planning_db_service = content_planning_db

            step3 = AudiencePlatformStrategyStep()
            step3.comprehensive_processor = comprehensive_processor

            result3 = await step3.execute(test_context)
            logger.info(f"✅ Step 3 completed: {result3.get('status')}")
            logger.info(f"   Quality Score: {result3.get('quality_score')}")

        except Exception as e:
            logger.error(f"❌ Step 3 failed: {str(e)}")
            return False

        # Test Steps 4-8 with Real Database
        logger.info("📋 Testing Steps 4-8: Calendar Framework to Daily Content Planning (Real Database)")
        try:
            # Test Step 4: Calendar Framework
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase2.step4_implementation import CalendarFrameworkStep
            step4 = CalendarFrameworkStep()
            result4 = await step4.execute(test_context)
            logger.info(f"✅ Step 4 completed: {result4.get('status')}")

            # Test Step 5: Content Pillar Distribution
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase2.step5_implementation import ContentPillarDistributionStep
            step5 = ContentPillarDistributionStep()
            result5 = await step5.execute(test_context)
            logger.info(f"✅ Step 5 completed: {result5.get('status')}")

            # Test Step 6: Platform-Specific Strategy
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase2.step6_implementation import PlatformSpecificStrategyStep
            step6 = PlatformSpecificStrategyStep()
            result6 = await step6.execute(test_context)
            logger.info(f"✅ Step 6 completed: {result6.get('status')}")

            # Test Step 7: Weekly Theme Development
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase3.step7_implementation import WeeklyThemeDevelopmentStep
            step7 = WeeklyThemeDevelopmentStep()
            result7 = await step7.execute(test_context)
            logger.info(f"✅ Step 7 completed: {result7.get('status')}")

            # Test Step 8: Daily Content Planning
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase3.step8_implementation import DailyContentPlanningStep
            step8 = DailyContentPlanningStep()
            result8 = await step8.execute(test_context)
            logger.info(f"✅ Step 8 completed: {result8.get('status')}")

        except Exception as e:
            logger.error(f"❌ Steps 4-8 failed: {str(e)}")
            return False

        # Clean up test data
        logger.info("🧹 Cleaning up test data")
        await cleanup_test_data(db_session, test_context)

        # Close database session
        db_session.close()

        logger.info("🎉 All Steps 1-8 completed successfully with real database!")
        logger.info("📝 Real database integration working perfectly!")
        return True

    except Exception as e:
        logger.error(f"❌ Test failed with error: {str(e)}")
        return False

async def create_test_data(db_session, test_context: Dict[str, Any]):
    """Create test data in the database."""
    try:
        from services.content_planning_db import ContentPlanningDBService
        from models.content_planning import ContentStrategy, ContentGapAnalysis

        db_service = ContentPlanningDBService(db_session)

        # Create test content strategy
        strategy_data = {
            "user_id": test_context["user_id"],
            "name": "Test Strategy - Real Database",
            "industry": "technology",
            "target_audience": {
                "primary": "Tech professionals",
                "secondary": "Business leaders",
                "demographics": {"age_range": "25-45", "location": "Global"}
            },
            "content_pillars": [
                "AI and Machine Learning",
                "Digital Transformation",
                "Innovation and Technology Trends",
                "Business Strategy and Growth"
            ],
            "ai_recommendations": {
                "strategic_insights": [
                    "Focus on deep-dive technical content",
                    "Emphasize practical implementation guides",
                    "Highlight ROI and business impact"
                ]
            }
        }

        strategy = await db_service.create_content_strategy(strategy_data)
        if strategy:
            logger.info(f"✅ Created test strategy: {strategy.id}")
            test_context["strategy_id"] = strategy.id

        # Create test gap analysis
        gap_data = {
            "user_id": test_context["user_id"],
            "website_url": "https://example.com",
            "competitor_urls": ["https://competitor1.com", "https://competitor2.com"],
            "target_keywords": [
                {"keyword": "AI ethics in business", "search_volume": 5000, "competition": "low"},
                {"keyword": "digital transformation ROI", "search_volume": 8000, "competition": "medium"}
            ],
            "analysis_results": {
                "content_gaps": [
                    {"topic": "AI Ethics", "priority": "high", "impact_score": 0.9},
                    {"topic": "Digital Transformation ROI", "priority": "medium", "impact_score": 0.7}
                ],
                "keyword_opportunities": [
                    {"keyword": "AI ethics in business", "search_volume": 5000, "competition": "low"},
                    {"keyword": "digital transformation ROI", "search_volume": 8000, "competition": "medium"}
                ],
                "competitor_insights": [
                    {"competitor": "Competitor A", "strength": "Technical content", "weakness": "Practical guides"},
                    {"competitor": "Competitor B", "strength": "Case studies", "weakness": "AI focus"}
                ],
                "opportunities": [
                    {"type": "content", "topic": "AI Ethics", "priority": "high"},
                    {"type": "content", "topic": "ROI Analysis", "priority": "medium"}
                ]
            },
            "recommendations": [
                "Create comprehensive AI ethics guide",
                "Develop ROI calculator for digital transformation"
            ],
            "opportunities": [
                {"type": "content", "topic": "AI Ethics", "priority": "high"},
                {"type": "content", "topic": "ROI Analysis", "priority": "medium"}
            ]
        }

        gap_analysis = await db_service.create_content_gap_analysis(gap_data)
        if gap_analysis:
            logger.info(f"✅ Created test gap analysis: {gap_analysis.id}")

        logger.info("✅ Test data created successfully")

    except Exception as e:
        logger.error(f"❌ Error creating test data: {str(e)}")
        raise

async def cleanup_test_data(db_session, test_context: Dict[str, Any]):
    """Clean up test data from the database."""
    try:
        from services.content_planning_db import ContentPlanningDBService

        db_service = ContentPlanningDBService(db_session)

        # Clean up gap analysis (get by user_id and delete)
        gap_analyses = await db_service.get_user_content_gap_analyses(test_context["user_id"])
        for gap_analysis in gap_analyses:
            await db_service.delete_content_gap_analysis(gap_analysis.id)

        # Clean up strategy
        await db_service.delete_content_strategy(test_context["strategy_id"])

        logger.info("✅ Test data cleaned up successfully")

    except Exception as e:
        logger.error(f"❌ Error cleaning up test data: {str(e)}")

if __name__ == "__main__":
    # Configure logging
    logger.remove()
    logger.add(sys.stderr, level="INFO", format="<green>{time:HH:mm:ss}</green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>")

    # Run the test
    success = asyncio.run(test_real_database_integration())

    if success:
        logger.info("✅ Real database integration test completed successfully!")
        sys.exit(0)
    else:
        logger.error("❌ Real database integration test failed!")
        sys.exit(1)
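The script above reports success via its return value rather than assertions, so it can also be collected by pytest through a thin wrapper. A minimal sketch, assuming pytest and pytest-asyncio are available in the test environment:

```python
# Hypothetical pytest wrapper around the script's entry point.
import pytest
from test_real_database_integration import test_real_database_integration as run_steps_1_to_8

@pytest.mark.asyncio
async def test_steps_1_to_8_real_db():
    # The script returns True on success instead of asserting, so assert here.
    assert await run_steps_1_to_8() is True
```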
58
backend/test/test_research.py
Normal file
@@ -0,0 +1,58 @@
import requests
import json

# Test the research endpoint
url = "http://localhost:8000/api/blog/research"
payload = {
    "keywords": ["AI content generation", "blog writing"],
    "topic": "ALwrity content generation",
    "industry": "Technology",
    "target_audience": "content creators"
}

try:
    response = requests.post(url, json=payload)
    print(f"Status Code: {response.status_code}")
    print(f"Response Headers: {dict(response.headers)}")

    if response.status_code == 200:
        data = response.json()
        print("\n=== RESEARCH RESPONSE ===")
        print(f"Success: {data.get('success')}")
        print(f"Sources Count: {len(data.get('sources', []))}")
        print(f"Search Queries Count: {len(data.get('search_queries', []))}")
        print(f"Has Search Widget: {bool(data.get('search_widget'))}")
        print(f"Suggested Angles Count: {len(data.get('suggested_angles', []))}")

        print("\n=== SOURCES ===")
        for i, source in enumerate(data.get('sources', [])[:3]):
            print(f"Source {i+1}: {source.get('title', 'No title')}")
            print(f"  URL: {source.get('url', 'No URL')}")
            print(f"  Type: {source.get('type', 'Unknown')}")

        print("\n=== SEARCH QUERIES (First 5) ===")
        for i, query in enumerate(data.get('search_queries', [])[:5]):
            print(f"{i+1}. {query}")

        print("\n=== SUGGESTED ANGLES ===")
        for i, angle in enumerate(data.get('suggested_angles', [])[:3]):
            print(f"{i+1}. {angle}")

        print("\n=== KEYWORD ANALYSIS ===")
        kw_analysis = data.get('keyword_analysis', {})
        print(f"Primary: {kw_analysis.get('primary', [])}")
        print(f"Secondary: {kw_analysis.get('secondary', [])}")
        print(f"Search Intent: {kw_analysis.get('search_intent', 'Unknown')}")

        print("\n=== SEARCH WIDGET (First 200 chars) ===")
        widget = data.get('search_widget', '')
        if widget:
            print(widget[:200] + "..." if len(widget) > 200 else widget)
        else:
            print("No search widget provided")

    else:
        print(f"Error: {response.text}")

except Exception as e:
    print(f"Request failed: {e}")
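The POST above carries no timeout, so a stalled research pipeline will hang the script indefinitely; the companion analysis script below passes `timeout=120`. A minimal hardened variant of the same request:

```python
# Sketch: the same request with an explicit timeout and HTTP error handling.
import requests

url = "http://localhost:8000/api/blog/research"
payload = {"keywords": ["AI content generation", "blog writing"], "topic": "ALwrity content generation"}

try:
    response = requests.post(url, json=payload, timeout=120)
    response.raise_for_status()  # surface 4xx/5xx responses as exceptions
    print(response.json().get("success"))
except requests.Timeout:
    print("Research endpoint timed out after 120s")
except requests.RequestException as e:
    print(f"Request failed: {e}")
```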
115
backend/test/test_research_analysis.py
Normal file
@@ -0,0 +1,115 @@
import requests
import json
from datetime import datetime

# Test the research endpoint and capture full response
url = "http://localhost:8000/api/blog/research"
payload = {
    "keywords": ["AI content generation", "blog writing"],
    "topic": "ALwrity content generation",
    "industry": "Technology",
    "target_audience": "content creators"
}

try:
    print("Sending request to research endpoint...")
    response = requests.post(url, json=payload, timeout=120)
    print(f"Status Code: {response.status_code}")

    if response.status_code == 200:
        data = response.json()

        # Create analysis file with timestamp
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"research_analysis_{timestamp}.json"

        # Save full response to file
        with open(filename, 'w', encoding='utf-8') as f:
            json.dump(data, f, indent=2, ensure_ascii=False)

        print(f"\n=== RESEARCH RESPONSE ANALYSIS ===")
        print(f"✅ Full response saved to: {filename}")
        print(f"Success: {data.get('success')}")
        print(f"Sources Count: {len(data.get('sources', []))}")
        print(f"Search Queries Count: {len(data.get('search_queries', []))}")
        print(f"Has Search Widget: {bool(data.get('search_widget'))}")
        print(f"Suggested Angles Count: {len(data.get('suggested_angles', []))}")

        print(f"\n=== SOURCES ANALYSIS ===")
        sources = data.get('sources', [])
        for i, source in enumerate(sources[:5]):  # Show first 5
            print(f"Source {i+1}: {source.get('title', 'No title')}")
            print(f"  URL: {source.get('url', 'No URL')[:100]}...")
            print(f"  Type: {source.get('type', 'Unknown')}")
            print(f"  Credibility: {source.get('credibility_score', 'N/A')}")

        print(f"\n=== SEARCH QUERIES ANALYSIS ===")
        queries = data.get('search_queries', [])
        print(f"Total queries: {len(queries)}")
        for i, query in enumerate(queries[:10]):  # Show first 10
            print(f"{i+1:2d}. {query}")

        print(f"\n=== SEARCH WIDGET ANALYSIS ===")
        widget = data.get('search_widget', '')
        if widget:
            print(f"Widget HTML length: {len(widget)} characters")
            print(f"Contains Google branding: {'Google' in widget}")
            print(f"Contains search chips: {'chip' in widget}")
            print(f"Contains carousel: {'carousel' in widget}")
            print(f"First 200 chars: {widget[:200]}...")
        else:
            print("No search widget provided")

        print(f"\n=== KEYWORD ANALYSIS ===")
        kw_analysis = data.get('keyword_analysis', {})
        print(f"Primary keywords: {kw_analysis.get('primary', [])}")
        print(f"Secondary keywords: {kw_analysis.get('secondary', [])}")
        print(f"Long-tail keywords: {kw_analysis.get('long_tail', [])}")
        print(f"Search intent: {kw_analysis.get('search_intent', 'Unknown')}")
        print(f"Difficulty score: {kw_analysis.get('difficulty', 'N/A')}")

        print(f"\n=== SUGGESTED ANGLES ===")
        angles = data.get('suggested_angles', [])
        for i, angle in enumerate(angles):
            print(f"{i+1}. {angle}")

        print(f"\n=== UI REPRESENTATION RECOMMENDATIONS ===")
        print("Based on the response, here's what should be displayed in the Editor UI:")
        print(f"1. Research Sources Panel: {len(sources)} real web sources")
        print(f"2. Search Widget: Interactive Google search chips ({len(queries)} queries)")
        print(f"3. Keyword Analysis: Primary/Secondary/Long-tail breakdown")
        print(f"4. Content Angles: {len(angles)} suggested blog post angles")
        print(f"5. Search Queries: {len(queries)} research queries for reference")

        # Additional analysis for UI components
        print(f"\n=== UI COMPONENT BREAKDOWN ===")

        # Sources for UI
        print("SOURCES FOR UI:")
        for i, source in enumerate(sources[:3]):
            print(f"  - {source.get('title')} (Credibility: {source.get('credibility_score')})")

        # Search widget for UI
        print(f"\nSEARCH WIDGET FOR UI:")
        print(f"  - HTML length: {len(widget)} chars")
        print(f"  - Can be embedded directly in UI")
        print(f"  - Contains {len(queries)} search suggestions")

        # Keywords for UI
        print(f"\nKEYWORDS FOR UI:")
        print(f"  - Primary: {', '.join(kw_analysis.get('primary', []))}")
        print(f"  - Secondary: {', '.join(kw_analysis.get('secondary', []))}")
        print(f"  - Long-tail: {', '.join(kw_analysis.get('long_tail', []))}")

        # Angles for UI
        print(f"\nCONTENT ANGLES FOR UI:")
        for i, angle in enumerate(angles[:3]):
            print(f"  - {angle}")

    else:
        print(f"Error: {response.text}")

except Exception as e:
    print(f"Request failed: {e}")
    import traceback
    traceback.print_exc()
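Because each run dumps the full response to a timestamped JSON file, saved runs can be re-inspected offline without hitting the endpoint again (the filename below is illustrative):

```python
# Reload a previously saved analysis file for offline inspection.
import json

with open("research_analysis_20250101_120000.json", encoding="utf-8") as f:
    data = json.load(f)

print(f"{len(data.get('sources', []))} sources, "
      f"{len(data.get('search_queries', []))} queries on disk")
```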
366
backend/test/test_research_data_filter.py
Normal file
@@ -0,0 +1,366 @@
"""
Unit tests for ResearchDataFilter.

Tests the filtering and cleaning functionality for research data.
"""

import pytest
from datetime import datetime, timedelta
from typing import List

from models.blog_models import (
    BlogResearchResponse,
    ResearchSource,
    GroundingMetadata,
    GroundingChunk,
    GroundingSupport,
    Citation,
)
from services.blog_writer.research.data_filter import ResearchDataFilter


class TestResearchDataFilter:
    """Test cases for ResearchDataFilter."""

    def setup_method(self):
        """Set up test fixtures."""
        self.filter = ResearchDataFilter()

        # Create sample research sources
        self.sample_sources = [
            ResearchSource(
                title="High Quality AI Article",
                url="https://example.com/ai-article",
                excerpt="This is a comprehensive article about artificial intelligence trends in 2024 with detailed analysis and expert insights.",
                credibility_score=0.95,
                published_at="2025-08-15",
                index=0,
                source_type="web"
            ),
            ResearchSource(
                title="Low Quality Source",
                url="https://example.com/low-quality",
                excerpt="This is a low quality source with very poor credibility score and outdated information from 2020.",
                credibility_score=0.3,
                published_at="2020-01-01",
                index=1,
                source_type="web"
            ),
            ResearchSource(
                title="PDF Document",
                url="https://example.com/document.pdf",
                excerpt="This is a PDF document with research data",
                credibility_score=0.8,
                published_at="2025-08-01",
                index=2,
                source_type="web"
            ),
            ResearchSource(
                title="Recent AI Study",
                url="https://example.com/ai-study",
                excerpt="A recent study on AI adoption shows significant growth in enterprise usage with detailed statistics and case studies.",
                credibility_score=0.9,
                published_at="2025-09-01",
                index=3,
                source_type="web"
            )
        ]

        # Create sample grounding metadata
        self.sample_grounding_metadata = GroundingMetadata(
            grounding_chunks=[
                GroundingChunk(
                    title="High Confidence Chunk",
                    url="https://example.com/chunk1",
                    confidence_score=0.95
                ),
                GroundingChunk(
                    title="Low Confidence Chunk",
                    url="https://example.com/chunk2",
                    confidence_score=0.5
                ),
                GroundingChunk(
                    title="Medium Confidence Chunk",
                    url="https://example.com/chunk3",
                    confidence_score=0.8
                )
            ],
            grounding_supports=[
                GroundingSupport(
                    confidence_scores=[0.9, 0.85],
                    grounding_chunk_indices=[0, 1],
                    segment_text="High confidence support text with expert insights"
                ),
                GroundingSupport(
                    confidence_scores=[0.4, 0.3],
                    grounding_chunk_indices=[2, 3],
                    segment_text="Low confidence support text"
                )
            ],
            citations=[
                Citation(
                    citation_type="expert_opinion",
                    start_index=0,
                    end_index=50,
                    text="Expert opinion on AI trends",
                    source_indices=[0],
                    reference="Source 1"
                ),
                Citation(
                    citation_type="statistical_data",
                    start_index=51,
                    end_index=100,
                    text="Statistical data showing AI adoption rates",
                    source_indices=[1],
                    reference="Source 2"
                ),
                Citation(
                    citation_type="inline",
                    start_index=101,
                    end_index=150,
                    text="Generic inline citation",
                    source_indices=[2],
                    reference="Source 3"
                )
            ]
        )

        # Create sample research response
        self.sample_research_response = BlogResearchResponse(
            success=True,
            sources=self.sample_sources,
            keyword_analysis={
                'primary': ['artificial intelligence', 'AI trends', 'machine learning'],
                'secondary': ['AI adoption', 'enterprise AI', 'AI technology'],
                'long_tail': ['AI trends 2024', 'enterprise AI adoption rates', 'AI technology benefits'],
                'semantic_keywords': ['artificial intelligence', 'machine learning', 'deep learning'],
                'trending_terms': ['AI 2024', 'generative AI', 'AI automation'],
                'content_gaps': [
                    'AI ethics in small businesses',
                    'AI implementation guide for startups',
                    'AI cost-benefit analysis for SMEs',
                    'general overview',  # Should be filtered out
                    'basics'  # Should be filtered out
                ],
                'search_intent': 'informational',
                'difficulty': 7
            },
            competitor_analysis={
                'top_competitors': ['Competitor A', 'Competitor B', 'Competitor C'],
                'opportunities': ['Market gap 1', 'Market gap 2'],
                'competitive_advantages': ['Advantage 1', 'Advantage 2'],
                'market_positioning': 'Premium positioning'
            },
            suggested_angles=[
                'AI trends in 2024',
                'Enterprise AI adoption',
                'AI implementation strategies'
            ],
            search_widget="<div>Search widget HTML</div>",
            search_queries=["AI trends 2024", "enterprise AI adoption"],
            grounding_metadata=self.sample_grounding_metadata
        )

    def test_filter_sources_quality_filtering(self):
        """Test that sources are filtered by quality criteria."""
        filtered_sources = self.filter.filter_sources(self.sample_sources)

        # Should filter out low quality source (credibility < 0.6) and PDF document
        assert len(filtered_sources) == 2  # Only high quality and recent AI study should pass
        assert all(source.credibility_score >= 0.6 for source in filtered_sources)

        # Should filter out sources with short excerpts
        assert all(len(source.excerpt) >= 50 for source in filtered_sources)

    def test_filter_sources_relevance_filtering(self):
        """Test that irrelevant sources are filtered out."""
        filtered_sources = self.filter.filter_sources(self.sample_sources)

        # Should filter out PDF document
        pdf_sources = [s for s in filtered_sources if s.url.endswith('.pdf')]
        assert len(pdf_sources) == 0

    def test_filter_sources_recency_filtering(self):
        """Test that old sources are filtered out."""
        filtered_sources = self.filter.filter_sources(self.sample_sources)

        # Should filter out old source (2020)
        old_sources = [s for s in filtered_sources if s.published_at == "2020-01-01"]
        assert len(old_sources) == 0

    def test_filter_sources_max_limit(self):
        """Test that sources are limited to max_sources."""
        # Create more sources than max_sources
        many_sources = self.sample_sources * 5  # 20 sources
        filtered_sources = self.filter.filter_sources(many_sources)

        assert len(filtered_sources) <= self.filter.max_sources

    def test_filter_grounding_metadata_confidence_filtering(self):
        """Test that grounding metadata is filtered by confidence."""
        filtered_metadata = self.filter.filter_grounding_metadata(self.sample_grounding_metadata)

        assert filtered_metadata is not None

        # Should filter out low confidence chunks
        assert len(filtered_metadata.grounding_chunks) == 2
        assert all(chunk.confidence_score >= 0.7 for chunk in filtered_metadata.grounding_chunks)

        # Should filter out low confidence supports
        assert len(filtered_metadata.grounding_supports) == 1
        assert all(max(support.confidence_scores) >= 0.7 for support in filtered_metadata.grounding_supports)

        # Should filter out irrelevant citations
        assert len(filtered_metadata.citations) == 2
        relevant_types = ['expert_opinion', 'statistical_data', 'recent_news', 'research_study']
        assert all(citation.citation_type in relevant_types for citation in filtered_metadata.citations)

    def test_clean_keyword_analysis(self):
        """Test that keyword analysis is cleaned and deduplicated."""
        keyword_analysis = {
            'primary': ['AI', 'artificial intelligence', 'AI', 'machine learning', ''],
            'secondary': ['AI adoption', 'enterprise AI', 'ai adoption'],  # Case duplicates
            'long_tail': ['AI trends 2024', 'ai trends 2024', 'AI TRENDS 2024'],  # Case duplicates
            'search_intent': 'informational',
            'difficulty': 7
        }

        cleaned_analysis = self.filter.clean_keyword_analysis(keyword_analysis)

        # Should remove duplicates and empty strings (keywords are converted to lowercase)
        assert len(cleaned_analysis['primary']) == 3
        assert 'ai' in cleaned_analysis['primary']
        assert 'artificial intelligence' in cleaned_analysis['primary']
        assert 'machine learning' in cleaned_analysis['primary']

        # Should handle case-insensitive deduplication
        assert len(cleaned_analysis['secondary']) == 2
        assert len(cleaned_analysis['long_tail']) == 1

        # Should preserve other fields
        assert cleaned_analysis['search_intent'] == 'informational'
        assert cleaned_analysis['difficulty'] == 7

    def test_filter_content_gaps(self):
        """Test that content gaps are filtered for quality and relevance."""
        content_gaps = [
            'AI ethics in small businesses',
            'AI implementation guide for startups',
            'general overview',  # Should be filtered out
            'basics',  # Should be filtered out
            'a',  # Too short, should be filtered out
            'AI cost-benefit analysis for SMEs'
        ]

        filtered_gaps = self.filter.filter_content_gaps(content_gaps, self.sample_research_response)

        # Should filter out generic and short gaps
        assert len(filtered_gaps) >= 3  # At least the good ones should pass
        assert 'AI ethics in small businesses' in filtered_gaps
        assert 'AI implementation guide for startups' in filtered_gaps
        assert 'AI cost-benefit analysis for SMEs' in filtered_gaps
        assert 'general overview' not in filtered_gaps
        assert 'basics' not in filtered_gaps

    def test_filter_research_data_integration(self):
        """Test the complete filtering pipeline."""
        filtered_research = self.filter.filter_research_data(self.sample_research_response)

        # Should maintain success status
        assert filtered_research.success is True

        # Should filter sources
        assert len(filtered_research.sources) < len(self.sample_research_response.sources)
        assert len(filtered_research.sources) >= 0  # May be 0 if all sources are filtered out

        # Should filter grounding metadata
        if filtered_research.grounding_metadata:
            assert len(filtered_research.grounding_metadata.grounding_chunks) < len(self.sample_grounding_metadata.grounding_chunks)

        # Should clean keyword analysis
        assert 'primary' in filtered_research.keyword_analysis
        assert len(filtered_research.keyword_analysis['primary']) <= self.filter.max_keywords_per_category

        # Should filter content gaps
        assert len(filtered_research.keyword_analysis['content_gaps']) < len(self.sample_research_response.keyword_analysis['content_gaps'])

        # Should preserve other fields
        assert filtered_research.suggested_angles == self.sample_research_response.suggested_angles
        assert filtered_research.search_widget == self.sample_research_response.search_widget
        assert filtered_research.search_queries == self.sample_research_response.search_queries

    def test_filter_with_empty_data(self):
        """Test filtering with empty or None data."""
        empty_research = BlogResearchResponse(
            success=True,
            sources=[],
            keyword_analysis={},
            competitor_analysis={},
            suggested_angles=[],
            search_widget="",
            search_queries=[],
            grounding_metadata=None
        )

        filtered_research = self.filter.filter_research_data(empty_research)

        assert filtered_research.success is True
        assert len(filtered_research.sources) == 0
        assert filtered_research.grounding_metadata is None
        # keyword_analysis may contain content_gaps even if empty
        assert 'content_gaps' in filtered_research.keyword_analysis

    def test_parse_date_functionality(self):
        """Test date parsing functionality."""
        # Test various date formats
        test_dates = [
            "2024-01-15",
            "2024-01-15T10:30:00",
            "2024-01-15T10:30:00Z",
            "January 15, 2024",
            "Jan 15, 2024",
            "15 January 2024",
            "01/15/2024",
            "15/01/2024"
        ]

        for date_str in test_dates:
            parsed_date = self.filter._parse_date(date_str)
            assert parsed_date is not None
            assert isinstance(parsed_date, datetime)

        # Test invalid date
        invalid_date = self.filter._parse_date("invalid date")
        assert invalid_date is None

        # Test None date
        none_date = self.filter._parse_date(None)
        assert none_date is None

    def test_clean_keyword_list_functionality(self):
        """Test keyword list cleaning functionality."""
        keywords = [
            'AI',
            'artificial intelligence',
            'AI',  # Duplicate
            'the',  # Stop word
            'machine learning',
            '',  # Empty
            ' ',  # Whitespace only
            'MACHINE LEARNING',  # Case duplicate
            'ai'  # Case duplicate
        ]

        cleaned_keywords = self.filter._clean_keyword_list(keywords)

        # Should remove duplicates, stop words, and empty strings
        assert len(cleaned_keywords) == 3
        assert 'ai' in cleaned_keywords
        assert 'artificial intelligence' in cleaned_keywords
        assert 'machine learning' in cleaned_keywords
        assert 'the' not in cleaned_keywords
        assert '' not in cleaned_keywords


if __name__ == '__main__':
    pytest.main([__file__])
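The date-parsing cases above loop inside a single test, so the first failing format masks the rest; a parametrized sketch (same `ResearchDataFilter` class) reports each format separately:

```python
# Parametrized variant of test_parse_date_functionality (one case per format).
import pytest
from services.blog_writer.research.data_filter import ResearchDataFilter

@pytest.mark.parametrize("date_str", [
    "2024-01-15", "2024-01-15T10:30:00Z", "Jan 15, 2024", "15/01/2024",
])
def test_parse_date_formats(date_str):
    assert ResearchDataFilter()._parse_date(date_str) is not None
```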
102
backend/test/test_seo_integration.py
Normal file
@@ -0,0 +1,102 @@
#!/usr/bin/env python3
"""
Test script for SEO analyzer integration
"""

import asyncio
import sys
import os

# Add the backend directory to the path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from services.comprehensive_seo_analyzer import ComprehensiveSEOAnalyzer
from services.database import init_database, get_db_session
from services.seo_analysis_service import SEOAnalysisService
from loguru import logger

async def test_seo_analyzer():
    """Test the SEO analyzer functionality."""

    print("🔍 Testing SEO Analyzer Integration")
    print("=" * 50)

    try:
        # Initialize database
        print("📊 Initializing database...")
        init_database()
        print("✅ Database initialized successfully")

        # Test URL
        test_url = "https://example.com"
        print(f"🌐 Testing with URL: {test_url}")

        # Create analyzer
        analyzer = ComprehensiveSEOAnalyzer()

        # Run analysis
        print("🔍 Running comprehensive SEO analysis...")
        result = analyzer.analyze_url(test_url)

        print(f"📈 Analysis Results:")
        print(f"   URL: {result.url}")
        print(f"   Overall Score: {result.overall_score}/100")
        print(f"   Health Status: {result.health_status}")
        print(f"   Critical Issues: {len(result.critical_issues)}")
        print(f"   Warnings: {len(result.warnings)}")
        print(f"   Recommendations: {len(result.recommendations)}")

        # Test database storage
        print("\n💾 Testing database storage...")
        db_session = get_db_session()
        if db_session:
            try:
                seo_service = SEOAnalysisService(db_session)
                stored_analysis = seo_service.store_analysis_result(result)

                if stored_analysis:
                    print(f"✅ Analysis stored in database with ID: {stored_analysis.id}")

                    # Test retrieval
                    retrieved_analysis = seo_service.get_latest_analysis(test_url)
                    if retrieved_analysis:
                        print(f"✅ Analysis retrieved from database")
                        print(f"   Stored Score: {retrieved_analysis.overall_score}")
                        print(f"   Stored Status: {retrieved_analysis.health_status}")
                    else:
                        print("❌ Failed to retrieve analysis from database")
                else:
                    print("❌ Failed to store analysis in database")

            except Exception as e:
                print(f"❌ Database error: {str(e)}")
            finally:
                db_session.close()
        else:
            print("❌ Failed to get database session")

        # Test statistics
        print("\n📊 Testing statistics...")
        db_session = get_db_session()
        if db_session:
            try:
                seo_service = SEOAnalysisService(db_session)
                stats = seo_service.get_analysis_statistics()
                print(f"📈 Analysis Statistics:")
                print(f"   Total Analyses: {stats['total_analyses']}")
                print(f"   Total URLs: {stats['total_urls']}")
                print(f"   Average Score: {stats['average_score']}")
                print(f"   Health Distribution: {stats['health_distribution']}")
            except Exception as e:
                print(f"❌ Statistics error: {str(e)}")
            finally:
                db_session.close()

        print("\n🎉 SEO Analyzer Integration Test Completed!")

    except Exception as e:
        print(f"❌ Test failed: {str(e)}")
        logger.error(f"Test failed: {str(e)}")

if __name__ == "__main__":
    asyncio.run(test_seo_analyzer())
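The open/try/finally/close pattern around `get_db_session()` appears twice above; a small context manager (a sketch, assuming the returned session exposes `close()`) removes the duplication:

```python
# Sketch: context-managed database session for the storage and statistics blocks.
from contextlib import contextmanager
from services.database import get_db_session

@contextmanager
def managed_db_session():
    session = get_db_session()
    try:
        yield session
    finally:
        if session:
            session.close()
```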
179
backend/test/test_seo_tools.py
Normal file
@@ -0,0 +1,179 @@
#!/usr/bin/env python3
"""
Test Script for AI SEO Tools API

This script tests all the migrated SEO tools endpoints to ensure
they are working correctly after migration to FastAPI.
"""

import asyncio
import aiohttp
import json
from datetime import datetime

BASE_URL = "http://localhost:8000"

async def test_endpoint(session, endpoint, method="GET", data=None):
    """Test a single endpoint"""
    url = f"{BASE_URL}{endpoint}"

    try:
        if method == "POST":
            async with session.post(url, json=data) as response:
                result = await response.json()
                return {
                    "endpoint": endpoint,
                    "status": response.status,
                    "success": response.status == 200,
                    "response": result
                }
        else:
            async with session.get(url) as response:
                result = await response.json()
                return {
                    "endpoint": endpoint,
                    "status": response.status,
                    "success": response.status == 200,
                    "response": result
                }
    except Exception as e:
        return {
            "endpoint": endpoint,
            "status": 0,
            "success": False,
            "error": str(e)
        }

async def run_seo_tools_tests():
    """Run comprehensive tests for all SEO tools"""

    print("🚀 Starting AI SEO Tools API Tests")
    print("=" * 50)

    async with aiohttp.ClientSession() as session:

        # Test health endpoint
        print("\n1. Testing Health Endpoints...")
        health_tests = [
            ("/api/seo/health", "GET", None),
            ("/api/seo/tools/status", "GET", None)
        ]

        for endpoint, method, data in health_tests:
            result = await test_endpoint(session, endpoint, method, data)
            status = "✅ PASS" if result["success"] else "❌ FAIL"
            print(f"   {status} {endpoint} - Status: {result['status']}")

        # Test meta description generation
        print("\n2. Testing Meta Description Generation...")
        meta_desc_data = {
            "keywords": ["SEO", "content marketing", "digital strategy"],
            "tone": "Professional",
            "search_intent": "Informational Intent",
            "language": "English"
        }

        result = await test_endpoint(session, "/api/seo/meta-description", "POST", meta_desc_data)
        status = "✅ PASS" if result["success"] else "❌ FAIL"
        print(f"   {status} Meta Description Generation - Status: {result['status']}")

        if result["success"]:
            data = result["response"].get("data", {})
            descriptions = data.get("meta_descriptions", [])
            print(f"   Generated {len(descriptions)} meta descriptions")

        # Test PageSpeed analysis
        print("\n3. Testing PageSpeed Analysis...")
        pagespeed_data = {
            "url": "https://example.com",
            "strategy": "DESKTOP",
            "categories": ["performance"]
        }

        result = await test_endpoint(session, "/api/seo/pagespeed-analysis", "POST", pagespeed_data)
        status = "✅ PASS" if result["success"] else "❌ FAIL"
        print(f"   {status} PageSpeed Analysis - Status: {result['status']}")

        # Test sitemap analysis
        print("\n4. Testing Sitemap Analysis...")
        sitemap_data = {
            "sitemap_url": "https://www.google.com/sitemap.xml",
            "analyze_content_trends": False,
            "analyze_publishing_patterns": False
        }

        result = await test_endpoint(session, "/api/seo/sitemap-analysis", "POST", sitemap_data)
        status = "✅ PASS" if result["success"] else "❌ FAIL"
        print(f"   {status} Sitemap Analysis - Status: {result['status']}")

        # Test OpenGraph generation
        print("\n5. Testing OpenGraph Generation...")
        og_data = {
            "url": "https://example.com",
            "title_hint": "Test Page",
            "description_hint": "Test description",
            "platform": "General"
        }

        result = await test_endpoint(session, "/api/seo/opengraph-tags", "POST", og_data)
        status = "✅ PASS" if result["success"] else "❌ FAIL"
        print(f"   {status} OpenGraph Generation - Status: {result['status']}")

        # Test on-page SEO analysis
        print("\n6. Testing On-Page SEO Analysis...")
        onpage_data = {
            "url": "https://example.com",
            "target_keywords": ["test", "example"],
            "analyze_images": True,
            "analyze_content_quality": True
        }

        result = await test_endpoint(session, "/api/seo/on-page-analysis", "POST", onpage_data)
        status = "✅ PASS" if result["success"] else "❌ FAIL"
        print(f"   {status} On-Page SEO Analysis - Status: {result['status']}")

        # Test technical SEO analysis
        print("\n7. Testing Technical SEO Analysis...")
        technical_data = {
            "url": "https://example.com",
            "crawl_depth": 2,
            "include_external_links": True,
            "analyze_performance": True
        }

        result = await test_endpoint(session, "/api/seo/technical-seo", "POST", technical_data)
        status = "✅ PASS" if result["success"] else "❌ FAIL"
        print(f"   {status} Technical SEO Analysis - Status: {result['status']}")

        # Test workflow endpoints
        print("\n8. Testing Workflow Endpoints...")

        # Website audit workflow
        audit_data = {
            "website_url": "https://example.com",
            "workflow_type": "complete_audit",
            "target_keywords": ["test", "example"]
        }

        result = await test_endpoint(session, "/api/seo/workflow/website-audit", "POST", audit_data)
        status = "✅ PASS" if result["success"] else "❌ FAIL"
        print(f"   {status} Website Audit Workflow - Status: {result['status']}")

        # Content analysis workflow
        content_data = {
            "website_url": "https://example.com",
            "workflow_type": "content_analysis",
            "target_keywords": ["content", "strategy"]
        }

        result = await test_endpoint(session, "/api/seo/workflow/content-analysis", "POST", content_data)
        status = "✅ PASS" if result["success"] else "❌ FAIL"
        print(f"   {status} Content Analysis Workflow - Status: {result['status']}")

    print("\n" + "=" * 50)
    print("🎉 SEO Tools API Testing Completed!")
    print("\nNote: Some tests may show connection errors if the server is not running.")
    print("Start the server with: uvicorn app:app --reload --host 0.0.0.0 --port 8000")

if __name__ == "__main__":
    asyncio.run(run_seo_tools_tests())
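The runner above prints PASS/FAIL per endpoint but always exits 0; collecting the `test_endpoint()` dicts and deriving an exit code would let the script gate CI. A minimal sketch:

```python
# Sketch: derive a process exit code from test_endpoint() result dicts.
import sys

def summarize(results):
    failures = [r for r in results if not r.get("success")]
    print(f"{len(results) - len(failures)}/{len(results)} endpoints passed")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(summarize([{"success": True}, {"success": False}]))  # example input
```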
83
backend/test/test_session_management.py
Normal file
@@ -0,0 +1,83 @@
#!/usr/bin/env python3
"""
Test script to verify session management and duplicate prevention.
This script tests the session cleanup and duplicate prevention features.
"""

import asyncio
import sys
import os
import json
from datetime import datetime

# Add the current directory to the Python path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

def test_session_management():
    """Test session management features."""
    try:
        from api.content_planning.services.calendar_generation_service import CalendarGenerationService

        print("🧪 Testing Session Management")
        print("=" * 50)

        # Initialize service
        service = CalendarGenerationService(None)  # No DB needed for this test

        # Test 1: Initialize first session
        print("\n📋 Test 1: Initialize first session")
        request_data_1 = {
            "user_id": 1,
            "strategy_id": 1,
            "calendar_type": "monthly",
            "industry": "technology",
            "business_size": "sme"
        }

        session_id_1 = f"test-session-{int(datetime.now().timestamp())}-1000"
        success_1 = service.initialize_orchestrator_session(session_id_1, request_data_1)
        print(f"✅ First session initialized: {success_1}")
        print(f"📊 Available sessions: {list(service.orchestrator_sessions.keys())}")

        # Test 2: Try to initialize second session for same user (should fail)
        print("\n📋 Test 2: Try to initialize second session for same user")
        session_id_2 = f"test-session-{int(datetime.now().timestamp())}-2000"
        success_2 = service.initialize_orchestrator_session(session_id_2, request_data_1)
        print(f"❌ Second session should fail: {success_2}")
        print(f"📊 Available sessions: {list(service.orchestrator_sessions.keys())}")

        # Test 3: Check active session for user
        print("\n📋 Test 3: Check active session for user")
        active_session = service._get_active_session_for_user(1)
        print(f"✅ Active session for user 1: {active_session}")

        # Test 4: Initialize session for different user (should succeed)
        print("\n📋 Test 4: Initialize session for different user")
        request_data_2 = {
            "user_id": 2,
            "strategy_id": 2,
            "calendar_type": "weekly",
            "industry": "finance",
            "business_size": "enterprise"
        }

        session_id_3 = f"test-session-{int(datetime.now().timestamp())}-3000"
        success_3 = service.initialize_orchestrator_session(session_id_3, request_data_2)
        print(f"✅ Third session for different user: {success_3}")
        print(f"📊 Available sessions: {list(service.orchestrator_sessions.keys())}")

        # Test 5: Test session cleanup
        print("\n📋 Test 5: Test session cleanup")
        print(f"📊 Sessions before cleanup: {len(service.orchestrator_sessions)}")
        service._cleanup_old_sessions(1)
        print(f"📊 Sessions after cleanup: {len(service.orchestrator_sessions)}")

        print("\n🎉 Session management tests completed successfully!")

    except Exception as e:
        print(f"❌ Test failed: {e}")
        import traceback
        traceback.print_exc()

if __name__ == "__main__":
    test_session_management()
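The session IDs above embed `int(datetime.now().timestamp())`, which has one-second resolution, so two IDs minted within the same second differ only by the numeric suffix. A collision-proof alternative (a sketch using uuid4):

```python
# Sketch: uuid-based session IDs avoid same-second timestamp collisions.
import uuid

def make_test_session_id(suffix: int) -> str:
    return f"test-session-{uuid.uuid4().hex}-{suffix}"

print(make_test_session_id(1000))
```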
73
backend/test/test_simple_grounding.py
Normal file
@@ -0,0 +1,73 @@
#!/usr/bin/env python3
"""
Simple test script to verify basic grounding functionality.

This script tests the core components without triggering API overload.
"""

import asyncio
import sys
import os
from pathlib import Path

# Add the backend directory to the Python path
backend_dir = Path(__file__).parent
sys.path.insert(0, str(backend_dir))

from loguru import logger
from services.llm_providers.gemini_grounded_provider import GeminiGroundedProvider

async def test_basic_functionality():
    """Test basic grounding functionality."""
    try:
        logger.info("🧪 Testing Basic Grounding Functionality")

        # Initialize provider
        provider = GeminiGroundedProvider()
        logger.info("✅ Provider initialized successfully")

        # Test prompt building
        prompt = "Write a short LinkedIn post about AI trends"
        grounded_prompt = provider._build_grounded_prompt(prompt, "linkedin_post")
        logger.info(f"✅ Grounded prompt built: {len(grounded_prompt)} characters")

        # Test content processing
        test_content = "AI is transforming industries #AI #Technology"
        processed = provider._process_post_content(test_content)
        logger.info(f"✅ Content processed: {len(processed.get('hashtags', []))} hashtags found")

        logger.info("🎉 Basic functionality test completed successfully!")
        return True

    except Exception as e:
        logger.error(f"❌ Basic functionality test failed: {str(e)}")
        return False

async def main():
    """Main test function."""
    logger.info("🚀 Starting Simple Grounding Test")
    logger.info("=" * 50)

    success = await test_basic_functionality()

    if success:
        logger.info("\n🎉 SUCCESS: Basic grounding functionality is working!")
        logger.info("✅ Provider initialization successful")
        logger.info("✅ Prompt building working")
        logger.info("✅ Content processing working")
        logger.info("✅ Ready for API integration")
    else:
        logger.error("\n❌ FAILURE: Basic functionality test failed")
        sys.exit(1)

if __name__ == "__main__":
    # Configure logging
    logger.remove()
    logger.add(
        sys.stderr,
        format="<green>{time:YYYY-MM-DD HH:mm:ss}</green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>",
        level="INFO"
    )

    # Run the test
    asyncio.run(main())
40
backend/test/test_simple_schema.py
Normal file
@@ -0,0 +1,40 @@
import asyncio
from services.llm_providers.gemini_provider import gemini_structured_json_response

async def test_simple_schema():
    """Test with a very simple schema to see if structured output works at all"""

    # Very simple schema
    simple_schema = {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"}
        }
    }

    simple_prompt = "Generate a person with a name and age."

    print("Testing simple schema...")
    print(f"Schema: {simple_schema}")
    print(f"Prompt: {simple_prompt}")
    print("\n" + "="*50 + "\n")

    try:
        result = gemini_structured_json_response(
            prompt=simple_prompt,
            schema=simple_schema,
            temperature=0.3,
            max_tokens=100
        )

        print("Result:")
        print(result)

    except Exception as e:
        print(f"Error: {e}")
        import traceback
        traceback.print_exc()

if __name__ == "__main__":
    asyncio.run(test_simple_schema())
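`gemini_structured_json_response` is called synchronously above (no `await`), so the asyncio wrapper is not strictly needed; a plain-function version of the same check would be:

```python
# Sketch: synchronous variant of test_simple_schema (same provider call).
from services.llm_providers.gemini_provider import gemini_structured_json_response

def main():
    schema = {"type": "object", "properties": {"name": {"type": "string"}, "age": {"type": "integer"}}}
    result = gemini_structured_json_response(
        prompt="Generate a person with a name and age.",
        schema=schema,
        temperature=0.3,
        max_tokens=100,
    )
    print(result)

if __name__ == "__main__":
    main()
```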
515
backend/test/test_source_mapper.py
Normal file
@@ -0,0 +1,515 @@
"""
Unit tests for SourceToSectionMapper.

Tests the intelligent source-to-section mapping functionality.
"""

import pytest
from typing import List

from models.blog_models import (
    BlogOutlineSection,
    ResearchSource,
    BlogResearchResponse,
    GroundingMetadata,
)
from services.blog_writer.outline.source_mapper import SourceToSectionMapper


class TestSourceToSectionMapper:
    """Test cases for SourceToSectionMapper."""

    def setup_method(self):
        """Set up test fixtures."""
        self.mapper = SourceToSectionMapper()

        # Create sample research sources
        self.sample_sources = [
            ResearchSource(
                title="AI Trends in 2025: Machine Learning Revolution",
                url="https://example.com/ai-trends-2025",
                excerpt="Comprehensive analysis of artificial intelligence trends in 2025, focusing on machine learning advancements, deep learning breakthroughs, and AI automation in enterprise environments.",
                credibility_score=0.95,
                published_at="2025-08-15",
                index=0,
                source_type="web"
            ),
            ResearchSource(
                title="Enterprise AI Implementation Guide",
                url="https://example.com/enterprise-ai-guide",
                excerpt="Step-by-step guide for implementing artificial intelligence solutions in enterprise environments, including best practices, challenges, and success stories from leading companies.",
                credibility_score=0.9,
                published_at="2025-08-01",
                index=1,
                source_type="web"
            ),
            ResearchSource(
                title="Machine Learning Algorithms Explained",
                url="https://example.com/ml-algorithms",
                excerpt="Detailed explanation of various machine learning algorithms including supervised learning, unsupervised learning, and reinforcement learning techniques with practical examples.",
                credibility_score=0.85,
                published_at="2025-07-20",
                index=2,
                source_type="web"
            ),
            ResearchSource(
                title="AI Ethics and Responsible Development",
                url="https://example.com/ai-ethics",
                excerpt="Discussion of ethical considerations in artificial intelligence development, including bias mitigation, transparency, and responsible AI practices for developers and organizations.",
                credibility_score=0.88,
                published_at="2025-07-10",
                index=3,
                source_type="web"
            ),
            ResearchSource(
                title="Deep Learning Neural Networks Tutorial",
                url="https://example.com/deep-learning-tutorial",
                excerpt="Comprehensive tutorial on deep learning neural networks, covering convolutional neural networks, recurrent neural networks, and transformer architectures with code examples.",
                credibility_score=0.92,
                published_at="2025-06-15",
                index=4,
                source_type="web"
            )
        ]

        # Create sample outline sections
        self.sample_sections = [
            BlogOutlineSection(
                id="s1",
                heading="Introduction to AI and Machine Learning",
                subheadings=["What is AI?", "Types of Machine Learning", "AI Applications"],
                key_points=["AI definition and scope", "ML vs traditional programming", "Real-world AI examples"],
                references=[],
                target_words=300,
                keywords=["artificial intelligence", "machine learning", "AI basics", "introduction"]
            ),
            BlogOutlineSection(
                id="s2",
                heading="Enterprise AI Implementation Strategies",
                subheadings=["Planning Phase", "Implementation Steps", "Best Practices"],
                key_points=["Strategic planning", "Technology selection", "Change management", "ROI measurement"],
                references=[],
                target_words=400,
                keywords=["enterprise AI", "implementation", "strategies", "business"]
            ),
            BlogOutlineSection(
                id="s3",
                heading="Machine Learning Algorithms Deep Dive",
                subheadings=["Supervised Learning", "Unsupervised Learning", "Deep Learning"],
                key_points=["Algorithm types", "Use cases", "Performance metrics", "Model selection"],
                references=[],
                target_words=500,
                keywords=["machine learning algorithms", "supervised learning", "deep learning", "neural networks"]
            ),
            BlogOutlineSection(
                id="s4",
                heading="AI Ethics and Responsible Development",
                subheadings=["Ethical Considerations", "Bias and Fairness", "Transparency"],
                key_points=["Ethical frameworks", "Bias detection", "Explainable AI", "Regulatory compliance"],
                references=[],
                target_words=350,
                keywords=["AI ethics", "responsible AI", "bias", "transparency"]
            )
        ]

        # Create sample research response
        self.sample_research = BlogResearchResponse(
            success=True,
            sources=self.sample_sources,
            keyword_analysis={
                'primary': ['artificial intelligence', 'machine learning', 'AI implementation'],
                'secondary': ['enterprise AI', 'deep learning', 'AI ethics'],
                'long_tail': ['AI trends 2025', 'enterprise AI implementation guide', 'machine learning algorithms explained'],
                'semantic_keywords': ['AI', 'ML', 'neural networks', 'automation'],
                'trending_terms': ['AI 2025', 'generative AI', 'AI automation'],
                'search_intent': 'informational',
                'content_gaps': ['AI implementation challenges', 'ML algorithm comparison']
            },
            competitor_analysis={
                'top_competitors': ['TechCorp AI', 'DataScience Inc', 'AI Solutions Ltd'],
                'opportunities': ['Enterprise market gap', 'SME AI adoption'],
                'competitive_advantages': ['Comprehensive coverage', 'Practical examples']
            },
            suggested_angles=[
                'AI trends in 2025',
                'Enterprise AI implementation',
                'Machine learning fundamentals',
                'AI ethics and responsibility'
            ],
            search_widget="<div>Search widget HTML</div>",
            search_queries=["AI trends 2025", "enterprise AI implementation", "machine learning guide"],
            grounding_metadata=GroundingMetadata(
                grounding_chunks=[],
                grounding_supports=[],
                citations=[],
                search_entry_point="AI trends and implementation",
                web_search_queries=["AI trends 2025", "enterprise AI"]
            )
        )

    def test_semantic_similarity_calculation(self):
        """Test semantic similarity calculation between sections and sources."""
        section = self.sample_sections[0]  # AI Introduction section
        source = self.sample_sources[0]  # AI Trends source

        similarity = self.mapper._calculate_semantic_similarity(section, source)

        # Should have high similarity due to AI-related content
        assert 0.0 <= similarity <= 1.0
        assert similarity > 0.3  # Should be reasonably high for AI-related content

    def test_keyword_relevance_calculation(self):
        """Test keyword-based relevance calculation."""
        section = self.sample_sections[1]  # Enterprise AI section
        source = self.sample_sources[1]  # Enterprise AI Guide source

        relevance = self.mapper._calculate_keyword_relevance(section, source, self.sample_research)

        # Should have reasonable relevance due to enterprise AI keywords
        assert 0.0 <= relevance <= 1.0
        assert relevance > 0.1  # Should be reasonable for matching enterprise AI content

    def test_contextual_relevance_calculation(self):
        """Test contextual relevance calculation."""
        section = self.sample_sections[2]  # ML Algorithms section
        source = self.sample_sources[2]  # ML Algorithms source

        relevance = self.mapper._calculate_contextual_relevance(section, source, self.sample_research)

        # Should have high relevance due to matching content angles
        assert 0.0 <= relevance <= 1.0
        assert relevance > 0.2  # Should be reasonable for matching content

    def test_algorithmic_source_mapping(self):
        """Test the complete algorithmic mapping process."""
        mapping_results = self.mapper._algorithmic_source_mapping(self.sample_sections, self.sample_research)

        # Should have mapping results for all sections
        assert len(mapping_results) == len(self.sample_sections)

        # Each section should have some mapped sources
        for section_id, sources in mapping_results.items():
            assert isinstance(sources, list)
            # Each source should be a tuple of (source, score)
            for source, score in sources:
                assert isinstance(source, ResearchSource)
                assert isinstance(score, float)
                assert 0.0 <= score <= 1.0

    def test_source_mapping_quality(self):
        """Test that sources are mapped to relevant sections."""
        mapping_results = self.mapper._algorithmic_source_mapping(self.sample_sections, self.sample_research)

        # Enterprise AI section should have enterprise AI source
        enterprise_section = mapping_results["s2"]
        enterprise_source_titles = [source.title for source, score in enterprise_section]
        assert any("Enterprise" in title for title in enterprise_source_titles)

        # ML Algorithms section should have ML algorithms source
        ml_section = mapping_results["s3"]
        ml_source_titles = [source.title for source, score in ml_section]
|
||||
assert any("Machine Learning" in title or "Algorithms" in title for title in ml_source_titles)
|
||||
|
||||
# AI Ethics section should have AI ethics source
|
||||
ethics_section = mapping_results["s4"]
|
||||
ethics_source_titles = [source.title for source, score in ethics_section]
|
||||
assert any("Ethics" in title for title in ethics_source_titles)
|
||||
|
||||
def test_complete_mapping_pipeline(self):
|
||||
"""Test the complete mapping pipeline from sections to mapped sections."""
|
||||
mapped_sections = self.mapper.map_sources_to_sections(self.sample_sections, self.sample_research)
|
||||
|
||||
# Should return same number of sections
|
||||
assert len(mapped_sections) == len(self.sample_sections)
|
||||
|
||||
# Each section should have mapped sources
|
||||
for section in mapped_sections:
|
||||
assert isinstance(section.references, list)
|
||||
assert len(section.references) <= self.mapper.max_sources_per_section
|
||||
|
||||
# All references should be ResearchSource objects
|
||||
for source in section.references:
|
||||
assert isinstance(source, ResearchSource)
|
||||
|
||||
def test_mapping_with_empty_sources(self):
|
||||
"""Test mapping behavior with empty sources list."""
|
||||
empty_research = BlogResearchResponse(
|
||||
success=True,
|
||||
sources=[],
|
||||
keyword_analysis={},
|
||||
competitor_analysis={},
|
||||
suggested_angles=[],
|
||||
search_widget="",
|
||||
search_queries=[],
|
||||
grounding_metadata=None
|
||||
)
|
||||
|
||||
mapped_sections = self.mapper.map_sources_to_sections(self.sample_sections, empty_research)
|
||||
|
||||
# Should return sections with empty references
|
||||
for section in mapped_sections:
|
||||
assert section.references == []
|
||||
|
||||
def test_mapping_with_empty_sections(self):
|
||||
"""Test mapping behavior with empty sections list."""
|
||||
mapped_sections = self.mapper.map_sources_to_sections([], self.sample_research)
|
||||
|
||||
# Should return empty list
|
||||
assert mapped_sections == []
|
||||
|
||||
def test_meaningful_words_extraction(self):
|
||||
"""Test extraction of meaningful words from text."""
|
||||
text = "Artificial Intelligence and Machine Learning are transforming the world of technology and business applications."
|
||||
words = self.mapper._extract_meaningful_words(text)
|
||||
|
||||
# Should extract meaningful words and remove stop words
|
||||
assert "artificial" in words
|
||||
assert "intelligence" in words
|
||||
assert "machine" in words
|
||||
assert "learning" in words
|
||||
assert "the" not in words # Stop word should be removed
|
||||
assert "and" not in words # Stop word should be removed
|
||||
|
||||
def test_phrase_similarity_calculation(self):
|
||||
"""Test phrase similarity calculation."""
|
||||
text1 = "machine learning algorithms"
|
||||
text2 = "This article covers machine learning algorithms and their applications"
|
||||
|
||||
similarity = self.mapper._calculate_phrase_similarity(text1, text2)
|
||||
|
||||
# Should find phrase matches
|
||||
assert similarity > 0.0
|
||||
assert similarity <= 0.3 # Should be capped at 0.3
|
||||
|
||||
def test_intent_keywords_extraction(self):
|
||||
"""Test extraction of intent-specific keywords."""
|
||||
informational_keywords = self.mapper._get_intent_keywords("informational")
|
||||
transactional_keywords = self.mapper._get_intent_keywords("transactional")
|
||||
|
||||
# Should return appropriate keywords for each intent
|
||||
assert "what" in informational_keywords
|
||||
assert "how" in informational_keywords
|
||||
assert "guide" in informational_keywords
|
||||
|
||||
assert "buy" in transactional_keywords
|
||||
assert "purchase" in transactional_keywords
|
||||
assert "price" in transactional_keywords
|
||||
|
||||
def test_mapping_statistics(self):
|
||||
"""Test mapping statistics calculation."""
|
||||
mapping_results = self.mapper._algorithmic_source_mapping(self.sample_sections, self.sample_research)
|
||||
stats = self.mapper.get_mapping_statistics(mapping_results)
|
||||
|
||||
# Should have valid statistics
|
||||
assert stats['total_sections'] == len(self.sample_sections)
|
||||
assert stats['total_mappings'] > 0
|
||||
assert stats['sections_with_sources'] > 0
|
||||
assert 0.0 <= stats['average_score'] <= 1.0
|
||||
assert 0.0 <= stats['max_score'] <= 1.0
|
||||
assert 0.0 <= stats['min_score'] <= 1.0
|
||||
assert 0.0 <= stats['mapping_coverage'] <= 1.0
|
||||
|
||||
def test_source_quality_filtering(self):
|
||||
"""Test that low-quality sources are filtered out."""
|
||||
# Create a low-quality source
|
||||
low_quality_source = ResearchSource(
|
||||
title="Random Article",
|
||||
url="https://example.com/random",
|
||||
excerpt="This is a completely unrelated article about cooking recipes and gardening tips.",
|
||||
credibility_score=0.3,
|
||||
published_at="2025-08-01",
|
||||
index=5,
|
||||
source_type="web"
|
||||
)
|
||||
|
||||
# Add to research data
|
||||
research_with_low_quality = BlogResearchResponse(
|
||||
success=True,
|
||||
sources=self.sample_sources + [low_quality_source],
|
||||
keyword_analysis=self.sample_research.keyword_analysis,
|
||||
competitor_analysis=self.sample_research.competitor_analysis,
|
||||
suggested_angles=self.sample_research.suggested_angles,
|
||||
search_widget=self.sample_research.search_widget,
|
||||
search_queries=self.sample_research.search_queries,
|
||||
grounding_metadata=self.sample_research.grounding_metadata
|
||||
)
|
||||
|
||||
mapping_results = self.mapper._algorithmic_source_mapping(self.sample_sections, research_with_low_quality)
|
||||
|
||||
# Low-quality source should not be mapped to any section
|
||||
all_mapped_sources = []
|
||||
for sources in mapping_results.values():
|
||||
all_mapped_sources.extend([source for source, score in sources])
|
||||
|
||||
assert low_quality_source not in all_mapped_sources
|
||||
|
||||
def test_max_sources_per_section_limit(self):
|
||||
"""Test that the maximum sources per section limit is enforced."""
|
||||
# Create many sources
|
||||
many_sources = self.sample_sources * 3 # 15 sources
|
||||
|
||||
research_with_many_sources = BlogResearchResponse(
|
||||
success=True,
|
||||
sources=many_sources,
|
||||
keyword_analysis=self.sample_research.keyword_analysis,
|
||||
competitor_analysis=self.sample_research.competitor_analysis,
|
||||
suggested_angles=self.sample_research.suggested_angles,
|
||||
search_widget=self.sample_research.search_widget,
|
||||
search_queries=self.sample_research.search_queries,
|
||||
grounding_metadata=self.sample_research.grounding_metadata
|
||||
)
|
||||
|
||||
mapping_results = self.mapper._algorithmic_source_mapping(self.sample_sections, research_with_many_sources)
|
||||
|
||||
# Each section should have at most max_sources_per_section sources
|
||||
for section_id, sources in mapping_results.items():
|
||||
assert len(sources) <= self.mapper.max_sources_per_section
|
||||
|
||||
def test_ai_validation_prompt_building(self):
|
||||
"""Test AI validation prompt building."""
|
||||
mapping_results = self.mapper._algorithmic_source_mapping(self.sample_sections, self.sample_research)
|
||||
|
||||
prompt = self.mapper._build_validation_prompt(mapping_results, self.sample_research)
|
||||
|
||||
# Should contain key elements
|
||||
assert "expert content strategist" in prompt
|
||||
assert "Research Topic:" in prompt
|
||||
assert "ALGORITHMIC MAPPING RESULTS" in prompt
|
||||
assert "AVAILABLE SOURCES" in prompt
|
||||
assert "VALIDATION TASK" in prompt
|
||||
assert "RESPONSE FORMAT" in prompt
|
||||
assert "overall_quality_score" in prompt
|
||||
assert "section_improvements" in prompt
|
||||
|
||||
def test_ai_validation_response_parsing(self):
|
||||
"""Test AI validation response parsing."""
|
||||
# Mock AI response
|
||||
mock_response = """
|
||||
Here's my analysis of the source-to-section mapping:
|
||||
|
||||
```json
|
||||
{
|
||||
"overall_quality_score": 8,
|
||||
"section_improvements": [
|
||||
{
|
||||
"section_id": "s1",
|
||||
"current_sources": ["AI Trends in 2025: Machine Learning Revolution"],
|
||||
"recommended_sources": ["AI Trends in 2025: Machine Learning Revolution", "Machine Learning Algorithms Explained"],
|
||||
"reasoning": "Adding ML algorithms source provides more technical depth",
|
||||
"confidence": 0.9
|
||||
}
|
||||
],
|
||||
"summary": "Good mapping overall, minor improvements suggested"
|
||||
}
|
||||
```
|
||||
"""
|
||||
|
||||
original_mapping = self.mapper._algorithmic_source_mapping(self.sample_sections, self.sample_research)
|
||||
|
||||
parsed_mapping = self.mapper._parse_validation_response(mock_response, original_mapping, self.sample_research)
|
||||
|
||||
# Should have improved mapping
|
||||
assert "s1" in parsed_mapping
|
||||
assert len(parsed_mapping["s1"]) > 0
|
||||
|
||||
# Should maintain other sections
|
||||
assert len(parsed_mapping) == len(original_mapping)
|
||||
|
||||
def test_ai_validation_fallback_handling(self):
|
||||
"""Test AI validation fallback when parsing fails."""
|
||||
# Mock invalid AI response
|
||||
invalid_response = "This is not a valid JSON response"
|
||||
|
||||
original_mapping = self.mapper._algorithmic_source_mapping(self.sample_sections, self.sample_research)
|
||||
|
||||
parsed_mapping = self.mapper._parse_validation_response(invalid_response, original_mapping, self.sample_research)
|
||||
|
||||
# Should fallback to original mapping
|
||||
assert parsed_mapping == original_mapping
|
||||
|
||||
def test_ai_validation_with_missing_sources(self):
|
||||
"""Test AI validation when recommended sources don't exist."""
|
||||
# Mock AI response with non-existent source
|
||||
mock_response = """
|
||||
```json
|
||||
{
|
||||
"overall_quality_score": 7,
|
||||
"section_improvements": [
|
||||
{
|
||||
"section_id": "s1",
|
||||
"current_sources": ["AI Trends in 2025: Machine Learning Revolution"],
|
||||
"recommended_sources": ["Non-existent Source", "Another Fake Source"],
|
||||
"reasoning": "These sources would be better",
|
||||
"confidence": 0.8
|
||||
}
|
||||
],
|
||||
"summary": "Suggested improvements"
|
||||
}
|
||||
```
|
||||
"""
|
||||
|
||||
original_mapping = self.mapper._algorithmic_source_mapping(self.sample_sections, self.sample_research)
|
||||
|
||||
parsed_mapping = self.mapper._parse_validation_response(mock_response, original_mapping, self.sample_research)
|
||||
|
||||
# Should fallback to original mapping for s1 since no valid sources found
|
||||
assert parsed_mapping["s1"] == original_mapping["s1"]
|
||||
|
||||
def test_ai_validation_integration(self):
|
||||
"""Test complete AI validation integration (with mocked LLM)."""
|
||||
# This test would require mocking the LLM provider
|
||||
# For now, we'll test that the method doesn't crash
|
||||
mapping_results = self.mapper._algorithmic_source_mapping(self.sample_sections, self.sample_research)
|
||||
|
||||
# Test that AI validation method exists and can be called
|
||||
# (In real implementation, this would call the actual LLM)
|
||||
try:
|
||||
# This will fail in test environment due to no LLM, but should not crash
|
||||
validated_mapping = self.mapper._ai_validate_mapping(mapping_results, self.sample_research)
|
||||
# If it doesn't crash, it should return the original mapping as fallback
|
||||
assert validated_mapping == mapping_results
|
||||
except Exception as e:
|
||||
# Expected to fail in test environment, but should be handled gracefully
|
||||
assert "AI validation failed" in str(e) or "Failed to get AI validation response" in str(e)

    def test_format_sections_for_prompt(self):
        """Test formatting of sections for AI prompt."""
        sections_info = [
            {
                'id': 's1',
                'sources': [
                    {
                        'title': 'Test Source 1',
                        'algorithmic_score': 0.85
                    }
                ]
            }
        ]

        formatted = self.mapper._format_sections_for_prompt(sections_info)

        assert "Section s1:" in formatted
        assert "Test Source 1" in formatted
        assert "0.85" in formatted

    def test_format_sources_for_prompt(self):
        """Test formatting of sources for AI prompt."""
        sources = [
            {
                'title': 'Test Source',
                'url': 'https://example.com',
                'credibility_score': 0.9,
                'excerpt': 'This is a test excerpt for the source.'
            }
        ]

        formatted = self.mapper._format_sources_for_prompt(sources)

        assert "Test Source" in formatted
        assert "https://example.com" in formatted
        assert "0.9" in formatted
        assert "This is a test excerpt" in formatted


if __name__ == '__main__':
    pytest.main([__file__])
752
backend/test/test_stability_endpoints.py
Normal file
@@ -0,0 +1,752 @@
"""Test suite for Stability AI endpoints."""

import pytest
import asyncio
from fastapi.testclient import TestClient
from fastapi import FastAPI, HTTPException  # HTTPException is used in the error-handling tests below
import io
from PIL import Image
import json
import base64
from unittest.mock import Mock, AsyncMock, patch

from routers.stability import router
from services.stability_service import StabilityAIService
from models.stability_models import *


# Create test app
app = FastAPI()
app.include_router(router)
client = TestClient(app)

class TestStabilityEndpoints:
    """Test cases for Stability AI endpoints."""

    def setup_method(self):
        """Set up test environment."""
        self.test_image = self._create_test_image()
        self.test_audio = self._create_test_audio()

    def _create_test_image(self) -> bytes:
        """Create test image data."""
        img = Image.new('RGB', (512, 512), color='red')
        img_bytes = io.BytesIO()
        img.save(img_bytes, format='PNG')
        return img_bytes.getvalue()

    def _create_test_audio(self) -> bytes:
        """Create test audio data."""
        # Mock audio data
        return b"fake_audio_data" * 1000

    @patch('services.stability_service.StabilityAIService')
    def test_generate_ultra_success(self, mock_service):
        """Test successful Ultra generation."""
        # Mock service response
        mock_service.return_value.__aenter__.return_value.generate_ultra = AsyncMock(
            return_value=self.test_image
        )

        response = client.post(
            "/api/stability/generate/ultra",
            data={"prompt": "A beautiful landscape"},
            files={}
        )

        assert response.status_code == 200
        assert response.headers["content-type"].startswith("image/")

    @patch('services.stability_service.StabilityAIService')
    def test_generate_core_with_parameters(self, mock_service):
        """Test Core generation with various parameters."""
        mock_service.return_value.__aenter__.return_value.generate_core = AsyncMock(
            return_value=self.test_image
        )

        response = client.post(
            "/api/stability/generate/core",
            data={
                "prompt": "A futuristic city",
                "aspect_ratio": "16:9",
                "style_preset": "digital-art",
                "seed": 42
            }
        )

        assert response.status_code == 200

    @patch('services.stability_service.StabilityAIService')
    def test_inpaint_with_mask(self, mock_service):
        """Test inpainting with mask."""
        mock_service.return_value.__aenter__.return_value.inpaint = AsyncMock(
            return_value=self.test_image
        )

        response = client.post(
            "/api/stability/edit/inpaint",
            data={"prompt": "A cat"},
            files={
                "image": ("test.png", self.test_image, "image/png"),
                "mask": ("mask.png", self.test_image, "image/png")
            }
        )

        assert response.status_code == 200

    @patch('services.stability_service.StabilityAIService')
    def test_upscale_fast(self, mock_service):
        """Test fast upscaling."""
        mock_service.return_value.__aenter__.return_value.upscale_fast = AsyncMock(
            return_value=self.test_image
        )

        response = client.post(
            "/api/stability/upscale/fast",
            files={"image": ("test.png", self.test_image, "image/png")}
        )

        assert response.status_code == 200

    @patch('services.stability_service.StabilityAIService')
    def test_control_sketch(self, mock_service):
        """Test sketch control."""
        mock_service.return_value.__aenter__.return_value.control_sketch = AsyncMock(
            return_value=self.test_image
        )

        response = client.post(
            "/api/stability/control/sketch",
            data={
                "prompt": "A medieval castle",
                "control_strength": 0.8
            },
            files={"image": ("sketch.png", self.test_image, "image/png")}
        )

        assert response.status_code == 200

    @patch('services.stability_service.StabilityAIService')
    def test_3d_generation(self, mock_service):
        """Test 3D model generation."""
        mock_3d_data = b"fake_glb_data" * 100
        mock_service.return_value.__aenter__.return_value.generate_3d_fast = AsyncMock(
            return_value=mock_3d_data
        )

        response = client.post(
            "/api/stability/3d/stable-fast-3d",
            files={"image": ("test.png", self.test_image, "image/png")}
        )

        assert response.status_code == 200
        assert response.headers["content-type"] == "model/gltf-binary"

    @patch('services.stability_service.StabilityAIService')
    def test_audio_generation(self, mock_service):
        """Test audio generation."""
        mock_service.return_value.__aenter__.return_value.generate_audio_from_text = AsyncMock(
            return_value=self.test_audio
        )

        response = client.post(
            "/api/stability/audio/text-to-audio",
            data={
                "prompt": "Peaceful nature sounds",
                "duration": 30
            }
        )

        assert response.status_code == 200
        assert response.headers["content-type"].startswith("audio/")

    def test_health_check(self):
        """Test health check endpoint."""
        response = client.get("/api/stability/health")
        assert response.status_code == 200
        assert response.json()["status"] == "healthy"

    def test_models_info(self):
        """Test models info endpoint."""
        response = client.get("/api/stability/models/info")
        assert response.status_code == 200

        data = response.json()
        assert "generate" in data
        assert "edit" in data
        assert "upscale" in data

    def test_supported_formats(self):
        """Test supported formats endpoint."""
        response = client.get("/api/stability/supported-formats")
        assert response.status_code == 200

        data = response.json()
        assert "image_input" in data
        assert "image_output" in data
        assert "audio_input" in data

    def test_image_info_analysis(self):
        """Test image info utility endpoint."""
        response = client.post(
            "/api/stability/utils/image-info",
            files={"image": ("test.png", self.test_image, "image/png")}
        )

        assert response.status_code == 200
        data = response.json()
        assert "width" in data
        assert "height" in data
        assert "format" in data

    def test_prompt_validation(self):
        """Test prompt validation endpoint."""
        response = client.post(
            "/api/stability/utils/validate-prompt",
            data={"prompt": "A beautiful landscape with mountains and lakes"}
        )

        assert response.status_code == 200
        data = response.json()
        assert "is_valid" in data
        assert "suggestions" in data

    def test_invalid_image_format(self):
        """Test error handling for invalid image format."""
        response = client.post(
            "/api/stability/generate/ultra",
            data={"prompt": "Test prompt"},
            files={"image": ("test.txt", b"not an image", "text/plain")}
        )

        # Should handle gracefully or return appropriate error
        assert response.status_code in [400, 422]

    def test_missing_required_parameters(self):
        """Test error handling for missing required parameters."""
        response = client.post("/api/stability/generate/ultra")

        assert response.status_code == 422  # Validation error

    def test_outpaint_validation(self):
        """Test outpaint direction validation."""
        response = client.post(
            "/api/stability/edit/outpaint",
            data={
                "left": 0,
                "right": 0,
                "up": 0,
                "down": 0
            },
            files={"image": ("test.png", self.test_image, "image/png")}
        )

        assert response.status_code == 400
        assert "at least one outpaint direction" in response.json()["detail"]

    @patch('services.stability_service.StabilityAIService')
    def test_async_generation_response(self, mock_service):
        """Test async generation response format."""
        mock_service.return_value.__aenter__.return_value.upscale_creative = AsyncMock(
            return_value={"id": "test_generation_id"}
        )

        response = client.post(
            "/api/stability/upscale/creative",
            data={"prompt": "High quality upscale"},
            files={"image": ("test.png", self.test_image, "image/png")}
        )

        assert response.status_code == 200
        data = response.json()
        assert "id" in data

    @patch('services.stability_service.StabilityAIService')
    def test_batch_comparison(self, mock_service):
        """Test model comparison endpoint."""
        mock_service.return_value.__aenter__.return_value.generate_ultra = AsyncMock(
            return_value=self.test_image
        )
        mock_service.return_value.__aenter__.return_value.generate_core = AsyncMock(
            return_value=self.test_image
        )

        response = client.post(
            "/api/stability/advanced/compare/models",
            data={
                "prompt": "A test image",
                "models": json.dumps(["ultra", "core"]),
                "seed": 42
            }
        )

        assert response.status_code == 200
        data = response.json()
        assert "comparison_results" in data

class TestStabilityService:
    """Test cases for StabilityAIService class."""

    @pytest.mark.asyncio
    async def test_service_initialization(self):
        """Test service initialization."""
        with patch.dict('os.environ', {'STABILITY_API_KEY': 'test_key'}):
            service = StabilityAIService()
            assert service.api_key == 'test_key'

    def test_service_initialization_no_key(self):
        """Test service initialization without API key."""
        with patch.dict('os.environ', {}, clear=True):
            with pytest.raises(ValueError):
                StabilityAIService()

    @pytest.mark.asyncio
    @patch('aiohttp.ClientSession')
    async def test_make_request_success(self, mock_session):
        """Test successful API request."""
        # Mock response
        mock_response = AsyncMock()
        mock_response.status = 200
        mock_response.read.return_value = b"test_image_data"
        mock_response.headers = {"Content-Type": "image/png"}

        mock_session.return_value.__aenter__.return_value.request.return_value.__aenter__.return_value = mock_response

        service = StabilityAIService(api_key="test_key")

        async with service:
            result = await service._make_request(
                method="POST",
                endpoint="/test",
                data={"test": "data"}
            )

        assert result == b"test_image_data"

    @pytest.mark.asyncio
    async def test_image_preparation(self):
        """Test image preparation methods."""
        service = StabilityAIService(api_key="test_key")

        # Test bytes input
        test_bytes = b"test_image_bytes"
        result = await service._prepare_image_file(test_bytes)
        assert result == test_bytes

        # Test base64 input
        test_b64 = base64.b64encode(test_bytes).decode()
        result = await service._prepare_image_file(test_b64)
        assert result == test_bytes

    def test_dimension_validation(self):
        """Test image dimension validation."""
        service = StabilityAIService(api_key="test_key")

        # Valid dimensions
        service._validate_image_requirements(1024, 1024)

        # Invalid dimensions (too small)
        with pytest.raises(ValueError):
            service._validate_image_requirements(32, 32)

    def test_aspect_ratio_validation(self):
        """Test aspect ratio validation."""
        service = StabilityAIService(api_key="test_key")

        # Valid aspect ratio
        service._validate_aspect_ratio(1024, 1024)

        # Invalid aspect ratio (too wide)
        with pytest.raises(ValueError):
            service._validate_aspect_ratio(3000, 500)

class TestStabilityModels:
    """Test cases for Pydantic models."""

    def test_stable_image_ultra_request(self):
        """Test StableImageUltraRequest validation."""
        # Valid request
        request = StableImageUltraRequest(
            prompt="A beautiful landscape",
            aspect_ratio="16:9",
            seed=42
        )
        assert request.prompt == "A beautiful landscape"
        assert request.aspect_ratio == "16:9"
        assert request.seed == 42

    def test_invalid_seed_range(self):
        """Test invalid seed range validation."""
        with pytest.raises(ValueError):
            StableImageUltraRequest(
                prompt="Test",
                seed=5000000000  # Too large
            )

    def test_prompt_length_validation(self):
        """Test prompt length validation."""
        # Too long prompt
        with pytest.raises(ValueError):
            StableImageUltraRequest(
                prompt="x" * 10001  # Exceeds max length
            )

        # Empty prompt
        with pytest.raises(ValueError):
            StableImageUltraRequest(
                prompt=""  # Below min length
            )

    def test_outpaint_request(self):
        """Test OutpaintRequest validation."""
        request = OutpaintRequest(
            left=100,
            right=200,
            up=50,
            down=150
        )
        assert request.left == 100
        assert request.right == 200

    def test_audio_request_validation(self):
        """Test audio request validation."""
        request = TextToAudioRequest(
            prompt="Peaceful music",
            duration=60,
            model="stable-audio-2.5"
        )
        assert request.duration == 60
        assert request.model == "stable-audio-2.5"

class TestStabilityUtils:
    """Test cases for utility functions."""

    def test_image_validator(self):
        """Test image validation utilities."""
        from utils.stability_utils import ImageValidator

        # Mock UploadFile
        mock_file = Mock()
        mock_file.content_type = "image/png"
        mock_file.filename = "test.png"

        result = ImageValidator.validate_image_file(mock_file)
        assert result["is_valid"] is True

    def test_prompt_optimizer(self):
        """Test prompt optimization utilities."""
        from utils.stability_utils import PromptOptimizer

        prompt = "A simple image"
        result = PromptOptimizer.optimize_prompt(
            prompt=prompt,
            target_model="ultra",
            target_style="photographic",
            quality_level="high"
        )

        assert len(result["optimized_prompt"]) > len(prompt)
        assert "optimizations_applied" in result

    def test_parameter_validator(self):
        """Test parameter validation utilities."""
        from utils.stability_utils import ParameterValidator

        # Valid seed
        seed = ParameterValidator.validate_seed(42)
        assert seed == 42

        # Invalid seed
        with pytest.raises(HTTPException):
            ParameterValidator.validate_seed(5000000000)

    @pytest.mark.asyncio
    async def test_image_analysis(self):
        """Test image content analysis."""
        from utils.stability_utils import ImageValidator

        # Build a test image locally: this class has no setup_method, so the
        # self.test_image used by TestStabilityEndpoints is not available here.
        img = Image.new('RGB', (512, 512), color='red')
        img_bytes = io.BytesIO()
        img.save(img_bytes, format='PNG')

        result = await ImageValidator.analyze_image_content(img_bytes.getvalue())

        assert "width" in result
        assert "height" in result
        assert "total_pixels" in result
        assert "quality_assessment" in result


class TestStabilityConfig:
    """Test cases for configuration."""

    def test_stability_config_creation(self):
        """Test StabilityConfig creation."""
        from config.stability_config import StabilityConfig

        config = StabilityConfig(api_key="test_key")
        assert config.api_key == "test_key"
        assert config.base_url == "https://api.stability.ai"

    def test_model_recommendations(self):
        """Test model recommendation logic."""
        from config.stability_config import get_model_recommendations

        recommendations = get_model_recommendations(
            use_case="portrait",
            quality_preference="premium"
        )

        assert "primary" in recommendations
        assert "alternative" in recommendations

    def test_image_validation_config(self):
        """Test image validation configuration."""
        from config.stability_config import validate_image_requirements

        # Valid image
        result = validate_image_requirements(1024, 1024, "generate")
        assert result["is_valid"] is True

        # Invalid image (too small)
        result = validate_image_requirements(32, 32, "generate")
        assert result["is_valid"] is False

    def test_cost_calculation(self):
        """Test cost calculation."""
        from config.stability_config import calculate_estimated_cost

        cost = calculate_estimated_cost("generate", "ultra")
        assert cost == 8  # Ultra model cost

        cost = calculate_estimated_cost("upscale", "fast")
        assert cost == 2  # Fast upscale cost


class TestStabilityMiddleware:
    """Test cases for middleware."""

    def test_rate_limit_middleware(self):
        """Test rate limiting middleware."""
        from middleware.stability_middleware import RateLimitMiddleware

        middleware = RateLimitMiddleware(requests_per_window=5, window_seconds=10)

        # Test client identification
        mock_request = Mock()
        mock_request.headers = {"authorization": "Bearer test_api_key"}

        client_id = middleware._get_client_id(mock_request)
        assert len(client_id) == 8  # First 8 chars of API key

    def test_monitoring_middleware(self):
        """Test monitoring middleware."""
        from middleware.stability_middleware import MonitoringMiddleware

        middleware = MonitoringMiddleware()

        # Test operation extraction
        operation = middleware._extract_operation("/api/stability/generate/ultra")
        assert operation == "generate_ultra"

    def test_caching_middleware(self):
        """Test caching middleware."""
        from middleware.stability_middleware import CachingMiddleware

        middleware = CachingMiddleware()

        # Test cache key generation
        mock_request = Mock()
        mock_request.method = "GET"
        mock_request.url.path = "/api/stability/health"
        mock_request.query_params = {}

        # This would need to be properly mocked for async
        # cache_key = await middleware._generate_cache_key(mock_request)
        # assert isinstance(cache_key, str)
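
    # A hedged sketch that completes the commented-out check above as an async test.
    # It assumes pytest-asyncio (already used elsewhere in this file) and that
    # CachingMiddleware exposes an async _generate_cache_key(request) coroutine, as
    # the commented lines suggest; verify the real signature before relying on it.
    @pytest.mark.asyncio
    async def test_caching_cache_key_generation(self):
        """Sketch: await the (assumed) async cache-key generator directly."""
        from middleware.stability_middleware import CachingMiddleware

        middleware = CachingMiddleware()

        mock_request = Mock()
        mock_request.method = "GET"
        mock_request.url.path = "/api/stability/health"
        mock_request.query_params = {}

        # Awaiting the coroutine avoids the sync/async mismatch noted above
        cache_key = await middleware._generate_cache_key(mock_request)
        assert isinstance(cache_key, str)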

class TestErrorHandling:
    """Test error handling scenarios."""

    @patch('services.stability_service.StabilityAIService')
    def test_api_error_handling(self, mock_service):
        """Test API error response handling."""
        mock_service.return_value.__aenter__.return_value.generate_ultra = AsyncMock(
            side_effect=HTTPException(status_code=400, detail="Invalid parameters")
        )

        response = client.post(
            "/api/stability/generate/ultra",
            data={"prompt": "Test"}
        )

        assert response.status_code == 400

    @patch('services.stability_service.StabilityAIService')
    def test_timeout_handling(self, mock_service):
        """Test timeout error handling."""
        mock_service.return_value.__aenter__.return_value.generate_ultra = AsyncMock(
            side_effect=asyncio.TimeoutError()
        )

        response = client.post(
            "/api/stability/generate/ultra",
            data={"prompt": "Test"}
        )

        assert response.status_code == 504

    def test_file_size_validation(self):
        """Test file size validation."""
        from utils.stability_utils import validate_file_size

        # Mock large file
        mock_file = Mock()
        mock_file.size = 20 * 1024 * 1024  # 20MB

        with pytest.raises(HTTPException) as exc_info:
            validate_file_size(mock_file, max_size=10 * 1024 * 1024)

        assert exc_info.value.status_code == 413


class TestWorkflowProcessing:
    """Test workflow and batch processing."""

    @patch('services.stability_service.StabilityAIService')
    def test_workflow_validation(self, mock_service):
        """Test workflow validation."""
        from utils.stability_utils import WorkflowManager

        # Valid workflow
        workflow = [
            {"operation": "generate_core", "parameters": {"prompt": "test"}},
            {"operation": "upscale_fast", "parameters": {}}
        ]

        errors = WorkflowManager.validate_workflow(workflow)
        assert len(errors) == 0

        # Invalid workflow
        invalid_workflow = [
            {"operation": "invalid_operation"}
        ]

        errors = WorkflowManager.validate_workflow(invalid_workflow)
        assert len(errors) > 0

    def test_workflow_optimization(self):
        """Test workflow optimization."""
        from utils.stability_utils import WorkflowManager

        workflow = [
            {"operation": "upscale_fast"},
            {"operation": "generate_core"},  # Should be moved to front
            {"operation": "inpaint"}
        ]

        optimized = WorkflowManager.optimize_workflow(workflow)

        # Generate operation should be first
        assert optimized[0]["operation"] == "generate_core"


# ==================== INTEGRATION TESTS ====================

class TestStabilityIntegration:
    """Integration tests for full workflow."""

    @pytest.mark.asyncio
    @patch('aiohttp.ClientSession')
    async def test_full_generation_workflow(self, mock_session):
        """Test complete generation workflow."""
        # Mock successful API responses
        mock_response = AsyncMock()
        mock_response.status = 200
        mock_response.read.return_value = b"test_image_data"
        mock_response.headers = {"Content-Type": "image/png"}

        mock_session.return_value.__aenter__.return_value.request.return_value.__aenter__.return_value = mock_response

        service = StabilityAIService(api_key="test_key")

        async with service:
            # Test generation
            result = await service.generate_ultra(
                prompt="A beautiful landscape",
                aspect_ratio="16:9",
                seed=42
            )

            assert isinstance(result, bytes)
            assert len(result) > 0

    @pytest.mark.asyncio
    @patch('aiohttp.ClientSession')
    async def test_full_edit_workflow(self, mock_session):
        """Test complete edit workflow."""
        # Mock successful API responses
        mock_response = AsyncMock()
        mock_response.status = 200
        mock_response.read.return_value = b"test_edited_image_data"
        mock_response.headers = {"Content-Type": "image/png"}

        mock_session.return_value.__aenter__.return_value.request.return_value.__aenter__.return_value = mock_response

        service = StabilityAIService(api_key="test_key")

        async with service:
            # Test inpainting
            result = await service.inpaint(
                image=b"test_image_data",
                prompt="A cat in the scene",
                grow_mask=10
            )

            assert isinstance(result, bytes)
            assert len(result) > 0


# ==================== PERFORMANCE TESTS ====================

class TestStabilityPerformance:
    """Performance tests for Stability AI endpoints."""

    @pytest.mark.asyncio
    async def test_concurrent_requests(self):
        """Test handling of concurrent requests."""
        from services.stability_service import StabilityAIService

        async def mock_request():
            service = StabilityAIService(api_key="test_key")
            # Mock a quick operation
            await asyncio.sleep(0.1)
            return "success"

        # Run multiple concurrent requests
        tasks = [mock_request() for _ in range(10)]
        results = await asyncio.gather(*tasks, return_exceptions=True)

        # All should succeed
        assert all(result == "success" for result in results)

    def test_large_file_handling(self):
        """Test handling of large files."""
        from utils.stability_utils import validate_file_size

        # Test with various file sizes
        mock_file = Mock()

        # Valid size
        mock_file.size = 5 * 1024 * 1024  # 5MB
        validate_file_size(mock_file)  # Should not raise

        # Invalid size
        mock_file.size = 15 * 1024 * 1024  # 15MB
        with pytest.raises(HTTPException):
            validate_file_size(mock_file)


if __name__ == "__main__":
    pytest.main([__file__, "-v"])
126
backend/test/test_step1_only.py
Normal file
@@ -0,0 +1,126 @@
#!/usr/bin/env python3
"""
Simple Test Script for Step 1 Only

This script tests only Step 1 to verify imports are working correctly.
"""

import asyncio
import sys
import os
from typing import Dict, Any
from loguru import logger

# Add the backend directory to the path
backend_dir = os.path.dirname(os.path.abspath(__file__))
if backend_dir not in sys.path:
    sys.path.insert(0, backend_dir)

# Add the services directory to the path
services_dir = os.path.join(backend_dir, "services")
if services_dir not in sys.path:
    sys.path.insert(0, services_dir)


async def test_step1_only():
    """Test only Step 1 to verify imports work."""

    try:
        logger.info("🚀 Starting test of Step 1 only")

        # Test data
        test_context = {
            "user_id": 1,
            "strategy_id": 1,
            "calendar_duration": 7,
            "posting_preferences": {
                "posting_frequency": "daily",
                "preferred_days": ["monday", "wednesday", "friday"],
                "preferred_times": ["09:00", "12:00", "15:00"],
                "content_per_day": 2
            }
        }

        # Test Step 1: Content Strategy Analysis
        logger.info("📋 Testing Step 1: Content Strategy Analysis")
        try:
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase1.phase1_steps import ContentStrategyAnalysisStep
            from services.calendar_generation_datasource_framework.data_processing.strategy_data import StrategyDataProcessor

            logger.info("✅ Imports successful")

            # Create strategy processor with mock data for testing
            strategy_processor = StrategyDataProcessor()

            # Mock strategy data
            mock_strategy_data = {
                "strategy_id": 1,
                "strategy_name": "Test Strategy",
                "industry": "technology",
                "target_audience": {
                    "primary": "Tech professionals",
                    "secondary": "Business leaders",
                    "demographics": {"age_range": "25-45", "location": "Global"}
                },
                "content_pillars": [
                    "AI and Machine Learning",
                    "Digital Transformation",
                    "Innovation and Technology Trends",
                    "Business Strategy and Growth"
                ],
                "business_objectives": [
                    "Increase brand awareness by 40%",
                    "Generate 500 qualified leads per month",
                    "Establish thought leadership"
                ],
                "target_metrics": {"awareness": "website_traffic", "leads": "lead_generation"},
                "quality_indicators": {"data_completeness": 0.8, "strategic_alignment": 0.9}
            }

            # Mock the get_strategy_data method for testing
            async def mock_get_strategy_data(strategy_id):
                return mock_strategy_data

            strategy_processor.get_strategy_data = mock_get_strategy_data

            # Mock the validate_data method
            async def mock_validate_data(data):
                return {
                    "quality_score": 0.85,
                    "missing_fields": [],
                    "recommendations": []
                }

            strategy_processor.validate_data = mock_validate_data

            step1 = ContentStrategyAnalysisStep()
            step1.strategy_processor = strategy_processor

            result1 = await step1.execute(test_context)
            logger.info(f"✅ Step 1 completed: {result1.get('status')}")
            logger.info(f"   Quality Score: {result1.get('quality_score')}")

        except Exception as e:
            logger.error(f"❌ Step 1 failed: {str(e)}")
            return False

        logger.info("🎉 Step 1 test completed successfully!")
        return True

    except Exception as e:
        logger.error(f"❌ Test failed with error: {str(e)}")
        return False


if __name__ == "__main__":
    # Configure logging
    logger.remove()
    logger.add(sys.stderr, level="INFO", format="<green>{time:HH:mm:ss}</green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>")

    # Run the test
    success = asyncio.run(test_step1_only())

    if success:
        logger.info("✅ Test completed successfully!")
        sys.exit(0)
    else:
        logger.error("❌ Test failed!")
        sys.exit(1)
85
backend/test/test_step2.py
Normal file
@@ -0,0 +1,85 @@
#!/usr/bin/env python3
"""
Test script for Step 2 specifically
"""

import asyncio
import sys
import os

# Add the backend directory to the path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase2.step2_implementation import Step2Implementation


async def test_step2():
    """Test Step 2 implementation."""

    print("🧪 Testing Step 2: Gap Analysis & Opportunity Identification")

    # Create test context
    context = {
        "user_id": 1,
        "strategy_id": 1,
        "calendar_type": "monthly",
        "industry": "technology",
        "business_size": "sme",
        "user_data": {
            "onboarding_data": {
                "posting_preferences": {
                    "daily": 2,
                    "weekly": 10,
                    "monthly": 40
                },
                "posting_days": [
                    "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"
                ],
                "optimal_times": [
                    "09:00", "12:00", "15:00", "18:00", "20:00"
                ]
            },
            "strategy_data": {
                "industry": "technology",
                "target_audience": {
                    "primary": "Tech professionals",
                    "secondary": "Business leaders"
                },
                "business_objectives": [
                    "Increase brand awareness",
                    "Generate leads",
                    "Establish thought leadership"
                ]
            }
        },
        "step_results": {},
        "quality_scores": {}
    }

    try:
        # Create Step 2 instance
        step2 = Step2Implementation()

        print("✅ Step 2 instance created successfully")

        # Test Step 2 execution
        print("🔄 Executing Step 2...")
        result = await step2.run(context)

        if result:
            print("✅ Step 2 executed successfully!")
            print(f"Status: {result.get('status')}")
            print(f"Quality Score: {result.get('quality_score')}")
            print(f"Execution Time: {result.get('execution_time')}")

            if result.get('status') == 'error':
                print(f"❌ Step 2 Error: {result.get('error_message')}")
        else:
            print("❌ Step 2 returned None")

    except Exception as e:
        print(f"❌ Error testing Step 2: {e}")
        import traceback
        print(f"📋 Traceback: {traceback.format_exc()}")


if __name__ == "__main__":
    asyncio.run(test_step2())
50
backend/test/test_step4_data.py
Normal file
@@ -0,0 +1,50 @@
#!/usr/bin/env python3
"""
Test script for Step 4 data
"""

import asyncio
import sys
import os

# Add the backend directory to the path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))


async def test_step4_data():
    """Test Step 4 data processing."""

    print("🧪 Testing Step 4: Calendar Framework & Timeline Data")

    try:
        # Test comprehensive user data
        from services.calendar_generation_datasource_framework.data_processing.comprehensive_user_data import ComprehensiveUserDataProcessor

        processor = ComprehensiveUserDataProcessor()
        data = await processor.get_comprehensive_user_data(1, 1)

        print("✅ Comprehensive user data retrieved successfully")
        print(f"📊 Data keys: {list(data.keys())}")

        onboarding_data = data.get('onboarding_data', {})
        print(f"📋 Onboarding data keys: {list(onboarding_data.keys())}")

        posting_preferences = onboarding_data.get('posting_preferences')
        posting_days = onboarding_data.get('posting_days')
        optimal_times = onboarding_data.get('optimal_times')

        print(f"📅 Posting preferences: {posting_preferences}")
        print(f"📅 Posting days: {posting_days}")
        print(f"⏰ Optimal times: {optimal_times}")

        if posting_preferences and posting_days:
            print("✅ Step 4 data requirements met!")
        else:
            print("❌ Step 4 data requirements NOT met!")

    except Exception as e:
        print(f"❌ Error testing Step 4 data: {e}")
        import traceback
        print(f"📋 Traceback: {traceback.format_exc()}")


if __name__ == "__main__":
    asyncio.run(test_step4_data())
168
backend/test/test_step4_data_debug.py
Normal file
@@ -0,0 +1,168 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Debug Step 4 data issues - check what data is available from database.
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import time
|
||||
from loguru import logger
|
||||
import sys
|
||||
import os
|
||||
|
||||
# Add the backend directory to the path
|
||||
backend_dir = os.path.dirname(os.path.abspath(__file__))
|
||||
if backend_dir not in sys.path:
|
||||
sys.path.insert(0, backend_dir)
|
||||
|
||||
async def test_step4_data_sources():
|
||||
"""Test what data Step 4 is actually receiving."""
|
||||
try:
|
||||
logger.info("🧪 Testing Step 4 data sources")
|
||||
|
||||
# Test 1: Onboarding Data Service
|
||||
logger.info("📋 Test 1: Onboarding Data Service")
|
||||
try:
|
||||
from services.onboarding_data_service import OnboardingDataService
|
||||
onboarding_service = OnboardingDataService()
|
||||
|
||||
# Test with user_id = 1
|
||||
onboarding_data = onboarding_service.get_personalized_ai_inputs(1)
|
||||
|
||||
logger.info(f"📊 Onboarding data keys: {list(onboarding_data.keys()) if onboarding_data else 'None'}")
|
||||
|
||||
if onboarding_data:
|
||||
# Check for posting preferences
|
||||
posting_prefs = onboarding_data.get("posting_preferences")
|
||||
posting_days = onboarding_data.get("posting_days")
|
||||
optimal_times = onboarding_data.get("optimal_times")
|
||||
|
||||
logger.info(f"📅 Posting preferences: {posting_prefs}")
|
||||
logger.info(f"📅 Posting days: {posting_days}")
|
||||
logger.info(f"📅 Optimal times: {optimal_times}")
|
||||
|
||||
# Check website analysis
|
||||
website_analysis = onboarding_data.get("website_analysis", {})
|
||||
logger.info(f"🌐 Website analysis keys: {list(website_analysis.keys())}")
|
||||
|
||||
# Check competitor analysis
|
||||
competitor_analysis = onboarding_data.get("competitor_analysis", {})
|
||||
logger.info(f"🏢 Competitor analysis keys: {list(competitor_analysis.keys())}")
|
||||
|
||||
# Check keyword analysis
|
||||
keyword_analysis = onboarding_data.get("keyword_analysis", {})
|
||||
logger.info(f"🔍 Keyword analysis keys: {list(keyword_analysis.keys())}")
|
||||
|
||||
else:
|
||||
logger.error("❌ No onboarding data returned")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Onboarding service error: {str(e)}")
|
||||
|
||||
# Test 2: Comprehensive User Data Processor
|
||||
logger.info("\n📋 Test 2: Comprehensive User Data Processor")
|
||||
try:
|
||||
from services.calendar_generation_datasource_framework.data_processing.comprehensive_user_data import ComprehensiveUserDataProcessor
|
            processor = ComprehensiveUserDataProcessor()
            comprehensive_data = await processor.get_comprehensive_user_data(1, 1)

            logger.info(f"📊 Comprehensive data keys: {list(comprehensive_data.keys()) if comprehensive_data else 'None'}")

            if comprehensive_data:
                # Check onboarding data
                onboarding_data = comprehensive_data.get("onboarding_data", {})
                logger.info(f"👤 Onboarding data keys: {list(onboarding_data.keys())}")

                # Check for posting preferences (Step 4 requirement)
                posting_prefs = onboarding_data.get("posting_preferences")
                posting_days = onboarding_data.get("posting_days")
                optimal_times = onboarding_data.get("optimal_times")

                logger.info(f"📅 Posting preferences: {posting_prefs}")
                logger.info(f"📅 Posting days: {posting_days}")
                logger.info(f"📅 Optimal times: {optimal_times}")

                # Check strategy data
                strategy_data = comprehensive_data.get("strategy_data", {})
                logger.info(f"🎯 Strategy data keys: {list(strategy_data.keys())}")

                # Check gap analysis
                gap_analysis = comprehensive_data.get("gap_analysis", {})
                logger.info(f"📊 Gap analysis keys: {list(gap_analysis.keys())}")

            else:
                logger.error("❌ No comprehensive data returned")

        except Exception as e:
            logger.error(f"❌ Comprehensive data processor error: {str(e)}")

        # Test 3: Database Connection
        logger.info("\n📋 Test 3: Database Connection")
        try:
            from services.database import get_db_session
            from models.onboarding import OnboardingSession, WebsiteAnalysis

            session = get_db_session()

            # Check for onboarding sessions
            onboarding_sessions = session.query(OnboardingSession).all()
            logger.info(f"📊 Found {len(onboarding_sessions)} onboarding sessions")

            if onboarding_sessions:
                for i, session_data in enumerate(onboarding_sessions):
                    logger.info(f"  Session {i+1}: user_id={session_data.user_id}, created={session_data.created_at}")

                    # Check for website analysis
                    website_analyses = session.query(WebsiteAnalysis).filter(
                        WebsiteAnalysis.session_id == session_data.id
                    ).all()
                    logger.info(f"  Website analyses: {len(website_analyses)}")

            else:
                logger.warning("⚠️ No onboarding sessions found in database")

        except Exception as e:
            logger.error(f"❌ Database connection error: {str(e)}")

        # Test 4: Step 4 Direct Test
        logger.info("\n📋 Test 4: Step 4 Direct Test")
        try:
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase2.step4_implementation import CalendarFrameworkStep

            step4 = CalendarFrameworkStep()

            # Create mock context
            context = {
                "user_id": 1,
                "strategy_id": 1,
                "calendar_type": "monthly",
                "industry": "technology",
                "business_size": "sme"
            }

            # Try to execute Step 4
            logger.info("🔄 Executing Step 4...")
            result = await step4.execute(context)

            logger.info("✅ Step 4 executed successfully")
            logger.info(f"📊 Result keys: {list(result.keys()) if result else 'None'}")

        except Exception as e:
            logger.error(f"❌ Step 4 execution error: {str(e)}")
            import traceback
            logger.error(f"📋 Traceback: {traceback.format_exc()}")

        logger.info("\n🎯 Step 4 Data Debug Complete")

    except Exception as e:
        logger.error(f"❌ Error in data debug test: {str(e)}")
        import traceback
        logger.error(f"📋 Traceback: {traceback.format_exc()}")


if __name__ == "__main__":
    # Configure logging
    logger.remove()
    logger.add(sys.stderr, level="INFO", format="<green>{time:HH:mm:ss}</green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>")

    # Run the test
    asyncio.run(test_step4_data_sources())
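The debug script above only logs what it finds. When the same checks need to fail a CI run, they can be phrased as assertions instead. A minimal pytest-style sketch, assuming `pytest-asyncio` is installed and using the same `ComprehensiveUserDataProcessor` API exercised above:

```python
import pytest

from services.calendar_generation_datasource_framework.data_processing import ComprehensiveUserDataProcessor

REQUIRED_ONBOARDING_KEYS = {"posting_preferences", "posting_days", "optimal_times"}

@pytest.mark.asyncio
async def test_onboarding_data_has_step4_inputs():
    # Same call the debug script makes, but with hard assertions instead of logs.
    processor = ComprehensiveUserDataProcessor()
    data = await processor.get_comprehensive_user_data(1, 1)

    assert data, "No comprehensive data returned"
    onboarding = data.get("onboarding_data", {})
    missing = REQUIRED_ONBOARDING_KEYS - onboarding.keys()
    assert not missing, f"Step 4 inputs missing from onboarding data: {missing}"
```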
63
backend/test/test_step4_execution.py
Normal file
@@ -0,0 +1,63 @@
#!/usr/bin/env python3
"""
Test script for Step 4 execution
"""

import asyncio
import sys
import os

# Add the backend directory to the path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

async def test_step4_execution():
    """Test Step 4 execution directly."""

    print("🧪 Testing Step 4: Calendar Framework & Timeline Execution")

    try:
        # Import Step 4
        from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase2.step4_implementation import CalendarFrameworkStep

        # Create test context
        context = {
            "user_id": 1,
            "strategy_id": 1,
            "calendar_type": "monthly",
            "industry": "technology",
            "business_size": "sme",
            "step_results": {},
            "quality_scores": {}
        }

        # Create Step 4 instance
        step4 = CalendarFrameworkStep()
        print("✅ Step 4 instance created successfully")

        # Test Step 4 execution
        print("🔄 Executing Step 4...")
        result = await step4.run(context)

        if result:
            print("✅ Step 4 executed successfully!")
            print(f"Status: {result.get('status')}")
            print(f"Quality Score: {result.get('quality_score')}")
            print(f"Execution Time: {result.get('execution_time')}")

            if result.get('status') == 'error':
                print(f"❌ Step 4 Error: {result.get('error_message')}")
            else:
                print("📊 Step 4 Results:")
                print(f"  - Calendar Structure: {result.get('calendar_structure', {}).get('type')}")
                print(f"  - Timeline Config: {result.get('timeline_config', {}).get('total_weeks')} weeks")
                print(f"  - Duration Control: {result.get('duration_control', {}).get('validation_passed')}")
        else:
            print("❌ Step 4 returned None")

    except Exception as e:
        print(f"❌ Error testing Step 4 execution: {e}")
        import traceback
        print(f"📋 Traceback: {traceback.format_exc()}")

if __name__ == "__main__":
    asyncio.run(test_step4_execution())
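Note that this script calls `step4.run(context)` while the debug script above calls `step4.execute(context)`. This commit does not show the step base class, so the exact relationship is an assumption; a common pattern, sketched below purely as a hypothetical, is for `run()` to wrap `execute()` with timing and error capture, which would explain the `status`/`error_message`/`execution_time` keys this test reads back:

```python
import time

class StepBase:
    """Hypothetical base class: run() wraps execute() with timing and error capture.
    This is an illustration, not the actual ALwrity implementation."""

    async def execute(self, context: dict) -> dict:
        raise NotImplementedError

    async def run(self, context: dict) -> dict:
        start = time.time()
        try:
            result = await self.execute(context)
            result.setdefault("status", "completed")
        except Exception as e:
            result = {"status": "error", "error_message": str(e)}
        result["execution_time"] = f"{time.time() - start:.1f}s"
        return result
```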
245
backend/test/test_step4_implementation.py
Normal file
@@ -0,0 +1,245 @@
#!/usr/bin/env python3
"""
Test Script for Step 4 Implementation

This script tests the Step 4 (Calendar Framework and Timeline) implementation
to ensure it works correctly with real AI services and data processing.
"""

import asyncio
import sys
import os
from pathlib import Path

# Add the backend directory to the Python path
backend_dir = Path(__file__).parent
sys.path.insert(0, str(backend_dir))

from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase2.phase2_steps import CalendarFrameworkStep
from services.calendar_generation_datasource_framework.data_processing import ComprehensiveUserDataProcessor


async def test_step4_implementation():
    """Test Step 4 implementation with real data processing."""
    print("🧪 Testing Step 4: Calendar Framework and Timeline Implementation")

    try:
        # Initialize Step 4
        step4 = CalendarFrameworkStep()
        print("✅ Step 4 initialized successfully")

        # Initialize data processor
        data_processor = ComprehensiveUserDataProcessor()
        print("✅ Data processor initialized successfully")

        # Test context data
        context = {
            "user_id": 1,
            "strategy_id": 1,
            "calendar_type": "monthly",
            "industry": "technology",
            "business_size": "sme"
        }

        print(f"📊 Testing with context: {context}")

        # Execute Step 4
        print("🔄 Executing Step 4...")
        result = await step4.execute(context)

        # Validate results
        print("📋 Step 4 Results:")
        print(f"  - Step Number: {result.get('stepNumber')}")
        print(f"  - Step Name: {result.get('stepName')}")
        print(f"  - Quality Score: {result.get('qualityScore', 0):.2f}")
        print(f"  - Execution Time: {result.get('executionTime')}")
        print(f"  - Data Sources Used: {result.get('dataSourcesUsed')}")

        # Validate calendar structure
        calendar_structure = result.get('results', {}).get('calendarStructure', {})
        print(f"  - Calendar Type: {calendar_structure.get('type')}")
        print(f"  - Total Weeks: {calendar_structure.get('totalWeeks')}")
        print(f"  - Content Distribution: {calendar_structure.get('contentDistribution')}")

        # Validate timeline configuration
        timeline_config = result.get('results', {}).get('timelineConfiguration', {})
        print(f"  - Start Date: {timeline_config.get('startDate')}")
        print(f"  - End Date: {timeline_config.get('endDate')}")
        print(f"  - Total Days: {timeline_config.get('totalDays')}")
        print(f"  - Posting Days: {timeline_config.get('postingDays')}")

        # Validate quality gates
        duration_control = result.get('results', {}).get('durationControl', {})
        strategic_alignment = result.get('results', {}).get('strategicAlignment', {})

        print(f"  - Duration Accuracy: {duration_control.get('accuracyScore', 0):.1%}")
        print(f"  - Strategic Alignment: {strategic_alignment.get('alignmentScore', 0):.1%}")

        # Validate insights and recommendations
        insights = result.get('insights', [])
        recommendations = result.get('recommendations', [])

        print(f"  - Insights Count: {len(insights)}")
        print(f"  - Recommendations Count: {len(recommendations)}")

        # Quality validation
        quality_score = result.get('qualityScore', 0)
        if quality_score >= 0.85:
            print(f"✅ Quality Score: {quality_score:.2f} (Excellent)")
        elif quality_score >= 0.75:
            print(f"✅ Quality Score: {quality_score:.2f} (Good)")
        else:
            print(f"⚠️ Quality Score: {quality_score:.2f} (Needs Improvement)")

        print("✅ Step 4 implementation test completed successfully!")
        return True

    except Exception as e:
        print(f"❌ Error testing Step 4: {str(e)}")
        import traceback
        print(f"Traceback: {traceback.format_exc()}")
        return False


async def test_step4_integration():
    """Test Step 4 integration with the orchestrator."""
    print("\n🧪 Testing Step 4 Integration with Orchestrator")

    try:
        from services.calendar_generation_datasource_framework.prompt_chaining.orchestrator import PromptChainOrchestrator

        # Initialize orchestrator
        orchestrator = PromptChainOrchestrator()
        print("✅ Orchestrator initialized successfully")

        # Check if Step 4 is properly registered
        step4 = orchestrator.steps.get("step_04")
        if step4 and step4.name == "Calendar Framework & Timeline":
            print("✅ Step 4 properly registered in orchestrator")
        else:
            print("❌ Step 4 not properly registered in orchestrator")
            return False

        # Test context initialization
        context = await orchestrator._initialize_context(
            user_id=1,
            strategy_id=1,
            calendar_type="monthly",
            industry="technology",
            business_size="sme"
        )
        print("✅ Context initialization successful")

        # Test Step 4 execution through orchestrator
        print("🔄 Testing Step 4 execution through orchestrator...")
        step_result = await step4.execute(context)

        if step_result and step_result.get('stepNumber') == 4:
            print("✅ Step 4 execution through orchestrator successful")
            print(f"  - Quality Score: {step_result.get('qualityScore', 0):.2f}")
        else:
            print("❌ Step 4 execution through orchestrator failed")
            return False

        print("✅ Step 4 integration test completed successfully!")
        return True

    except Exception as e:
        print(f"❌ Error testing Step 4 integration: {str(e)}")
        import traceback
        print(f"Traceback: {traceback.format_exc()}")
        return False


async def test_step4_data_processing():
    """Test Step 4 data processing capabilities."""
    print("\n🧪 Testing Step 4 Data Processing")

    try:
        from services.calendar_generation_datasource_framework.data_processing import ComprehensiveUserDataProcessor

        # Initialize data processor
        data_processor = ComprehensiveUserDataProcessor()
        print("✅ Data processor initialized successfully")

        # Test comprehensive user data retrieval
        print("🔄 Testing comprehensive user data retrieval...")
        user_data = await data_processor.get_comprehensive_user_data(1, 1)

        if user_data:
            print("✅ Comprehensive user data retrieved successfully")
            print(f"  - User ID: {user_data.get('user_id')}")
            print(f"  - Strategy ID: {user_data.get('strategy_id')}")
            print(f"  - Industry: {user_data.get('industry')}")

            # Check for required data sections
            required_sections = ['onboarding_data', 'strategy_data', 'gap_analysis', 'ai_analysis']
            for section in required_sections:
                if section in user_data:
                    print(f"  - {section}: Available")
                else:
                    print(f"  - {section}: Missing")
        else:
            print("❌ Failed to retrieve comprehensive user data")
            return False

        print("✅ Step 4 data processing test completed successfully!")
        return True

    except Exception as e:
        print(f"❌ Error testing Step 4 data processing: {str(e)}")
        import traceback
        print(f"Traceback: {traceback.format_exc()}")
        return False


async def main():
    """Main test function."""
    print("🚀 Starting Step 4 Implementation Tests")
    print("=" * 50)

    # Run all tests
    tests = [
        test_step4_implementation(),
        test_step4_integration(),
        test_step4_data_processing()
    ]

    results = await asyncio.gather(*tests, return_exceptions=True)

    # Summarize results
    print("\n" + "=" * 50)
    print("📊 Test Results Summary")
    print("=" * 50)

    test_names = [
        "Step 4 Implementation",
        "Step 4 Integration",
        "Step 4 Data Processing"
    ]

    passed = 0
    total = len(results)

    for i, result in enumerate(results):
        if isinstance(result, Exception):
            print(f"❌ {test_names[i]}: Failed - {str(result)}")
        elif result:
            print(f"✅ {test_names[i]}: Passed")
            passed += 1
        else:
            print(f"❌ {test_names[i]}: Failed")

    print(f"\n🎯 Overall Results: {passed}/{total} tests passed")

    if passed == total:
        print("🎉 All tests passed! Step 4 implementation is ready for production.")
        return True
    else:
        print("⚠️ Some tests failed. Please review the implementation.")
        return False


if __name__ == "__main__":
    success = asyncio.run(main())
    sys.exit(0 if success else 1)
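One behavior of the `main()` above worth flagging: `asyncio.gather(...)` runs the three test coroutines concurrently, so their print output can interleave. A sequential sketch of the same logic, keeping each test's output grouped (same function names as above, `return_exceptions=True` mirrored by the try/except):

```python
async def main_sequential() -> bool:
    # Awaiting one test at a time trades concurrency for readable, grouped output.
    results = []
    for test in (test_step4_implementation, test_step4_integration, test_step4_data_processing):
        try:
            results.append(await test())
        except Exception as e:  # mirrors gather(..., return_exceptions=True)
            results.append(e)
    return all(r is True for r in results)
```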
88
backend/test/test_step5_debug.py
Normal file
@@ -0,0 +1,88 @@
#!/usr/bin/env python3
"""
Test script for Step 5 debugging
"""

import asyncio
import sys
import os
import time

# Add the backend directory to the path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

async def test_step5_debug():
    """Debug Step 5 execution specifically."""

    print("🧪 Debugging Step 5: Content Pillar Distribution")

    try:
        # Import Step 5
        from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase2.step5_implementation import ContentPillarDistributionStep

        # Create test context with data from previous steps
        context = {
            "user_id": 1,
            "strategy_id": 1,
            "calendar_type": "monthly",
            "industry": "technology",
            "business_size": "sme",
            "previous_step_results": {
                4: {
                    "results": {
                        "calendarStructure": {
                            "type": "monthly",
                            "total_weeks": 4,
                            "posting_days": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
                            "posting_frequency": {
                                "daily": 2,
                                "weekly": 10,
                                "monthly": 40
                            }
                        }
                    }
                }
            },
            "step_results": {},
            "quality_scores": {}
        }

        # Create Step 5 instance
        print("✅ Creating Step 5 instance...")
        step5 = ContentPillarDistributionStep()
        print("✅ Step 5 instance created successfully")

        # Test Step 5 execution with timing
        print("🔄 Executing Step 5...")
        start_time = time.time()

        result = await step5.run(context)

        execution_time = time.time() - start_time
        print(f"⏱️ Step 5 execution time: {execution_time:.2f} seconds")

        if result:
            print("✅ Step 5 executed successfully!")
            print(f"Status: {result.get('status', 'unknown')}")
            print(f"Quality Score: {result.get('quality_score', 0)}")
            print(f"Execution Time: {result.get('execution_time', 'unknown')}")

            if result.get('status') == 'error':
                print(f"❌ Step 5 Error: {result.get('error_message', 'Unknown error')}")
            else:
                print("📊 Step 5 Results:")
                results = result.get('results', {})
                print(f"  - Pillar Mapping: {results.get('pillarMapping', {}).get('distribution_balance', 0):.1%} balance")
                print(f"  - Theme Development: {results.get('themeDevelopment', {}).get('variety_score', 0):.1%} variety")
                print(f"  - Strategic Validation: {results.get('strategicValidation', {}).get('alignment_score', 0):.1%} alignment")
                print(f"  - Diversity Assurance: {results.get('diversityAssurance', {}).get('diversity_score', 0):.1%} diversity")
        else:
            print("❌ Step 5 returned None")

    except Exception as e:
        print(f"❌ Error testing Step 5: {e}")
        import traceback
        print(f"📋 Traceback: {traceback.format_exc()}")

if __name__ == "__main__":
    asyncio.run(test_step5_debug())
122
backend/test/test_step5_orchestrator_context.py
Normal file
@@ -0,0 +1,122 @@
#!/usr/bin/env python3
"""
Test script for Step 5 with orchestrator context structure
"""

import asyncio
import sys
import os
import time

# Add the backend directory to the path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

async def test_step5_orchestrator_context():
    """Test Step 5 with orchestrator context structure."""

    print("🧪 Testing Step 5 with orchestrator context structure")

    try:
        # Import Step 5
        from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase2.step5_implementation import ContentPillarDistributionStep

        # Create context exactly as the orchestrator does
        context = {
            "user_id": 1,
            "strategy_id": 1,
            "calendar_type": "monthly",
            "industry": "technology",
            "business_size": "sme",
            "user_data": {
                "user_id": 1,
                "strategy_id": 1,
                "industry": "technology",
                "onboarding_data": {
                    "posting_preferences": {
                        "daily": 2,
                        "weekly": 10,
                        "monthly": 40
                    },
                    "posting_days": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
                    "optimal_times": ["09:00", "12:00", "15:00", "18:00", "20:00"]
                },
                "strategy_data": {
                    "content_pillars": [
                        "AI and Machine Learning",
                        "Digital Transformation",
                        "Innovation and Technology Trends",
                        "Business Strategy and Growth"
                    ],
                    "business_objectives": [
                        "Increase brand awareness by 40%",
                        "Generate 500 qualified leads per month",
                        "Establish thought leadership in AI/ML space"
                    ]
                }
            },
            "step_results": {
                "step_04": {
                    "stepNumber": 4,
                    "stepName": "Calendar Framework & Timeline",
                    "results": {
                        "calendarStructure": {
                            "type": "monthly",
                            "total_weeks": 4,
                            "posting_days": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
                            "posting_frequency": {
                                "daily": 2,
                                "weekly": 10,
                                "monthly": 40
                            },
                            "industry": "technology",
                            "business_size": "sme"
                        }
                    },
                    "qualityScore": 1.0,
                    "executionTime": "2.9s"
                }
            },
            "quality_scores": {},
            "current_step": 5,
            "phase": "phase_2_structure"
        }

        # Create Step 5 instance
        print("✅ Creating Step 5 instance...")
        step5 = ContentPillarDistributionStep()
        print("✅ Step 5 instance created successfully")

        # Test Step 5 execution with timing
        print("🔄 Executing Step 5...")
        start_time = time.time()

        result = await step5.run(context)

        execution_time = time.time() - start_time
        print(f"⏱️ Step 5 execution time: {execution_time:.2f} seconds")

        if result:
            print("✅ Step 5 executed successfully!")
            print(f"Status: {result.get('status', 'unknown')}")
            print(f"Quality Score: {result.get('quality_score', 0)}")
            print(f"Execution Time: {result.get('execution_time', 'unknown')}")

            if result.get('status') == 'error':
                print(f"❌ Step 5 Error: {result.get('error_message', 'Unknown error')}")
            else:
                print("📊 Step 5 Results:")
                results = result.get('results', {})
                print(f"  - Pillar Mapping: {results.get('pillarMapping', {}).get('distribution_balance', 0):.1%} balance")
                print(f"  - Theme Development: {results.get('themeDevelopment', {}).get('variety_score', 0):.1%} variety")
                print(f"  - Strategic Validation: {results.get('strategicValidation', {}).get('alignment_score', 0):.1%} alignment")
                print(f"  - Diversity Assurance: {results.get('diversityAssurance', {}).get('diversity_score', 0):.1%} diversity")
        else:
            print("❌ Step 5 returned None")

    except Exception as e:
        print(f"❌ Error testing Step 5: {e}")
        import traceback
        print(f"📋 Traceback: {traceback.format_exc()}")

if __name__ == "__main__":
    asyncio.run(test_step5_orchestrator_context())
127
backend/test/test_step5_orchestrator_direct.py
Normal file
@@ -0,0 +1,127 @@
#!/usr/bin/env python3
"""
Test script for Step 5 with orchestrator's direct step execution
"""

import asyncio
import sys
import os
import time

# Add the backend directory to the path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

async def test_step5_orchestrator_direct():
    """Test Step 5 with orchestrator's direct step execution."""

    print("🧪 Testing Step 5 with orchestrator's direct step execution")

    try:
        # Import orchestrator and Step 5
        from services.calendar_generation_datasource_framework.prompt_chaining.orchestrator import PromptChainOrchestrator
        from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase2.step5_implementation import ContentPillarDistributionStep

        # Create orchestrator
        print("✅ Creating orchestrator...")
        orchestrator = PromptChainOrchestrator()
        print("✅ Orchestrator created successfully")

        # Get Step 5 from orchestrator
        step5 = orchestrator.steps["step_05"]
        print(f"✅ Got Step 5 from orchestrator: {type(step5)}")

        # Create context exactly as the orchestrator does
        context = {
            "user_id": 1,
            "strategy_id": 1,
            "calendar_type": "monthly",
            "industry": "technology",
            "business_size": "sme",
            "user_data": {
                "user_id": 1,
                "strategy_id": 1,
                "industry": "technology",
                "onboarding_data": {
                    "posting_preferences": {
                        "daily": 2,
                        "weekly": 10,
                        "monthly": 40
                    },
                    "posting_days": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
                    "optimal_times": ["09:00", "12:00", "15:00", "18:00", "20:00"]
                },
                "strategy_data": {
                    "content_pillars": [
                        "AI and Machine Learning",
                        "Digital Transformation",
                        "Innovation and Technology Trends",
                        "Business Strategy and Growth"
                    ],
                    "business_objectives": [
                        "Increase brand awareness by 40%",
                        "Generate 500 qualified leads per month",
                        "Establish thought leadership in AI/ML space"
                    ]
                }
            },
            "step_results": {
                "step_04": {
                    "stepNumber": 4,
                    "stepName": "Calendar Framework & Timeline",
                    "results": {
                        "calendarStructure": {
                            "type": "monthly",
                            "total_weeks": 4,
                            "posting_days": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
                            "posting_frequency": {
                                "daily": 2,
                                "weekly": 10,
                                "monthly": 40
                            },
                            "industry": "technology",
                            "business_size": "sme"
                        }
                    },
                    "qualityScore": 1.0,
                    "executionTime": "2.9s"
                }
            },
            "quality_scores": {},
            "current_step": 5,
            "phase": "phase_2_structure"
        }

        # Test Step 5 execution with timing
        print("🔄 Executing Step 5 with orchestrator's step...")
        start_time = time.time()

        result = await step5.run(context)

        execution_time = time.time() - start_time
        print(f"⏱️ Step 5 execution time: {execution_time:.2f} seconds")

        if result:
            print("✅ Step 5 executed successfully!")
            print(f"Status: {result.get('status', 'unknown')}")
            print(f"Quality Score: {result.get('quality_score', 0)}")
            print(f"Execution Time: {result.get('execution_time', 'unknown')}")

            if result.get('status') == 'error':
                print(f"❌ Step 5 Error: {result.get('error_message', 'Unknown error')}")
            else:
                print("📊 Step 5 Results:")
                step_result = result.get('result', {})
                print(f"  - Pillar Mapping: {step_result.get('pillarMapping', {}).get('distribution_balance', 0):.1%} balance")
                print(f"  - Theme Development: {step_result.get('themeDevelopment', {}).get('variety_score', 0):.1%} variety")
                print(f"  - Strategic Validation: {step_result.get('strategicValidation', {}).get('alignment_score', 0):.1%} alignment")
                print(f"  - Diversity Assurance: {step_result.get('diversityAssurance', {}).get('diversity_score', 0):.1%} diversity")
        else:
            print("❌ Step 5 returned None")

    except Exception as e:
        print(f"❌ Error testing Step 5: {e}")
        import traceback
        print(f"📋 Traceback: {traceback.format_exc()}")

if __name__ == "__main__":
    asyncio.run(test_step5_orchestrator_direct())
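A small robustness note on the script above: it indexes `orchestrator.steps["step_05"]` directly, which raises a bare `KeyError` when the step is not registered, whereas the Step 4 integration test earlier used `.get(...)` and printed a readable failure. A sketch of the more defensive lookup, using the same `orchestrator` object:

```python
# Defensive variant of the dict lookup: report what is registered before exiting.
step5 = orchestrator.steps.get("step_05")
if step5 is None:
    print(f"❌ Step 5 not registered; available steps: {sorted(orchestrator.steps)}")
    raise SystemExit(1)
```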
303
backend/test/test_steps_1_8.py
Normal file
@@ -0,0 +1,303 @@
#!/usr/bin/env python3
"""
Test Script for Steps 1-8 of Calendar Generation Framework

This script tests the first 8 steps of the calendar generation process
with real data sources and no fallbacks.
"""

import asyncio
import sys
import os
from typing import Dict, Any
from loguru import logger

# Add the backend directory to the path
backend_dir = os.path.dirname(os.path.abspath(__file__))
if backend_dir not in sys.path:
    sys.path.insert(0, backend_dir)

# Add the services directory to the path
services_dir = os.path.join(backend_dir, "services")
if services_dir not in sys.path:
    sys.path.insert(0, services_dir)

async def test_steps_1_8():
    """Test Steps 1-8 of the calendar generation framework."""

    try:
        logger.info("🚀 Starting test of Steps 1-8")

        # Test data
        test_context = {
            "user_id": 1,
            "strategy_id": 1,
            "calendar_duration": 7,  # 1 week
            "posting_preferences": {
                "posting_frequency": "daily",
                "preferred_days": ["monday", "wednesday", "friday"],
                "preferred_times": ["09:00", "12:00", "15:00"],
                "content_per_day": 2
            }
        }

        # Test Step 1: Content Strategy Analysis
        logger.info("📋 Testing Step 1: Content Strategy Analysis")
        try:
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase1.phase1_steps import ContentStrategyAnalysisStep
            from services.calendar_generation_datasource_framework.data_processing.strategy_data import StrategyDataProcessor

            # Create strategy processor with mock data for testing
            strategy_processor = StrategyDataProcessor()

            # For testing, we create simple mock strategy data;
            # in a real scenario this would come from the database.
            mock_strategy_data = {
                "strategy_id": 1,
                "strategy_name": "Test Strategy",
                "industry": "technology",
                "target_audience": {
                    "primary": "Tech professionals",
                    "secondary": "Business leaders",
                    "demographics": {"age_range": "25-45", "location": "Global"}
                },
                "content_pillars": [
                    "AI and Machine Learning",
                    "Digital Transformation",
                    "Innovation and Technology Trends",
                    "Business Strategy and Growth"
                ],
                "business_objectives": [
                    "Increase brand awareness by 40%",
                    "Generate 500 qualified leads per month",
                    "Establish thought leadership"
                ],
                "target_metrics": {"awareness": "website_traffic", "leads": "lead_generation"},
                "quality_indicators": {"data_completeness": 0.8, "strategic_alignment": 0.9}
            }

            # Mock the get_strategy_data method for testing
            async def mock_get_strategy_data(strategy_id):
                return mock_strategy_data

            strategy_processor.get_strategy_data = mock_get_strategy_data

            # Mock the validate_data method
            async def mock_validate_data(data):
                return {
                    "quality_score": 0.85,
                    "missing_fields": [],
                    "recommendations": []
                }

            strategy_processor.validate_data = mock_validate_data

            step1 = ContentStrategyAnalysisStep()
            step1.strategy_processor = strategy_processor

            result1 = await step1.execute(test_context)
            logger.info(f"✅ Step 1 completed: {result1.get('status')}")
            logger.info(f"   Quality Score: {result1.get('quality_score')}")

        except Exception as e:
            logger.error(f"❌ Step 1 failed: {str(e)}")
            return False

        # Test Step 2: Gap Analysis
        logger.info("📋 Testing Step 2: Gap Analysis")
        try:
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase1.phase1_steps import GapAnalysisStep
            from services.calendar_generation_datasource_framework.data_processing.gap_analysis_data import GapAnalysisDataProcessor

            # Create gap processor with mock data for testing
            gap_processor = GapAnalysisDataProcessor()

            # Mock gap analysis data
            mock_gap_data = {
                "content_gaps": [
                    {"topic": "AI Ethics", "priority": "high", "impact_score": 0.9},
                    {"topic": "Digital Transformation ROI", "priority": "medium", "impact_score": 0.7},
                    {"topic": "Cloud Migration Strategies", "priority": "high", "impact_score": 0.8}
                ],
                "keyword_opportunities": [
                    {"keyword": "AI ethics in business", "search_volume": 5000, "competition": "low"},
                    {"keyword": "digital transformation ROI", "search_volume": 8000, "competition": "medium"},
                    {"keyword": "cloud migration guide", "search_volume": 12000, "competition": "high"}
                ],
                "competitor_insights": {
                    "top_competitors": ["Competitor A", "Competitor B"],
                    "content_gaps": ["AI Ethics", "Practical ROI"],
                    "opportunities": ["Case Studies", "Implementation Guides"]
                },
                "opportunities": [
                    {"type": "content", "topic": "AI Ethics", "priority": "high"},
                    {"type": "content", "topic": "ROI Analysis", "priority": "medium"}
                ],
                "recommendations": [
                    "Create comprehensive AI ethics guide",
                    "Develop ROI calculator for digital transformation",
                    "Publish case studies on successful implementations"
                ]
            }

            # Mock the get_gap_analysis_data method
            async def mock_get_gap_analysis_data(user_id):
                return mock_gap_data

            gap_processor.get_gap_analysis_data = mock_get_gap_analysis_data

            step2 = GapAnalysisStep()
            step2.gap_processor = gap_processor

            result2 = await step2.execute(test_context)
            logger.info(f"✅ Step 2 completed: {result2.get('status')}")
            logger.info(f"   Quality Score: {result2.get('quality_score')}")

        except Exception as e:
            logger.error(f"❌ Step 2 failed: {str(e)}")
            return False

        # Test Step 3: Audience & Platform Strategy
        logger.info("📋 Testing Step 3: Audience & Platform Strategy")
        try:
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase1.phase1_steps import AudiencePlatformStrategyStep
            from services.calendar_generation_datasource_framework.data_processing.comprehensive_user_data import ComprehensiveUserDataProcessor

            # Create comprehensive processor with mock data for testing
            comprehensive_processor = ComprehensiveUserDataProcessor()

            # Mock comprehensive user data
            mock_user_data = {
                "user_id": 1,
                "onboarding_data": {
                    "industry": "technology",
                    "business_size": "enterprise",
                    "target_audience": {
                        "primary": "Tech professionals",
                        "secondary": "Business leaders",
                        "demographics": {"age_range": "25-45", "location": "Global"}
                    },
                    "platform_preferences": {
                        "LinkedIn": {"priority": "high", "content_focus": "professional"},
                        "Twitter": {"priority": "medium", "content_focus": "news"},
                        "Blog": {"priority": "high", "content_focus": "in-depth"}
                    }
                },
                "performance_data": {
                    "LinkedIn": {"engagement_rate": 0.08, "reach": 10000},
                    "Twitter": {"engagement_rate": 0.05, "reach": 5000},
                    "Blog": {"engagement_rate": 0.12, "reach": 8000}
                },
                "strategy_data": mock_strategy_data
            }

            # Mock the get_comprehensive_user_data method
            async def mock_get_comprehensive_user_data(user_id, strategy_id):
                return mock_user_data

            comprehensive_processor.get_comprehensive_user_data = mock_get_comprehensive_user_data

            step3 = AudiencePlatformStrategyStep()
            step3.comprehensive_processor = comprehensive_processor

            result3 = await step3.execute(test_context)
            logger.info(f"✅ Step 3 completed: {result3.get('status')}")
            logger.info(f"   Quality Score: {result3.get('quality_score')}")

        except Exception as e:
            logger.error(f"❌ Step 3 failed: {str(e)}")
            return False

        # Test Step 4: Calendar Framework
        logger.info("📋 Testing Step 4: Calendar Framework")
        try:
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase2.step4_implementation import CalendarFrameworkStep

            step4 = CalendarFrameworkStep()
            result4 = await step4.execute(test_context)
            logger.info(f"✅ Step 4 completed: {result4.get('status')}")
            logger.info(f"   Quality Score: {result4.get('quality_score')}")

        except Exception as e:
            logger.error(f"❌ Step 4 failed: {str(e)}")
            return False

        # Test Step 5: Content Pillar Distribution
        logger.info("📋 Testing Step 5: Content Pillar Distribution")
        try:
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase2.step5_implementation import ContentPillarDistributionStep

            step5 = ContentPillarDistributionStep()
            result5 = await step5.execute(test_context)
            logger.info(f"✅ Step 5 completed: {result5.get('status')}")
            logger.info(f"   Quality Score: {result5.get('quality_score')}")

        except Exception as e:
            logger.error(f"❌ Step 5 failed: {str(e)}")
            return False

        # Test Step 6: Platform-Specific Strategy
        logger.info("📋 Testing Step 6: Platform-Specific Strategy")
        try:
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase2.step6_implementation import PlatformSpecificStrategyStep

            step6 = PlatformSpecificStrategyStep()
            result6 = await step6.execute(test_context)
            logger.info(f"✅ Step 6 completed: {result6.get('status')}")
            logger.info(f"   Quality Score: {result6.get('quality_score')}")

        except Exception as e:
            logger.error(f"❌ Step 6 failed: {str(e)}")
            return False

        # Test Step 7: Weekly Theme Development
        logger.info("📋 Testing Step 7: Weekly Theme Development")
        try:
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase3.step7_implementation import WeeklyThemeDevelopmentStep

            step7 = WeeklyThemeDevelopmentStep()
            result7 = await step7.execute(test_context)
            logger.info(f"✅ Step 7 completed: {result7.get('status')}")
            logger.info(f"   Quality Score: {result7.get('quality_score')}")

        except Exception as e:
            logger.error(f"❌ Step 7 failed: {str(e)}")
            return False

        # Test Step 8: Daily Content Planning
        logger.info("📋 Testing Step 8: Daily Content Planning")
        try:
            from services.calendar_generation_datasource_framework.prompt_chaining.steps.phase3.step8_implementation import DailyContentPlanningStep

            step8 = DailyContentPlanningStep()
            result8 = await step8.execute(test_context)
            logger.info(f"✅ Step 8 completed: {result8.get('status')}")
            logger.info(f"   Quality Score: {result8.get('quality_score')}")

        except Exception as e:
            logger.error(f"❌ Step 8 failed: {str(e)}")
            return False

        logger.info("🎉 All Steps 1-8 completed successfully!")
        logger.info("📝 Note: This test uses mock data for database services.")
        logger.info("📝 In production, real database services would be used.")
        return True

    except Exception as e:
        logger.error(f"❌ Test failed with error: {str(e)}")
        return False

if __name__ == "__main__":
    # Configure logging
    logger.remove()
    logger.add(sys.stderr, level="INFO", format="<green>{time:HH:mm:ss}</green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>")

    # Run the test
    success = asyncio.run(test_steps_1_8())

    if success:
        logger.info("✅ Test completed successfully!")
        sys.exit(0)
    else:
        logger.error("❌ Test failed!")
        sys.exit(1)
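The mocks in this script are hand-written async closures assigned onto the processor objects. `unittest.mock.AsyncMock` from the standard library gives the same behavior plus call recording, which lets the test assert that a step actually consulted its processor. A sketch against Step 1, reusing the names defined above (to be placed inside `test_steps_1_8()`):

```python
from unittest.mock import AsyncMock

# Equivalent to the hand-written closures above, with call recording added.
strategy_processor.get_strategy_data = AsyncMock(return_value=mock_strategy_data)
strategy_processor.validate_data = AsyncMock(
    return_value={"quality_score": 0.85, "missing_fields": [], "recommendations": []}
)

result1 = await step1.execute(test_context)
strategy_processor.get_strategy_data.assert_awaited()  # verify the step used the processor
```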
117
backend/test/test_strategy_data_structure.py
Normal file
@@ -0,0 +1,117 @@
"""
Test script to verify strategy data structure matches frontend expectations
"""

import asyncio
import json
from api.content_planning.services.strategy_service import StrategyService

async def test_strategy_data_structure():
    """Test the strategy data structure to ensure it matches frontend expectations."""

    print("🧪 Testing Strategy Data Structure")
    print("=" * 50)

    # Initialize service
    service = StrategyService()

    # Get strategies
    result = await service.get_strategies(user_id=1)

    print("📊 Backend Response Structure:")
    print(json.dumps(result, indent=2, default=str))

    # Check if the strategies array exists
    if "strategies" in result and len(result["strategies"]) > 0:
        strategy = result["strategies"][0]

        print("\n✅ Frontend Expected Structure Check:")
        print("-" * 40)

        # Check for ai_recommendations
        if "ai_recommendations" in strategy:
            ai_rec = strategy["ai_recommendations"]
            print("✅ ai_recommendations: Present")

            # Check market_score
            if "market_score" in ai_rec:
                print(f"✅ market_score: {ai_rec['market_score']}")
            else:
                print("❌ market_score: Missing")

            # Check strengths
            if "strengths" in ai_rec:
                print(f"✅ strengths: {len(ai_rec['strengths'])} items")
            else:
                print("❌ strengths: Missing")

            # Check weaknesses
            if "weaknesses" in ai_rec:
                print(f"✅ weaknesses: {len(ai_rec['weaknesses'])} items")
            else:
                print("❌ weaknesses: Missing")

            # Check competitive_advantages
            if "competitive_advantages" in ai_rec:
                print(f"✅ competitive_advantages: {len(ai_rec['competitive_advantages'])} items")
            else:
                print("❌ competitive_advantages: Missing")

            # Check strategic_risks
            if "strategic_risks" in ai_rec:
                print(f"✅ strategic_risks: {len(ai_rec['strategic_risks'])} items")
            else:
                print("❌ strategic_risks: Missing")

        else:
            print("❌ ai_recommendations: Missing")

        # Check for required strategy fields
        required_fields = ["id", "name", "industry", "target_audience", "content_pillars"]
        for field in required_fields:
            if field in strategy:
                print(f"✅ {field}: Present")
            else:
                print(f"❌ {field}: Missing")

        print("\n🎯 Frontend Data Mapping Validation:")
        print("-" * 40)

        # Validate the specific structure expected by the frontend
        if "ai_recommendations" in strategy:
            ai_rec = strategy["ai_recommendations"]

            # Check market positioning structure
            if "market_score" in ai_rec:
                print("✅ Frontend can access: strategy.ai_recommendations.market_score")

            # Check strengths structure
            if "strengths" in ai_rec and isinstance(ai_rec["strengths"], list):
                print("✅ Frontend can access: strategy.ai_recommendations.strengths")

            # Check weaknesses structure
            if "weaknesses" in ai_rec and isinstance(ai_rec["weaknesses"], list):
                print("✅ Frontend can access: strategy.ai_recommendations.weaknesses")

            # Check competitive advantages structure
            if "competitive_advantages" in ai_rec and isinstance(ai_rec["competitive_advantages"], list):
                print("✅ Frontend can access: strategy.ai_recommendations.competitive_advantages")

            # Check strategic risks structure
            if "strategic_risks" in ai_rec and isinstance(ai_rec["strategic_risks"], list):
                print("✅ Frontend can access: strategy.ai_recommendations.strategic_risks")

        print("\n🎉 Data Structure Validation Complete!")
        print("=" * 50)

        return True
    else:
        print("❌ No strategies found in response")
        return False

if __name__ == "__main__":
    success = asyncio.run(test_strategy_data_structure())
    if success:
        print("✅ All tests passed! Backend data structure matches frontend expectations.")
    else:
        print("❌ Tests failed! Backend data structure needs adjustment.")
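The field-by-field if/else blocks above are deliberately explicit; the same report can be table-driven once the field names stabilize. A compact sketch over the same names:

```python
def report_ai_recommendations(ai_rec: dict) -> None:
    # market_score is a scalar; the remaining fields are expected to be lists.
    print(f"{'✅' if 'market_score' in ai_rec else '❌'} market_score")
    for field in ["strengths", "weaknesses", "competitive_advantages", "strategic_risks"]:
        value = ai_rec.get(field)
        if isinstance(value, list):
            print(f"✅ {field}: {len(value)} items")
        else:
            print(f"❌ {field}: Missing")
```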
276
backend/test/test_subscription_system.py
Normal file
@@ -0,0 +1,276 @@
"""
Test Script for Subscription System
Tests the core functionality of the usage-based subscription system.
"""

import sys
import os
from pathlib import Path
import asyncio
import json

# Add the backend directory to the Python path
backend_dir = Path(__file__).parent
sys.path.insert(0, str(backend_dir))

from sqlalchemy.orm import sessionmaker
from loguru import logger

from services.database import engine
from services.pricing_service import PricingService
from services.usage_tracking_service import UsageTrackingService
from models.subscription_models import APIProvider, SubscriptionTier

async def test_pricing_service():
    """Test the pricing service functionality."""

    logger.info("🧪 Testing Pricing Service...")

    SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
    db = SessionLocal()

    try:
        pricing_service = PricingService(db)

        # Test cost calculation
        cost_data = pricing_service.calculate_api_cost(
            provider=APIProvider.GEMINI,
            model_name="gemini-2.5-flash",
            tokens_input=1000,
            tokens_output=500,
            request_count=1
        )

        logger.info(f"✅ Cost calculation: {cost_data}")

        # Test user limits
        limits = pricing_service.get_user_limits("test_user")
        logger.info(f"✅ User limits: {limits}")

        # Test usage limit checking
        can_proceed, message, usage_info = pricing_service.check_usage_limits(
            user_id="test_user",
            provider=APIProvider.GEMINI,
            tokens_requested=100
        )

        logger.info(f"✅ Usage check: {can_proceed} - {message}")
        logger.info(f"   Usage info: {usage_info}")

        return True

    except Exception as e:
        logger.error(f"❌ Pricing service test failed: {e}")
        return False
    finally:
        db.close()

async def test_usage_tracking():
    """Test the usage tracking service."""

    logger.info("🧪 Testing Usage Tracking Service...")

    SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
    db = SessionLocal()

    try:
        usage_service = UsageTrackingService(db)

        # Test tracking an API usage
        result = await usage_service.track_api_usage(
            user_id="test_user",
            provider=APIProvider.GEMINI,
            endpoint="/api/generate",
            method="POST",
            model_used="gemini-2.5-flash",
            tokens_input=500,
            tokens_output=300,
            response_time=1.5,
            status_code=200
        )

        logger.info(f"✅ Usage tracking result: {result}")

        # Test getting usage stats
        stats = usage_service.get_user_usage_stats("test_user")
        logger.info(f"✅ Usage stats: {json.dumps(stats, indent=2)}")

        # Test usage trends
        trends = usage_service.get_usage_trends("test_user", 3)
        logger.info(f"✅ Usage trends: {json.dumps(trends, indent=2)}")

        return True

    except Exception as e:
        logger.error(f"❌ Usage tracking test failed: {e}")
        return False
    finally:
        db.close()

async def test_limit_enforcement():
    """Test usage limit enforcement."""

    logger.info("🧪 Testing Limit Enforcement...")

    SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
    db = SessionLocal()

    try:
        usage_service = UsageTrackingService(db)

        # Test multiple API calls to approach limits
        for i in range(5):
            result = await usage_service.track_api_usage(
                user_id="test_user_limits",
                provider=APIProvider.GEMINI,
                endpoint="/api/generate",
                method="POST",
                model_used="gemini-2.5-flash",
                tokens_input=1000,
                tokens_output=800,
                response_time=2.0,
                status_code=200
            )
            logger.info(f"Call {i+1}: {result}")

        # Check if limits are being enforced
        can_proceed, message, usage_info = await usage_service.enforce_usage_limits(
            user_id="test_user_limits",
            provider=APIProvider.GEMINI,
            tokens_requested=5000
        )

        logger.info(f"✅ Limit enforcement: {can_proceed} - {message}")
        logger.info(f"   Usage info: {usage_info}")

        return True

    except Exception as e:
        logger.error(f"❌ Limit enforcement test failed: {e}")
        return False
    finally:
        db.close()

def test_database_tables():
    """Test that all subscription tables exist."""

    logger.info("🧪 Testing Database Tables...")

    try:
        from sqlalchemy import text

        with engine.connect() as conn:
            # Check for subscription tables
            tables_query = text("""
                SELECT name FROM sqlite_master
                WHERE type='table' AND (
                    name LIKE '%subscription%' OR
                    name LIKE '%usage%' OR
                    name LIKE '%pricing%' OR
                    name LIKE '%billing%'
                )
                ORDER BY name
            """)

            result = conn.execute(tables_query)
            tables = result.fetchall()

            expected_tables = [
                'api_provider_pricing',
                'api_usage_logs',
                'billing_history',
                'subscription_plans',
                'usage_alerts',
                'usage_summaries',
                'user_subscriptions'
            ]

            found_tables = [t[0] for t in tables]
            logger.info(f"Found tables: {found_tables}")

            missing_tables = [t for t in expected_tables if t not in found_tables]
            if missing_tables:
                logger.error(f"❌ Missing tables: {missing_tables}")
                return False

            # Check table data
            for table in ['subscription_plans', 'api_provider_pricing']:
                count_query = text(f"SELECT COUNT(*) FROM {table}")
                result = conn.execute(count_query)
                count = result.fetchone()[0]
                logger.info(f"✅ {table}: {count} records")

            return True

    except Exception as e:
        logger.error(f"❌ Database tables test failed: {e}")
        return False

async def run_comprehensive_test():
    """Run the comprehensive test suite."""

    logger.info("🚀 Starting Subscription System Comprehensive Test")
    logger.info("="*60)

    test_results = {}

    # Test 1: Database Tables
    logger.info("\n1. Testing Database Tables...")
    test_results['database_tables'] = test_database_tables()

    # Test 2: Pricing Service
    logger.info("\n2. Testing Pricing Service...")
    test_results['pricing_service'] = await test_pricing_service()

    # Test 3: Usage Tracking
    logger.info("\n3. Testing Usage Tracking...")
    test_results['usage_tracking'] = await test_usage_tracking()

    # Test 4: Limit Enforcement
    logger.info("\n4. Testing Limit Enforcement...")
    test_results['limit_enforcement'] = await test_limit_enforcement()

    # Summary
    logger.info("\n" + "="*60)
    logger.info("TEST RESULTS SUMMARY")
    logger.info("="*60)

    passed = sum(1 for result in test_results.values() if result)
    total = len(test_results)

    for test_name, result in test_results.items():
        status = "✅ PASS" if result else "❌ FAIL"
        logger.info(f"{test_name.upper().replace('_', ' ')}: {status}")

    logger.info(f"\nOverall: {passed}/{total} tests passed")

    if passed == total:
        logger.info("🎉 All tests passed! Subscription system is ready.")

        logger.info("\n" + "="*60)
        logger.info("NEXT STEPS:")
        logger.info("="*60)
        logger.info("1. Start the FastAPI server:")
        logger.info("   cd backend && python start_alwrity_backend.py")
        logger.info("\n2. Test the API endpoints:")
        logger.info("   GET http://localhost:8000/api/subscription/plans")
        logger.info("   GET http://localhost:8000/api/subscription/pricing")
        logger.info("   GET http://localhost:8000/api/subscription/usage/test_user")
        logger.info("\n3. Integrate with your frontend dashboard")
        logger.info("4. Set up user authentication/identification")
        logger.info("5. Configure payment processing (Stripe, etc.)")
        logger.info("="*60)

        return True
    else:
        logger.error("❌ Some tests failed. Please check the errors above.")
        return False

if __name__ == "__main__":
    # Run the comprehensive test
    success = asyncio.run(run_comprehensive_test())

    if not success:
        sys.exit(1)

    logger.info("✅ Test completed successfully!")
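For orientation, the figure returned by `calculate_api_cost` above generally reduces to per-token rate arithmetic. A minimal sketch, assuming per-1K-token input/output rates; the real rates live in the `api_provider_pricing` table, and the sample numbers below are illustrative only, not ALwrity's actual pricing:

```python
def estimate_cost(tokens_input: int, tokens_output: int,
                  rate_in_per_1k: float, rate_out_per_1k: float) -> float:
    """Blended cost of one call, assuming per-1K-token pricing (illustrative)."""
    return (tokens_input / 1000) * rate_in_per_1k + (tokens_output / 1000) * rate_out_per_1k

# e.g. 1000 input + 500 output tokens at $0.075 / $0.30 per 1K tokens (made-up rates)
print(f"${estimate_cost(1000, 500, 0.075, 0.30):.4f}")  # -> $0.2250
```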
82
backend/test/test_user_data.py
Normal file
@@ -0,0 +1,82 @@
#!/usr/bin/env python3
"""
Test script for user data service
"""

import sys
import os

# Add the backend directory to the path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from services.database import init_database, get_db_session
from services.user_data_service import UserDataService
from loguru import logger

def test_user_data():
    """Test the user data service functionality."""

    print("👤 Testing User Data Service")
    print("=" * 50)

    try:
        # Initialize database
        print("📊 Initializing database...")
        init_database()
        print("✅ Database initialized successfully")

        # Test fetching user website URL
        print("\n🌐 Testing website URL fetching...")
        db_session = get_db_session()
        if db_session:
            try:
                user_data_service = UserDataService(db_session)
                website_url = user_data_service.get_user_website_url()

                if website_url:
                    print(f"✅ Found website URL: {website_url}")
                else:
                    print("⚠️ No website URL found in database")
                    print("   This is expected if no onboarding has been completed yet")

                # Test getting full onboarding data
                print("\n📋 Testing full onboarding data...")
                onboarding_data = user_data_service.get_user_onboarding_data()

                if onboarding_data:
                    print("✅ Found onboarding data:")
                    print(f"   Session ID: {onboarding_data['session']['id']}")
                    print(f"   Current Step: {onboarding_data['session']['current_step']}")
                    print(f"   Progress: {onboarding_data['session']['progress']}")

                    if onboarding_data['website_analysis']:
                        print(f"   Website URL: {onboarding_data['website_analysis']['website_url']}")
                        print(f"   Analysis Status: {onboarding_data['website_analysis']['status']}")
                    else:
                        print("   No website analysis found")

                    print(f"   API Keys: {len(onboarding_data['api_keys'])} configured")

                    if onboarding_data['research_preferences']:
                        print("   Research preferences configured")
                    else:
                        print("   No research preferences found")
                else:
                    print("⚠️ No onboarding data found")
                    print("   This is expected if no onboarding has been completed yet")

            except Exception as e:
                print(f"❌ Database error: {str(e)}")
            finally:
                db_session.close()
        else:
            print("❌ Failed to get database session")

        print("\n🎉 User Data Service Test Completed!")

    except Exception as e:
        print(f"❌ Test failed: {str(e)}")
        logger.error(f"Test failed: {str(e)}")

if __name__ == "__main__":
    test_user_data()
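The manual session handling above (null check, nested try/finally, explicit `close()`) can be factored into a context manager. A small sketch reusing the same `get_db_session` helper:

```python
from contextlib import contextmanager

from services.database import get_db_session

@contextmanager
def db_session():
    """Yield a session and guarantee it is closed afterwards."""
    session = get_db_session()
    if session is None:
        raise RuntimeError("Failed to get database session")
    try:
        yield session
    finally:
        session.close()

# Usage: replaces the manual close in test_user_data()
# with db_session() as session:
#     service = UserDataService(session)
```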
91
backend/test/validate_database.py
Normal file
@@ -0,0 +1,91 @@
#!/usr/bin/env python3
"""
Database validation script for billing system
"""
import sqlite3
from datetime import datetime

def validate_database():
    conn = sqlite3.connect('alwrity.db')
    cursor = conn.cursor()

    print('=== BILLING DATABASE VALIDATION ===')
    print(f'Validation timestamp: {datetime.now()}')
    print()

    # Check subscription-related tables
    cursor.execute("""
        SELECT name FROM sqlite_master
        WHERE type='table' AND (
            name LIKE '%subscription%' OR
            name LIKE '%usage%' OR
            name LIKE '%billing%' OR
            name LIKE '%pricing%' OR
            name LIKE '%alert%'
        )
        ORDER BY name
    """)
    tables = cursor.fetchall()

    print('=== SUBSCRIPTION TABLES ===')
    for table in tables:
        table_name = table[0]
        print(f'\nTable: {table_name}')

        # Get table schema
        cursor.execute(f'PRAGMA table_info({table_name})')
        columns = cursor.fetchall()
        print('  Schema:')
        for col in columns:
            col_id, name, type_name, not_null, default, pk = col
            constraints = []
            if pk:
                constraints.append('PRIMARY KEY')
            if not_null:
                constraints.append('NOT NULL')
            if default:
                constraints.append(f'DEFAULT {default}')
            constraint_str = f' ({", ".join(constraints)})' if constraints else ''
            print(f'    {name}: {type_name}{constraint_str}')

        # Get row count
        cursor.execute(f'SELECT COUNT(*) FROM {table_name}')
        count = cursor.fetchone()[0]
        print(f'  Row count: {count}')

        # Sample data for non-empty tables
        if count > 0 and count <= 10:
            cursor.execute(f'SELECT * FROM {table_name} LIMIT 3')
            rows = cursor.fetchall()
            print('  Sample data:')
            for i, row in enumerate(rows):
                print(f'    Row {i+1}: {row}')

    # Check for user-specific data
    print('\n=== USER DATA VALIDATION ===')

    # Check if we have user-specific usage data
    # (table name per the billing schema: usage_summaries)
    cursor.execute("SELECT DISTINCT user_id FROM usage_summaries LIMIT 5")
    users = cursor.fetchall()
    print(f'Users with usage data: {[u[0] for u in users]}')

    # Check user subscriptions
    cursor.execute("SELECT DISTINCT user_id FROM user_subscriptions LIMIT 5")
    user_subs = cursor.fetchall()
    print(f'Users with subscriptions: {[u[0] for u in user_subs]}')

    # Check API usage logs
    cursor.execute("SELECT COUNT(*) FROM api_usage_logs")
    api_logs_count = cursor.fetchone()[0]
    print(f'Total API usage logs: {api_logs_count}')

    if api_logs_count > 0:
        cursor.execute("SELECT DISTINCT user_id FROM api_usage_logs LIMIT 5")
        api_users = cursor.fetchall()
        print(f'Users with API usage logs: {[u[0] for u in api_users]}')

    conn.close()
    print('\n=== VALIDATION COMPLETE ===')

if __name__ == '__main__':
    validate_database()
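Scripts like this interpolate table names straight into SQL, so a missing or renamed table surfaces as an unhandled `no such table` error. A small guard, using parameter binding where SQLite allows it (table names themselves cannot be bound, but the existence check can be):

```python
import sqlite3

def table_exists(cursor: sqlite3.Cursor, name: str) -> bool:
    """Return True if the named table exists in the connected SQLite database."""
    cursor.execute(
        "SELECT 1 FROM sqlite_master WHERE type='table' AND name=?", (name,)
    )
    return cursor.fetchone() is not None
```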
255
backend/test/validate_linkedin_structure.py
Normal file
@@ -0,0 +1,255 @@
"""
Simple validation script for LinkedIn content generation structure.
This script validates the code structure without requiring external dependencies.
"""

import os
import sys
import ast

# Note: all paths in this script are relative, so run it from the backend/
# directory (e.g. `python test/validate_linkedin_structure.py`).


def validate_file_syntax(file_path: str) -> bool:
    """Validate Python file syntax."""
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            content = f.read()

        ast.parse(content)
        print(f"✅ {file_path}: Syntax valid")
        return True
    except SyntaxError as e:
        print(f"❌ {file_path}: Syntax error - {e}")
        return False
    except Exception as e:
        print(f"❌ {file_path}: Error - {e}")
        return False
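
# Illustrative usage -- on a syntactically valid file the helper prints a
# success line and returns True:
#   >>> validate_file_syntax("models/linkedin_models.py")
#   ✅ models/linkedin_models.py: Syntax valid
#   True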

def validate_import_structure(file_path: str) -> bool:
    """Validate import structure without actually importing."""
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            content = f.read()

        tree = ast.parse(content)
        imports = []

        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                for alias in node.names:
                    imports.append(alias.name)
            elif isinstance(node, ast.ImportFrom):
                module = node.module or ""
                for alias in node.names:
                    imports.append(f"{module}.{alias.name}")

        print(f"✅ {file_path}: Found {len(imports)} imports")
        return True
    except Exception as e:
        print(f"❌ {file_path}: Import validation error - {e}")
        return False


def check_class_structure(file_path: str, expected_classes: list) -> bool:
    """Check if expected classes are defined."""
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            content = f.read()

        tree = ast.parse(content)
        found_classes = []

        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                found_classes.append(node.name)

        missing_classes = set(expected_classes) - set(found_classes)
        if missing_classes:
            print(f"⚠️ {file_path}: Missing classes: {missing_classes}")
        else:
            print(f"✅ {file_path}: All expected classes found")

        print(f"   Found classes: {found_classes}")
        return len(missing_classes) == 0
    except Exception as e:
        print(f"❌ {file_path}: Class validation error - {e}")
        return False


def check_function_structure(file_path: str, expected_functions: list) -> bool:
    """Check if expected functions are defined."""
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            content = f.read()

        tree = ast.parse(content)
        found_functions = []

        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                found_functions.append(node.name)

        missing_functions = set(expected_functions) - set(found_functions)
        if missing_functions:
            print(f"⚠️ {file_path}: Missing functions: {missing_functions}")
        else:
            print(f"✅ {file_path}: All expected functions found")

        return len(missing_functions) == 0
    except Exception as e:
        print(f"❌ {file_path}: Function validation error - {e}")
        return False
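
# Note: ast.walk() descends into nested nodes, so functions defined inside
# classes (i.e. methods such as generate_post on the service class) are also
# reported as FunctionDef nodes by the checker above.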

def validate_linkedin_models():
    """Validate LinkedIn models file."""
    print("\n🔍 Validating LinkedIn Models")
    print("-" * 40)

    file_path = "models/linkedin_models.py"
    if not os.path.exists(file_path):
        print(f"❌ {file_path}: File does not exist")
        return False

    # Check syntax
    syntax_ok = validate_file_syntax(file_path)

    # Check imports
    imports_ok = validate_import_structure(file_path)

    # Check expected classes
    expected_classes = [
        "LinkedInPostRequest", "LinkedInArticleRequest", "LinkedInCarouselRequest",
        "LinkedInVideoScriptRequest", "LinkedInCommentResponseRequest",
        "LinkedInPostResponse", "LinkedInArticleResponse", "LinkedInCarouselResponse",
        "LinkedInVideoScriptResponse", "LinkedInCommentResponseResult",
        "PostContent", "ArticleContent", "CarouselContent", "VideoScript"
    ]
    classes_ok = check_class_structure(file_path, expected_classes)

    return syntax_ok and imports_ok and classes_ok


def validate_linkedin_service():
    """Validate LinkedIn service file."""
    print("\n🔍 Validating LinkedIn Service")
    print("-" * 40)

    file_path = "services/linkedin_service.py"
    if not os.path.exists(file_path):
        print(f"❌ {file_path}: File does not exist")
        return False

    # Check syntax
    syntax_ok = validate_file_syntax(file_path)

    # Check imports
    imports_ok = validate_import_structure(file_path)

    # Check expected classes
    expected_classes = ["LinkedInContentService"]
    classes_ok = check_class_structure(file_path, expected_classes)

    # Check expected methods
    expected_functions = [
        "generate_post", "generate_article", "generate_carousel",
        "generate_video_script", "generate_comment_response"
    ]
    functions_ok = check_function_structure(file_path, expected_functions)

    return syntax_ok and imports_ok and classes_ok and functions_ok


def validate_linkedin_router():
    """Validate LinkedIn router file."""
    print("\n🔍 Validating LinkedIn Router")
    print("-" * 40)

    file_path = "routers/linkedin.py"
    if not os.path.exists(file_path):
        print(f"❌ {file_path}: File does not exist")
        return False

    # Check syntax
    syntax_ok = validate_file_syntax(file_path)

    # Check imports
    imports_ok = validate_import_structure(file_path)

    # Check expected functions (endpoints)
    expected_functions = [
        "health_check", "generate_post", "generate_article",
        "generate_carousel", "generate_video_script", "generate_comment_response",
        "get_content_types", "get_usage_stats"
    ]
    functions_ok = check_function_structure(file_path, expected_functions)

    return syntax_ok and imports_ok and functions_ok


def check_file_exists(file_path: str) -> bool:
    """Check if file exists."""
    exists = os.path.exists(file_path)
    status = "✅" if exists else "❌"
    print(f"{status} {file_path}: {'Exists' if exists else 'Missing'}")
    return exists


def validate_file_structure():
    """Validate the overall file structure."""
    print("\n🔍 Validating File Structure")
    print("-" * 40)

    required_files = [
        "models/linkedin_models.py",
        "services/linkedin_service.py",
        "routers/linkedin.py",
        "test_linkedin_endpoints.py"
    ]

    all_exist = True
    for file_path in required_files:
        if not check_file_exists(file_path):
            all_exist = False

    return all_exist


def main():
    """Run all validations."""
    print("🚀 LinkedIn Content Generation Structure Validation")
    print("=" * 60)

    results = {}

    # Validate file structure
    results["file_structure"] = validate_file_structure()

    # Validate individual components
    results["models"] = validate_linkedin_models()
    results["service"] = validate_linkedin_service()
    results["router"] = validate_linkedin_router()

    # Summary
    print("\n📊 Validation Results")
    print("=" * 40)

    passed = sum(results.values())
    total = len(results)

    for component, result in results.items():
        status = "✅ PASSED" if result else "❌ FAILED"
        print(f"{component}: {status}")

    print(f"\nOverall: {passed}/{total} validations passed ({(passed/total)*100:.1f}%)")

    if passed == total:
        print("\n🎉 All structure validations passed!")
        print("The LinkedIn content generation migration is structurally complete.")
        print("\nNext steps:")
        print("1. Install required dependencies (fastapi, pydantic, etc.)")
        print("2. Configure API keys (GEMINI_API_KEY)")
        print("3. Start the FastAPI server")
        print("4. Test the endpoints")
    else:
        print(f"\n⚠️ {total - passed} validation(s) failed. Please review the implementation.")

    return passed == total


if __name__ == "__main__":
    success = main()
    sys.exit(0 if success else 1)
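
# Illustrative CI usage (assumes the script is invoked from the backend/
# directory, since the paths above are relative):
#   python test/validate_linkedin_structure.py && echo "structure OK"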
280
backend/test/verify_billing_setup.py
Normal file
@@ -0,0 +1,280 @@
"""
Comprehensive verification script for billing and subscription system setup.
Checks that all files are created, tables exist, and the system is properly integrated.
"""

import os
import sys
from pathlib import Path


def check_file_exists(file_path, description):
    """Check if a file exists and report status."""
    if os.path.exists(file_path):
        print(f"✅ {description}: {file_path}")
        return True
    else:
        print(f"❌ {description}: {file_path} - NOT FOUND")
        return False


def check_file_content(file_path, search_terms, description):
    """Check if file contains expected content."""
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            content = f.read()

        missing_terms = []
        for term in search_terms:
            if term not in content:
                missing_terms.append(term)

        if not missing_terms:
            print(f"✅ {description}: All expected content found")
            return True
        else:
            print(f"❌ {description}: Missing content - {missing_terms}")
            return False
    except Exception as e:
        print(f"❌ {description}: Error reading file - {e}")
        return False
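
# Illustrative usage -- the helper reports any search terms that are absent:
#   >>> check_file_content("app.py", ["include_router"], "App wiring")
#   ✅ App wiring: All expected content found
#   True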

def check_database_tables():
    """Check if billing database tables exist."""
    print("\n🗄️ Checking Database Tables:")
    print("-" * 30)

    try:
        # Add the backend root to the import path (this script lives in
        # backend/test/, so the backend root is one level up)
        backend_dir = Path(__file__).parent.parent
        sys.path.insert(0, str(backend_dir))

        from services.database import get_db_session, DATABASE_URL
        from sqlalchemy import text

        session = get_db_session()
        if not session:
            print("❌ Could not get database session")
            return False

        # Check for billing tables
        tables_query = text("""
            SELECT name FROM sqlite_master
            WHERE type='table' AND (
                name LIKE '%subscription%' OR
                name LIKE '%usage%' OR
                name LIKE '%billing%' OR
                name LIKE '%pricing%' OR
                name LIKE '%alert%'
            )
            ORDER BY name
        """)

        result = session.execute(tables_query)
        tables = result.fetchall()

        expected_tables = [
            'api_provider_pricing',
            'api_usage_logs',
            'subscription_plans',
            'usage_alerts',
            'usage_summaries',
            'user_subscriptions'
        ]

        found_tables = [t[0] for t in tables]
        print(f"Found tables: {found_tables}")

        missing_tables = [t for t in expected_tables if t not in found_tables]
        if missing_tables:
            print(f"❌ Missing tables: {missing_tables}")
            return False

        # Check that seeded tables actually contain rows
        for table in ['subscription_plans', 'api_provider_pricing']:
            count_query = text(f"SELECT COUNT(*) FROM {table}")
            result = session.execute(count_query)
            count = result.fetchone()[0]
            print(f"✅ {table}: {count} records")

        session.close()
        return True

    except Exception as e:
        print(f"❌ Database check failed: {e}")
        return False
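
# Note: the sqlite_master catalog queried above is SQLite-specific. If
# DATABASE_URL ever points at PostgreSQL, a rough (hypothetical) equivalent
# would query the standard information_schema instead:
#   SELECT table_name FROM information_schema.tables
#   WHERE table_name LIKE '%subscription%' OR table_name LIKE '%usage%';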

def main():
    """Main verification function."""

    print("🔍 ALwrity Billing & Subscription System Setup Verification")
    print("=" * 70)

    # The script lives in backend/test/, so the backend root is one level up
    backend_dir = Path(__file__).parent.parent

    # Files to check
    files_to_check = [
        (backend_dir / "models" / "subscription_models.py", "Subscription Models"),
        (backend_dir / "services" / "pricing_service.py", "Pricing Service"),
        (backend_dir / "services" / "usage_tracking_service.py", "Usage Tracking Service"),
        (backend_dir / "services" / "subscription_exception_handler.py", "Exception Handler"),
        (backend_dir / "api" / "subscription_api.py", "Subscription API"),
        (backend_dir / "scripts" / "create_billing_tables.py", "Billing Migration Script"),
        (backend_dir / "scripts" / "create_subscription_tables.py", "Subscription Migration Script"),
        (backend_dir / "start_alwrity_backend.py", "Backend Startup Script"),
    ]

    # Check file existence
    print("\n📁 Checking File Existence:")
    print("-" * 30)
    files_exist = 0
    for file_path, description in files_to_check:
        if check_file_exists(file_path, description):
            files_exist += 1

    # Check content of key files
    print("\n📝 Checking File Content:")
    print("-" * 30)

    content_checks = [
        (
            backend_dir / "models" / "subscription_models.py",
            ["SubscriptionPlan", "APIUsageLog", "UsageSummary", "APIProviderPricing"],
            "Subscription Models Content"
        ),
        (
            backend_dir / "services" / "pricing_service.py",
            ["calculate_api_cost", "check_usage_limits", "initialize_default_pricing"],
            "Pricing Service Content"
        ),
        (
            backend_dir / "services" / "usage_tracking_service.py",
            ["track_api_usage", "get_user_usage_stats", "enforce_usage_limits"],
            "Usage Tracking Content"
        ),
        (
            backend_dir / "api" / "subscription_api.py",
            ["get_user_usage", "get_subscription_plans", "get_dashboard_data"],
            "API Endpoints Content"
        ),
        (
            backend_dir / "start_alwrity_backend.py",
            ["setup_billing_tables", "verify_billing_tables"],
            "Backend Startup Integration"
        )
    ]

    content_valid = 0
    for file_path, search_terms, description in content_checks:
        if os.path.exists(file_path):
            if check_file_content(file_path, search_terms, description):
                content_valid += 1
        else:
            print(f"❌ {description}: File not found")

    # Check database tables
    database_ok = check_database_tables()

    # Check middleware integration
    print("\n🔧 Checking Middleware Integration:")
    print("-" * 30)

    middleware_file = backend_dir / "middleware" / "monitoring_middleware.py"
    middleware_terms = [
        "UsageTrackingService",
        "detect_api_provider",
        "track_api_usage",
        "check_usage_limits_middleware"
    ]

    middleware_ok = check_file_content(
        middleware_file,
        middleware_terms,
        "Middleware Integration"
    )

    # Check app.py integration
    print("\n🚀 Checking FastAPI Integration:")
    print("-" * 30)

    app_file = backend_dir / "app.py"
    app_terms = [
        "from api.subscription_api import router as subscription_router",
        "app.include_router(subscription_router)"
    ]

    app_ok = check_file_content(
        app_file,
        app_terms,
        "FastAPI App Integration"
    )

    # Check database service integration
    print("\n💾 Checking Database Integration:")
    print("-" * 30)

    db_file = backend_dir / "services" / "database.py"
    db_terms = [
        "from models.subscription_models import Base as SubscriptionBase",
        "SubscriptionBase.metadata.create_all(bind=engine)"
    ]

    db_ok = check_file_content(
        db_file,
        db_terms,
        "Database Service Integration"
    )

    # Summary
    print("\n" + "=" * 70)
    print("📊 VERIFICATION SUMMARY")
    print("=" * 70)

    total_files = len(files_to_check)
    total_content = len(content_checks)

    print(f"Files Created: {files_exist}/{total_files}")
    print(f"Content Valid: {content_valid}/{total_content}")
    print(f"Database Tables: {'✅' if database_ok else '❌'}")
    print(f"Middleware Integration: {'✅' if middleware_ok else '❌'}")
    print(f"FastAPI Integration: {'✅' if app_ok else '❌'}")
    print(f"Database Integration: {'✅' if db_ok else '❌'}")

    # Overall status
    all_checks = [
        files_exist == total_files,
        content_valid == total_content,
        database_ok,
        middleware_ok,
        app_ok,
        db_ok
    ]

    if all(all_checks):
        print("\n🎉 ALL CHECKS PASSED!")
        print("✅ Billing and subscription system setup is complete and ready to use.")

        print("\n" + "=" * 70)
        print("🚀 NEXT STEPS:")
        print("=" * 70)
        print("1. Start the backend server:")
        print("   python start_alwrity_backend.py")
        print("\n2. Test the API endpoints:")
        print("   GET http://localhost:8000/api/subscription/plans")
        print("   GET http://localhost:8000/api/subscription/usage/demo")
        print("   GET http://localhost:8000/api/subscription/dashboard/demo")
        print("   GET http://localhost:8000/api/subscription/pricing")
        print("\n3. Access the frontend billing dashboard")
        print("4. Monitor usage through the API monitoring middleware")
        print("5. Set up user identification for production use")
        print("=" * 70)

    else:
        print("\n❌ SOME CHECKS FAILED!")
        print("Please review the errors above and fix any issues.")
        return False

    return True


if __name__ == "__main__":
    success = main()
    if not success:
        sys.exit(1)
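
# Illustrative summary on a healthy setup (counts mirror the checklists above):
#   Files Created: 8/8
#   Content Valid: 5/5
#   Database Tables: ✅
#   Middleware Integration: ✅
#   FastAPI Integration: ✅
#   Database Integration: ✅
#   🎉 ALL CHECKS PASSED!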
205
backend/test/verify_subscription_setup.py
Normal file
@@ -0,0 +1,205 @@
"""
Simple verification script for subscription system setup.
Checks that all files are created and properly structured.
"""

import os
import sys
from pathlib import Path


def check_file_exists(file_path, description):
    """Check if a file exists and report status."""
    if os.path.exists(file_path):
        print(f"✅ {description}: {file_path}")
        return True
    else:
        print(f"❌ {description}: {file_path} - NOT FOUND")
        return False


def check_file_content(file_path, search_terms, description):
    """Check if file contains expected content."""
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            content = f.read()

        missing_terms = []
        for term in search_terms:
            if term not in content:
                missing_terms.append(term)

        if not missing_terms:
            print(f"✅ {description}: All expected content found")
            return True
        else:
            print(f"❌ {description}: Missing content - {missing_terms}")
            return False
    except Exception as e:
        print(f"❌ {description}: Error reading file - {e}")
        return False


def main():
    """Main verification function."""

    print("🔍 ALwrity Subscription System Setup Verification")
    print("=" * 60)

    # The script lives in backend/test/, so the backend root is one level up
    backend_dir = Path(__file__).parent.parent

    # Files to check
    files_to_check = [
        (backend_dir / "models" / "subscription_models.py", "Subscription Models"),
        (backend_dir / "services" / "pricing_service.py", "Pricing Service"),
        (backend_dir / "services" / "usage_tracking_service.py", "Usage Tracking Service"),
        (backend_dir / "services" / "subscription_exception_handler.py", "Exception Handler"),
        (backend_dir / "api" / "subscription_api.py", "Subscription API"),
        (backend_dir / "scripts" / "create_subscription_tables.py", "Migration Script"),
        (backend_dir / "test_subscription_system.py", "Test Script"),
        (backend_dir / "SUBSCRIPTION_SYSTEM_README.md", "Documentation")
    ]

    # Check file existence
    print("\n📁 Checking File Existence:")
    print("-" * 30)
    files_exist = 0
    for file_path, description in files_to_check:
        if check_file_exists(file_path, description):
            files_exist += 1

    # Check content of key files
    print("\n📝 Checking File Content:")
    print("-" * 30)

    content_checks = [
        (
            backend_dir / "models" / "subscription_models.py",
            ["SubscriptionPlan", "APIUsageLog", "UsageSummary", "APIProvider"],
            "Subscription Models Content"
        ),
        (
            backend_dir / "services" / "pricing_service.py",
            ["calculate_api_cost", "check_usage_limits", "APIProvider.GEMINI"],
            "Pricing Service Content"
        ),
        (
            backend_dir / "services" / "usage_tracking_service.py",
            ["track_api_usage", "get_user_usage_stats", "enforce_usage_limits"],
            "Usage Tracking Content"
        ),
        (
            backend_dir / "api" / "subscription_api.py",
            ["get_user_usage", "get_subscription_plans", "get_dashboard_data"],
            "API Endpoints Content"
        )
    ]

    content_valid = 0
    for file_path, search_terms, description in content_checks:
        if os.path.exists(file_path):
            if check_file_content(file_path, search_terms, description):
                content_valid += 1
        else:
            print(f"❌ {description}: File not found")

    # Check middleware integration
    print("\n🔧 Checking Middleware Integration:")
    print("-" * 30)

    middleware_file = backend_dir / "middleware" / "monitoring_middleware.py"
    middleware_terms = [
        "UsageTrackingService",
        "detect_api_provider",
        "track_api_usage",
        "check_usage_limits_middleware"
    ]

    middleware_ok = check_file_content(
        middleware_file,
        middleware_terms,
        "Middleware Integration"
    )

    # Check app.py integration
    print("\n🚀 Checking FastAPI Integration:")
    print("-" * 30)

    app_file = backend_dir / "app.py"
    app_terms = [
        "from api.subscription_api import router as subscription_router",
        "app.include_router(subscription_router)"
    ]

    app_ok = check_file_content(
        app_file,
        app_terms,
        "FastAPI App Integration"
    )

    # Check database service integration
    print("\n💾 Checking Database Integration:")
    print("-" * 30)

    db_file = backend_dir / "services" / "database.py"
    db_terms = [
        "from models.subscription_models import Base as SubscriptionBase",
        "SubscriptionBase.metadata.create_all(bind=engine)"
    ]

    db_ok = check_file_content(
        db_file,
        db_terms,
        "Database Service Integration"
    )

    # Summary
    print("\n" + "=" * 60)
    print("📊 VERIFICATION SUMMARY")
    print("=" * 60)

    total_files = len(files_to_check)
    total_content = len(content_checks)

    print(f"Files Created: {files_exist}/{total_files}")
    print(f"Content Valid: {content_valid}/{total_content}")
    print(f"Middleware Integration: {'✅' if middleware_ok else '❌'}")
    print(f"FastAPI Integration: {'✅' if app_ok else '❌'}")
    print(f"Database Integration: {'✅' if db_ok else '❌'}")

    # Overall status
    all_checks = [
        files_exist == total_files,
        content_valid == total_content,
        middleware_ok,
        app_ok,
        db_ok
    ]

    if all(all_checks):
        print("\n🎉 ALL CHECKS PASSED!")
        print("✅ Subscription system setup is complete and ready to use.")

        print("\n" + "=" * 60)
        print("🚀 NEXT STEPS:")
        print("=" * 60)
        print("1. Install dependencies (if not already done):")
        print("   pip install sqlalchemy loguru fastapi")
        print("\n2. Run the migration script:")
        print("   python scripts/create_subscription_tables.py")
        print("\n3. Test the system:")
        print("   python test_subscription_system.py")
        print("\n4. Start the server:")
        print("   python start_alwrity_backend.py")
        print("\n5. Test API endpoints:")
        print("   GET http://localhost:8000/api/subscription/plans")
        print("   GET http://localhost:8000/api/subscription/pricing")

    else:
        print("\n❌ SOME CHECKS FAILED!")
        print("Please review the errors above and fix any issues.")
        return False

    return True


if __name__ == "__main__":
    success = main()
    if not success:
        sys.exit(1)
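
# Illustrative usage -- running all three structure checks from the backend/
# directory; each script exits non-zero on failure, so they can gate a CI job:
#   python test/verify_subscription_setup.py
#   python test/verify_billing_setup.py
#   python test/validate_linkedin_structure.py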