Base code

ToBeMigrated/ai_marketing_tools/ai_backlinker/README.md (new file, 117 lines)

@@ -0,0 +1,117 @@
---

# AI Backlinking Tool

## Overview

The `ai_backlinking.py` module is part of the [AI-Writer](https://github.com/AJaySi/AI-Writer) project. It simplifies and automates the process of finding and securing backlink opportunities. Using AI, the tool performs web research, extracts contact information, and sends personalized outreach emails for guest posting opportunities, making it an essential tool for content writers, digital marketers, and solopreneurs.

---

## Key Features

| Feature | Description |
|-------------------------------|-----------------------------------------------------------------------------|
| **Automated Web Scraping** | Extract guest post opportunities, contact details, and website insights. |
| **AI-Powered Emails** | Create personalized outreach emails tailored to target websites. |
| **Email Automation** | Integrate with platforms like Gmail or SendGrid for streamlined communication. |
| **Lead Management** | Track email status (sent, replied, successful) and follow up efficiently. |
| **Batch Processing** | Handle multiple keywords and queries simultaneously. |
| **AI-Driven Follow-Up** | Automate polite reminders if there's no response. |
| **Reports and Analytics** | View performance metrics like email open rates and backlink success rates. |

---

## Workflow Breakdown

| Step | Action | Example |
|-------------------------------|---------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|
| **Input Keywords** | Provide keywords for backlinking opportunities. | *E.g., "AI tools", "SEO strategies", "content marketing."* |
| **Generate Search Queries** | Automatically create queries for search engines (see the sketch after this table). | *E.g., "AI tools + 'write for us'" or "content marketing + 'submit a guest post.'"* |
| **Web Scraping** | Collect URLs, email addresses, and content details from target websites. | Extract "editor@contentblog.com" from "https://contentblog.com/write-for-us". |
| **Compose Outreach Emails** | Use AI to draft personalized emails based on scraped website data. | Email tailored to "Content Blog" discussing "AI tools for better content writing." |
| **Automated Email Sending** | Review and send emails or fully automate the process. | Send emails through Gmail or other SMTP services. |
| **Follow-Ups** | Automate follow-ups for non-responsive contacts. | A polite reminder email sent 7 days later. |
| **Track and Log Results** | Monitor sent emails, responses, and backlink placements. | View logs showing responses and backlink acquisition rate. |
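
As a concrete illustration of the **Generate Search Queries** step, here is a minimal sketch of what the module's `generate_search_queries()` helper produces for one keyword (import path as used in the Example Usage section below; adjust it to match your checkout):

```python
from lib.ai_marketing_tools.ai_backlinking import generate_search_queries

queries = generate_search_queries("AI tools")
for query in queries:
    print(query)
# AI tools + 'Guest Contributor'
# AI tools + 'Add Guest Post'
# ...
# AI tools + 'Submit article'
```

Each query is then run against a search engine (or a scraping API) to collect candidate "write for us" pages.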

---

## Prerequisites

- **Python Version**: 3.6 or higher.
- **Required Packages**: `googlesearch-python` and `loguru` (installed via `requirements.txt`). `smtplib`, `imaplib`, and `email` are part of the Python standard library and need no separate installation.

---

## Installation

1. Clone the repository:

```bash
git clone https://github.com/AJaySi/AI-Writer.git
cd AI-Writer
```

2. Install dependencies:

```bash
pip install -r requirements.txt
```

---

## Example Usage

Here’s a quick example of how to use the tool:

```python
from lib.ai_marketing_tools.ai_backlinking import main_backlinking_workflow

# Email configurations
smtp_config = {
    'server': 'smtp.gmail.com',
    'port': 587,
    'user': 'your_email@gmail.com',
    'password': 'your_password'
}

imap_config = {
    'server': 'imap.gmail.com',
    'user': 'your_email@gmail.com',
    'password': 'your_password'
}

# Proposal details
user_proposal = {
    'user_name': 'Your Name',
    'user_email': 'your_email@gmail.com',
    'topic': 'Proposed guest post topic'
}

# Keywords to search
keywords = ['AI tools', 'SEO strategies', 'content marketing']

# Start the workflow
main_backlinking_workflow(keywords, smtp_config, imap_config, user_proposal)
```

---

## Core Functions

| Function | Purpose |
|--------------------------------------------|-------------------------------------------------------------------------------------------|
| `generate_search_queries(keyword)` | Create search queries to find guest post opportunities. |
| `find_backlink_opportunities(keyword)` | Scrape websites for backlink opportunities. |
| `compose_personalized_email()` | Draft outreach emails using AI insights and website data. |
| `send_email()` | Send emails using SMTP configuration (see the standalone sketch below). |
| `check_email_responses()` | Monitor inbox for replies using IMAP. |
| `send_follow_up_email()` | Automate polite reminders to non-responsive contacts. |
| `log_sent_email()` | Keep a record of all sent emails and responses. |
| `main_backlinking_workflow()` | Execute the complete backlinking workflow for multiple keywords. |
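
If you only need the email-sending piece, `send_email()` can be called directly (same import path as the example above). A minimal sketch, assuming Gmail SMTP with an app password; all credentials below are placeholders:

```python
from lib.ai_marketing_tools.ai_backlinking import send_email

sent = send_email(
    smtp_server='smtp.gmail.com',
    smtp_port=587,
    smtp_user='your_email@gmail.com',
    smtp_password='your_app_password',  # Gmail expects an app password for SMTP
    to_email='editor@contentblog.com',
    subject='Guest Post Proposal',
    body='Hi, I noticed your site features high-quality content and would love to contribute a guest post...'
)
print('Sent' if sent else 'Failed')
```

Note that Gmail accounts generally need an app password (or OAuth) for SMTP access; regular account passwords are typically rejected.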

---

## License

This project is licensed under the MIT License. For more details, refer to the [LICENSE](LICENSE) file.

---
ToBeMigrated/ai_marketing_tools/ai_backlinker/ai_backlinking.py (new file, 423 lines)

@@ -0,0 +1,423 @@
#Problem:
|
||||
#
|
||||
#Finding websites for guest posts is manual, tedious, and time-consuming. Communicating with webmasters, maintaining conversations, and keeping track of backlinking opportunities is difficult to scale. Content creators and marketers struggle with discovering new websites and consistently getting backlinks.
|
||||
#Solution:
|
||||
#
|
||||
#An AI-powered backlinking app that automates web research, scrapes websites, extracts contact information, and sends personalized outreach emails to webmasters. This would simplify the entire process, allowing marketers to scale their backlinking strategy with minimal manual intervention.
|
||||
#Core Workflow:
|
||||
#
|
||||
# User Input:
|
||||
# Keyword Search: The user inputs a keyword (e.g., "AI writers").
|
||||
# Search Queries: Your app will append various search strings to this keyword to find backlinking opportunities (e.g., "AI writers + 'Write for Us'").
|
||||
#
|
||||
# Web Research:
|
||||
#
|
||||
# Use search engines or web scraping to run multiple queries:
|
||||
# Keyword + "Guest Contributor"
|
||||
# Keyword + "Add Guest Post"
|
||||
# Keyword + "Write for Us", etc.
|
||||
#
|
||||
# Collect URLs of websites that have pages or posts related to guest post opportunities.
|
||||
#
|
||||
# Scrape Website Data:
|
||||
# Contact Information Extraction:
|
||||
# Scrape the website for contact details (email addresses, contact forms, etc.).
|
||||
# Use natural language processing (NLP) to understand the type of content on the website and who the contact person might be (webmaster, editor, or guest post manager).
|
||||
# Website Content Understanding:
|
||||
# Scrape a summary of each website's content (e.g., their blog topics, categories, and tone) to personalize the email based on the site's focus.
|
||||
#
|
||||
# Personalized Outreach:
|
||||
# AI Email Composition:
|
||||
# Compose personalized outreach emails based on:
|
||||
# The scraped data (website content, topic focus, etc.).
|
||||
# The user's input (what kind of guest post or content they want to contribute).
|
||||
# Example: "Hi [Webmaster Name], I noticed that your site [Site Name] features high-quality content about [Topic]. I would love to contribute a guest post on [Proposed Topic] in exchange for a backlink."
|
||||
#
|
||||
# Automated Email Sending:
|
||||
# Review Emails (Optional HITL):
|
||||
# Let users review and approve the personalized emails before they are sent, or allow full automation.
|
||||
# Send Emails:
|
||||
# Automate email dispatch through an integrated SMTP or API (e.g., Gmail API, SendGrid).
|
||||
# Keep track of which emails were sent, bounced, or received replies.
|
||||
#
|
||||
# Scaling the Search:
|
||||
# Repeat for Multiple Keywords:
|
||||
# Run the same scraping and outreach process for a list of relevant keywords, either automatically suggested or uploaded by the user.
|
||||
# Keep Track of Sent Emails:
|
||||
# Maintain a log of all sent emails, responses, and follow-up reminders to avoid repetition or forgotten leads.
|
||||
#
|
||||
# Tracking Responses and Follow-ups:
|
||||
# Automated Responses:
|
||||
# If a website replies positively, AI can respond with predefined follow-up emails (e.g., proposing topics, confirming submission deadlines).
|
||||
# Follow-up Reminders:
|
||||
# If there's no reply, the system can send polite follow-up reminders at pre-set intervals.
|
||||
#
|
||||
#Key Features:
|
||||
#
|
||||
# Automated Web Scraping:
|
||||
# Scrape websites for guest post opportunities using a predefined set of search queries based on user input.
|
||||
# Extract key information like email addresses, names, and submission guidelines.
|
||||
#
|
||||
# Personalized Email Writing:
|
||||
# Leverage AI to create personalized emails using the scraped website information.
|
||||
# Tailor each email to the tone, content style, and focus of the website.
|
||||
#
|
||||
# Email Sending Automation:
|
||||
# Integrate with email platforms (e.g., Gmail, SendGrid, or custom SMTP).
|
||||
# Send automated outreach emails with the ability for users to review first (HITL - Human-in-the-loop) or automate completely.
|
||||
#
|
||||
# Customizable Email Templates:
|
||||
# Allow users to customize or choose from a set of email templates for different types of outreach (e.g., guest post requests, follow-up emails, submission offers).
|
||||
#
|
||||
# Lead Tracking and Management:
|
||||
# Track all emails sent, monitor replies, and keep track of successful backlinks.
|
||||
# Log each lead's status (e.g., emailed, responded, no reply) to manage future interactions.
|
||||
#
|
||||
# Multiple Keywords/Queries:
|
||||
# Allow users to run the same process for a batch of keywords, automatically generating relevant search queries for each.
|
||||
#
|
||||
# AI-Driven Follow-Up:
|
||||
# Schedule follow-up emails if there is no response after a specified period.
|
||||
#
|
||||
# Reports and Analytics:
|
||||
# Provide users with reports on how many emails were sent, opened, replied to, and successful backlink placements.
|
||||
#
|
||||
#Advanced Features (for Scaling and Optimization):
|
||||
#
|
||||
# Domain Authority Filtering:
|
||||
# Use SEO APIs (e.g., Moz, Ahrefs) to filter websites based on their domain authority or backlink strength.
|
||||
# Prioritize high-authority websites to maximize the impact of backlinks.
|
||||
#
|
||||
# Spam Detection:
|
||||
# Use AI to detect and avoid spammy or low-quality websites that might harm the user's SEO.
|
||||
#
|
||||
# Contact Form Auto-Fill:
|
||||
# If the site only offers a contact form (without email), automatically fill and submit the form with AI-generated content.
|
||||
#
|
||||
# Dynamic Content Suggestions:
|
||||
# Suggest guest post topics based on the website's focus, using NLP to analyze the site's existing content.
|
||||
#
|
||||
# Bulk Email Support:
|
||||
# Allow users to bulk-send outreach emails while still personalizing each message for scalability.
|
||||
#
|
||||
# AI Copy Optimization:
|
||||
# Use copywriting AI to optimize email content, adjusting tone and CTA based on the target audience.
|
||||
#
|
||||
#Challenges and Considerations:
|
||||
#
|
||||
# Legal Compliance:
|
||||
# Ensure compliance with anti-spam laws (e.g., CAN-SPAM, GDPR) by including unsubscribe options or manual email approval.
|
||||
#
|
||||
# Scraping Limits:
|
||||
# Be mindful of scraping limits on certain websites and employ smart throttling or use API-based scraping for better reliability.
|
||||
#
|
||||
# Deliverability:
|
||||
# Ensure emails are delivered properly without landing in spam folders by integrating proper email authentication (SPF, DKIM) and using high-reputation SMTP servers.
|
||||
#
|
||||
# Maintaining Email Personalization:
|
||||
# Striking the balance between automating the email process and keeping each message personal enough to avoid being flagged as spam.
|
||||
#
|
||||
#Technology Stack:
|
||||
#
|
||||
# Web Scraping: BeautifulSoup, Scrapy, or Puppeteer for scraping guest post opportunities and contact information.
|
||||
# Email Automation: Integrate with Gmail API, SendGrid, or Mailgun for sending emails.
|
||||
# NLP for Personalization: GPT-based models for email generation and web content understanding.
|
||||
# Frontend: React or Vue for the user interface.
|
||||
# Backend: Python/Node.js with Flask or Express for the API and automation logic.
|
||||
# Database: MongoDB or PostgreSQL to track leads, emails, and responses.
|
||||
#
|
||||
#This solution will significantly streamline the backlinking process by automating the most tedious tasks, from finding sites to personalizing outreach, enabling marketers to focus on content creation and high-level strategies.


import sys
import smtplib
import imaplib  # required by check_email_responses() below
import email    # required by check_email_responses() below
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# from googlesearch import search  # Temporarily disabled for future enhancement
from loguru import logger

from lib.ai_web_researcher.firecrawl_web_crawler import scrape_website, scrape_url
from lib.gpt_providers.text_generation.main_text_generation import llm_text_gen

# Configure logger
logger.remove()
logger.add(
    sys.stdout,
    colorize=True,
    format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}"
)

def generate_search_queries(keyword):
|
||||
"""
|
||||
Generate a list of search queries for finding guest post opportunities.
|
||||
|
||||
Args:
|
||||
keyword (str): The keyword to base the search queries on.
|
||||
|
||||
Returns:
|
||||
list: A list of search queries.
|
||||
"""
|
||||
return [
|
||||
f"{keyword} + 'Guest Contributor'",
|
||||
f"{keyword} + 'Add Guest Post'",
|
||||
f"{keyword} + 'Guest Bloggers Wanted'",
|
||||
f"{keyword} + 'Write for Us'",
|
||||
f"{keyword} + 'Submit Guest Post'",
|
||||
f"{keyword} + 'Become a Guest Blogger'",
|
||||
f"{keyword} + 'guest post opportunities'",
|
||||
f"{keyword} + 'Submit article'",
|
||||
]
|
||||
|
||||
def find_backlink_opportunities(keyword):
|
||||
"""
|
||||
Find backlink opportunities by scraping websites based on search queries.
|
||||
|
||||
Args:
|
||||
keyword (str): The keyword to search for backlink opportunities.
|
||||
|
||||
Returns:
|
||||
list: A list of results from the scraped websites.
|
||||
"""
|
||||
search_queries = generate_search_queries(keyword)
|
||||
results = []
|
||||
|
||||
# Temporarily disabled Google search functionality
|
||||
# for query in search_queries:
|
||||
# urls = search_for_urls(query)
|
||||
# for url in urls:
|
||||
# website_data = scrape_website(url)
|
||||
# logger.info(f"Scraped Website content for {url}: {website_data}")
|
||||
# if website_data:
|
||||
# contact_info = extract_contact_info(website_data)
|
||||
# logger.info(f"Contact details found for {url}: {contact_info}")
|
||||
|
||||
# Placeholder return for now
|
||||
return []
|
||||
|
||||
def search_for_urls(query):
|
||||
"""
|
||||
Search for URLs using Google search.
|
||||
|
||||
Args:
|
||||
query (str): The search query.
|
||||
|
||||
Returns:
|
||||
list: List of URLs found.
|
||||
"""
|
||||
# Temporarily disabled Google search functionality
|
||||
# return list(search(query, num_results=10))
|
||||
return []
|
||||
|
||||
def compose_personalized_email(website_data, insights, user_proposal):
|
||||
"""
|
||||
Compose a personalized outreach email using AI LLM based on website data, insights, and user proposal.
|
||||
|
||||
Args:
|
||||
website_data (dict): The data of the website including metadata and contact info.
|
||||
insights (str): Insights generated by the LLM about the website.
|
||||
user_proposal (dict): The user's proposal for a guest post or content contribution.
|
||||
|
||||
Returns:
|
||||
str: A personalized email message.
|
||||
"""
|
||||
contact_name = website_data.get("contact_info", {}).get("name", "Webmaster")
|
||||
site_name = website_data.get("metadata", {}).get("title", "your site")
|
||||
proposed_topic = user_proposal.get("topic", "a guest post")
|
||||
user_name = user_proposal.get("user_name", "Your Name")
|
||||
user_email = user_proposal.get("user_email", "your_email@example.com")
|
||||
|
||||
# Refined prompt for email generation
|
||||
email_prompt = f"""
|
||||
You are an AI assistant tasked with composing a highly personalized outreach email for guest posting.
|
||||
|
||||
Contact Name: {contact_name}
|
||||
Website Name: {site_name}
|
||||
Proposed Topic: {proposed_topic}
|
||||
|
||||
User Details:
|
||||
Name: {user_name}
|
||||
Email: {user_email}
|
||||
|
||||
Website Insights: {insights}
|
||||
|
||||
Please compose a professional and engaging email that includes:
|
||||
1. A personalized introduction addressing the recipient.
|
||||
2. A mention of the website's content focus.
|
||||
3. A proposal for a guest post.
|
||||
4. A call to action to discuss the guest post opportunity.
|
||||
5. A polite closing with user contact details.
|
||||
"""
|
||||
|
||||
return llm_text_gen(email_prompt)
|
||||
|
||||
def send_email(smtp_server, smtp_port, smtp_user, smtp_password, to_email, subject, body):
|
||||
"""
|
||||
Send an email using an SMTP server.
|
||||
|
||||
Args:
|
||||
smtp_server (str): The SMTP server address.
|
||||
smtp_port (int): The SMTP server port.
|
||||
smtp_user (str): The SMTP server username.
|
||||
smtp_password (str): The SMTP server password.
|
||||
to_email (str): The recipient's email address.
|
||||
subject (str): The email subject.
|
||||
body (str): The email body.
|
||||
|
||||
Returns:
|
||||
bool: True if the email was sent successfully, False otherwise.
|
||||
"""
|
||||
try:
|
||||
msg = MIMEMultipart()
|
||||
msg['From'] = smtp_user
|
||||
msg['To'] = to_email
|
||||
msg['Subject'] = subject
|
||||
msg.attach(MIMEText(body, 'plain'))
|
||||
|
||||
server = smtplib.SMTP(smtp_server, smtp_port)
|
||||
server.starttls()
|
||||
server.login(smtp_user, smtp_password)
|
||||
server.send_message(msg)
|
||||
server.quit()
|
||||
|
||||
logger.info(f"Email sent successfully to {to_email}")
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to send email to {to_email}: {e}")
|
||||
return False
|
||||
|
||||
def extract_contact_info(website_data):
|
||||
"""
|
||||
Extract contact information from website data.
|
||||
|
||||
Args:
|
||||
website_data (dict): Scraped data from the website.
|
||||
|
||||
Returns:
|
||||
dict: Extracted contact information such as name, email, etc.
|
||||
"""
|
||||
# Placeholder for extracting contact information logic
|
||||
return {
|
||||
"name": website_data.get("contact", {}).get("name", "Webmaster"),
|
||||
"email": website_data.get("contact", {}).get("email", ""),
|
||||
}
|
||||
|
||||
def find_backlink_opportunities_for_keywords(keywords):
|
||||
"""
|
||||
Find backlink opportunities for multiple keywords.
|
||||
|
||||
Args:
|
||||
keywords (list): A list of keywords to search for backlink opportunities.
|
||||
|
||||
Returns:
|
||||
dict: A dictionary with keywords as keys and a list of results as values.
|
||||
"""
|
||||
all_results = {}
|
||||
for keyword in keywords:
|
||||
results = find_backlink_opportunities(keyword)
|
||||
all_results[keyword] = results
|
||||
return all_results
|
||||
|
||||
def log_sent_email(keyword, email_info):
|
||||
"""
|
||||
Log the information of a sent email.
|
||||
|
||||
Args:
|
||||
keyword (str): The keyword associated with the email.
|
||||
email_info (dict): Information about the sent email (e.g., recipient, subject, body).
|
||||
"""
|
||||
with open(f"{keyword}_sent_emails.log", "a") as log_file:
|
||||
log_file.write(f"{email_info}\n")
|
||||
|
||||
def check_email_responses(imap_server, imap_user, imap_password):
|
||||
"""
|
||||
Check email responses using an IMAP server.
|
||||
|
||||
Args:
|
||||
imap_server (str): The IMAP server address.
|
||||
imap_user (str): The IMAP server username.
|
||||
imap_password (str): The IMAP server password.
|
||||
|
||||
Returns:
|
||||
list: A list of email responses.
|
||||
"""
|
||||
responses = []
|
||||
try:
|
||||
mail = imaplib.IMAP4_SSL(imap_server)
|
||||
mail.login(imap_user, imap_password)
|
||||
mail.select('inbox')
|
||||
|
||||
status, data = mail.search(None, 'UNSEEN')
|
||||
mail_ids = data[0]
|
||||
id_list = mail_ids.split()
|
||||
|
||||
for mail_id in id_list:
|
||||
status, data = mail.fetch(mail_id, '(RFC822)')
|
||||
msg = email.message_from_bytes(data[0][1])
|
||||
if msg.is_multipart():
|
||||
for part in msg.walk():
|
||||
if part.get_content_type() == 'text/plain':
|
||||
responses.append(part.get_payload(decode=True).decode())
|
||||
else:
|
||||
responses.append(msg.get_payload(decode=True).decode())
|
||||
|
||||
mail.logout()
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to check email responses: {e}")
|
||||
|
||||
return responses
|
||||
|
||||
def send_follow_up_email(smtp_server, smtp_port, smtp_user, smtp_password, to_email, subject, body):
|
||||
"""
|
||||
Send a follow-up email using an SMTP server.
|
||||
|
||||
Args:
|
||||
smtp_server (str): The SMTP server address.
|
||||
smtp_port (int): The SMTP server port.
|
||||
smtp_user (str): The SMTP server username.
|
||||
smtp_password (str): The SMTP server password.
|
||||
to_email (str): The recipient's email address.
|
||||
subject (str): The email subject.
|
||||
body (str): The email body.
|
||||
|
||||
Returns:
|
||||
bool: True if the email was sent successfully, False otherwise.
|
||||
"""
|
||||
return send_email(smtp_server, smtp_port, smtp_user, smtp_password, to_email, subject, body)
|
||||
|
||||
def main_backlinking_workflow(keywords, smtp_config, imap_config, user_proposal):
|
||||
"""
|
||||
Main workflow for the AI-powered backlinking feature.
|
||||
|
||||
Args:
|
||||
keywords (list): A list of keywords to search for backlink opportunities.
|
||||
smtp_config (dict): SMTP configuration for sending emails.
|
||||
imap_config (dict): IMAP configuration for checking email responses.
|
||||
user_proposal (dict): The user's proposal for a guest post or content contribution.
|
||||
|
||||
Returns:
|
||||
None
|
||||
"""
|
||||
all_results = find_backlink_opportunities_for_keywords(keywords)
|
||||
|
||||
for keyword, results in all_results.items():
|
||||
for result in results:
|
||||
email_body = compose_personalized_email(result, result['insights'], user_proposal)
|
||||
email_sent = send_email(
|
||||
smtp_config['server'],
|
||||
smtp_config['port'],
|
||||
smtp_config['user'],
|
||||
smtp_config['password'],
|
||||
result['contact_info']['email'],
|
||||
f"Guest Post Proposal for {result['metadata']['title']}",
|
||||
email_body
|
||||
)
|
||||
if email_sent:
|
||||
log_sent_email(keyword, {
|
||||
"to": result['contact_info']['email'],
|
||||
"subject": f"Guest Post Proposal for {result['metadata']['title']}",
|
||||
"body": email_body
|
||||
})
|
||||
|
||||
responses = check_email_responses(imap_config['server'], imap_config['user'], imap_config['password'])
|
||||
for response in responses:
|
||||
# TBD : Process and possibly send follow-up emails based on responses
|
||||
pass

@@ -0,0 +1,60 @@
import streamlit as st
|
||||
import pandas as pd
|
||||
from st_aggrid import AgGrid, GridOptionsBuilder, GridUpdateMode
|
||||
from lib.ai_marketing_tools.ai_backlinker.ai_backlinking import find_backlink_opportunities, compose_personalized_email
|
||||
|
||||
|
||||
# Streamlit UI function
|
||||
def backlinking_ui():
|
||||
st.title("AI Backlinking Tool")
|
||||
|
||||
# Step 1: Get user inputs
|
||||
keyword = st.text_input("Enter a keyword", value="technology")
|
||||
|
||||
# Step 2: Generate backlink opportunities
|
||||
if st.button("Find Backlink Opportunities"):
|
||||
if keyword:
|
||||
backlink_opportunities = find_backlink_opportunities(keyword)
|
||||
|
||||
# Convert results to a DataFrame for display
|
||||
df = pd.DataFrame(backlink_opportunities)
|
||||
|
||||
# Create a selectable table using st-aggrid
|
||||
gb = GridOptionsBuilder.from_dataframe(df)
|
||||
gb.configure_selection('multiple', use_checkbox=True, groupSelectsChildren=True)
|
||||
gridOptions = gb.build()
|
||||
|
||||
grid_response = AgGrid(
|
||||
df,
|
||||
gridOptions=gridOptions,
|
||||
update_mode=GridUpdateMode.SELECTION_CHANGED,
|
||||
height=200,
|
||||
width='100%'
|
||||
)
|
||||
|
||||
selected_rows = grid_response['selected_rows']
|
||||
|
||||
if selected_rows:
|
||||
st.write("Selected Opportunities:")
|
||||
st.table(pd.DataFrame(selected_rows))
|
||||
|
||||
# Step 3: Option to generate personalized emails for selected opportunities
|
||||
if st.button("Generate Emails for Selected Opportunities"):
|
||||
user_proposal = {
|
||||
"user_name": st.text_input("Your Name", value="John Doe"),
|
||||
"user_email": st.text_input("Your Email", value="john@example.com")
|
||||
}
|
||||
|
||||
emails = []
|
||||
for selected in selected_rows:
|
||||
insights = f"Insights based on content from {selected['url']}."
|
||||
email = compose_personalized_email(selected, insights, user_proposal)
|
||||
emails.append(email)
|
||||
|
||||
st.subheader("Generated Emails:")
|
||||
for email in emails:
|
||||
st.write(email)
|
||||
st.markdown("---")
|
||||
|
||||
else:
|
||||
st.error("Please enter a keyword.")
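
# --- Usage note (added commentary; not part of the original implementation) ---
# This page is presumably mounted by the main Alwrity Streamlit app. To preview it
# standalone, you could call backlinking_ui() at module level (or behind an
# `if __name__ == "__main__":` guard) and run:
#     streamlit run ToBeMigrated/ai_marketing_tools/ai_backlinker/<this_file>.py
# Caveat: the "Generate Emails for Selected Opportunities" button is nested inside
# the "Find Backlink Opportunities" button's block. Streamlit reruns the script on
# every interaction, so the outer button's state is lost on the second click;
# persisting the scraped opportunities in st.session_state would likely be needed
# for this flow to work end to end.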

@@ -0,0 +1,370 @@
Google Ads Generator
|
||||
[Logo: Google Ads Generator]
|
||||
|
||||
Overview
|
||||
The Google Ads Generator is an AI-powered tool designed to create high-converting Google Ads based on industry best practices. This tool helps marketers, business owners, and advertising professionals create optimized ad campaigns that maximize ROI and conversion rates.
|
||||
|
||||
By leveraging advanced AI algorithms and proven advertising frameworks, the Google Ads Generator creates compelling ad copy, suggests optimal keywords, generates relevant extensions, and provides performance predictions—all tailored to your specific business needs and target audience.
|
||||
|
||||
Table of Contents
|
||||
Features
|
||||
Getting Started
|
||||
User Interface
|
||||
Ad Creation Process
|
||||
Ad Types
|
||||
Quality Analysis
|
||||
Performance Simulation
|
||||
Best Practices
|
||||
Export Options
|
||||
Advanced Features
|
||||
Technical Details
|
||||
FAQ
|
||||
Troubleshooting
|
||||
Updates and Roadmap
|
||||
Features
|
||||
Core Features
|
||||
AI-Powered Ad Generation: Create compelling, high-converting Google Ads in seconds
|
||||
Multiple Ad Types: Support for Responsive Search Ads, Expanded Text Ads, Call-Only Ads, and Dynamic Search Ads
|
||||
Industry-Specific Templates: Tailored templates for 20+ industries
|
||||
Ad Extensions Generator: Automatically create Sitelinks, Callouts, and Structured Snippets
|
||||
Quality Score Analysis: Comprehensive scoring based on Google's quality factors
|
||||
Performance Prediction: Estimate CTR, conversion rates, and ROI
|
||||
A/B Testing: Generate multiple variations for testing
|
||||
Export Options: Export to CSV, Excel, Google Ads Editor CSV, and JSON
|
||||
Advanced Features
|
||||
Keyword Research Integration: Find high-performing keywords for your ads
|
||||
Competitor Analysis: Analyze competitor ads and identify opportunities
|
||||
Landing Page Suggestions: Recommendations for landing page optimization
|
||||
Budget Optimization: Suggestions for optimal budget allocation
|
||||
Ad Schedule Recommendations: Identify the best times to run your ads
|
||||
Audience Targeting Suggestions: Recommendations for demographic targeting
|
||||
Local Ad Optimization: Special features for local businesses
|
||||
E-commerce Ad Features: Product-specific ad generation
|
||||
Getting Started
|
||||
Prerequisites
|
||||
Alwrity AI Writer platform
|
||||
Basic understanding of Google Ads concepts
|
||||
Information about your business, products/services, and target audience
|
||||
Accessing the Tool
|
||||
Navigate to the Alwrity AI Writer platform
|
||||
Select "AI Google Ads Generator" from the tools menu
|
||||
Follow the guided setup process
|
||||
User Interface
|
||||
The Google Ads Generator features a user-friendly, tabbed interface designed to guide you through the ad creation process:
|
||||
|
||||
Tab 1: Ad Creation
|
||||
This is where you'll input your business information and ad requirements:
|
||||
|
||||
Business Information: Company name, industry, products/services
|
||||
Campaign Goals: Select from options like brand awareness, lead generation, sales, etc.
|
||||
Target Audience: Define your ideal customer
|
||||
Ad Type Selection: Choose from available ad formats
|
||||
USP and Benefits: Input your unique selling propositions and key benefits
|
||||
Keywords: Add target keywords or generate suggestions
|
||||
Landing Page URL: Specify where users will go after clicking your ad
|
||||
Budget Information: Set daily/monthly budget for performance predictions
|
||||
Tab 2: Ad Performance
|
||||
After generating ads, this tab provides detailed analysis:
|
||||
|
||||
Quality Score: Overall score (1-10) with detailed breakdown
|
||||
Strengths & Improvements: What's good and what could be better
|
||||
Keyword Relevance: Analysis of keyword usage in ad elements
|
||||
CTR Prediction: Estimated click-through rate based on ad quality
|
||||
Conversion Potential: Estimated conversion rate
|
||||
Mobile Friendliness: Assessment of how well the ad performs on mobile
|
||||
Ad Policy Compliance: Check for potential policy violations
|
||||
Tab 3: Ad History
|
||||
Keep track of your generated ads:
|
||||
|
||||
Saved Ads: Previously generated and saved ads
|
||||
Favorites: Ads you've marked as favorites
|
||||
Version History: Track changes and iterations
|
||||
Performance Notes: Add notes about real-world performance
|
||||
Tab 4: Best Practices
|
||||
Educational resources to improve your ads:
|
||||
|
||||
Industry Guidelines: Best practices for your specific industry
|
||||
Ad Type Tips: Specific guidance for each ad type
|
||||
Quality Score Optimization: How to improve quality score
|
||||
Extension Strategies: How to effectively use ad extensions
|
||||
A/B Testing Guide: How to test and optimize your ads
|
||||
Ad Creation Process
|
||||
Step 1: Define Your Campaign
|
||||
Select your industry from the dropdown menu
|
||||
Choose your primary campaign goal
|
||||
Define your target audience
|
||||
Set your budget parameters
|
||||
Step 2: Input Business Details
|
||||
Enter your business name
|
||||
Provide your website URL
|
||||
Input your unique selling propositions
|
||||
List key product/service benefits
|
||||
Add any promotional offers or discounts
|
||||
Step 3: Keyword Selection
|
||||
Enter your primary keywords
|
||||
Use the integrated keyword research tool to find additional keywords
|
||||
Select keyword match types (broad, phrase, exact)
|
||||
Review keyword competition and volume metrics
|
||||
Step 4: Ad Type Selection
|
||||
Choose your preferred ad type
|
||||
Review the requirements and limitations for that ad type
|
||||
Select any additional features specific to that ad type
|
||||
Step 5: Generate Ads
|
||||
Click the "Generate Ads" button
|
||||
Review the generated ads
|
||||
Request variations if needed
|
||||
Save your favorite versions
|
||||
Step 6: Add Extensions
|
||||
Select which extension types to include
|
||||
Review and edit the generated extensions
|
||||
Add any custom extensions
|
||||
Step 7: Analyze and Optimize
|
||||
Review the quality score and analysis
|
||||
Make suggested improvements
|
||||
Regenerate ads if necessary
|
||||
Compare different versions
|
||||
Step 8: Export
|
||||
Choose your preferred export format
|
||||
Select which ads to include
|
||||
Download the file for import into Google Ads
|
||||
Ad Types
|
||||
Responsive Search Ads (RSA)
|
||||
The most flexible and recommended ad type (see the data-shape sketch after this list), featuring:
|
||||
|
||||
Up to 15 headlines (3 shown at a time)
|
||||
Up to 4 descriptions (2 shown at a time)
|
||||
Dynamic combination of elements based on performance
|
||||
Automatic testing of different combinations
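
For reference, the analyzer bundled with this tool (`ad_analyzer.py`, included later in this commit) treats an ad as a plain dictionary. A hedged sketch of the shape it expects for a Responsive Search Ad — the field names follow the `ad.get(...)` calls in that module, while the values are made-up examples:

```python
rsa_ad = {
    "headlines": [                     # up to 15 headlines, 3 shown at a time
        "AI Writing Tools for Marketers",
        "Write Better Content Faster",
        "Start Your Free Trial Today",
    ],
    "descriptions": [                  # up to 4 descriptions, 2 shown at a time
        "Generate high-converting ad copy in seconds with AI.",
        "Templates for 20+ industries. No credit card required.",
    ],
    "path1": "ai-tools",               # display URL path fields
    "path2": "google-ads",
}
```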
|
||||
Expanded Text Ads (ETA)
|
||||
A more controlled ad format with:
|
||||
|
||||
3 headlines
|
||||
2 descriptions
|
||||
Display URL with two path fields
|
||||
Fixed layout with no dynamic combinations
|
||||
Call-Only Ads
|
||||
Designed to drive phone calls rather than website visits:
|
||||
|
||||
Business name
|
||||
Phone number
|
||||
Call-to-action text
|
||||
Description lines
|
||||
Verification URL (not shown to users)
|
||||
Dynamic Search Ads (DSA)
|
||||
Ads that use your website content to target relevant searches:
|
||||
|
||||
Dynamic headline generation based on search queries
|
||||
Custom descriptions
|
||||
Landing page selection based on website content
|
||||
Requires website URL for crawling
|
||||
Quality Analysis
|
||||
Our comprehensive quality analysis evaluates your ads based on factors that influence Google's Quality Score (a sketch of how the component scores are combined follows the factor lists below):
|
||||
|
||||
Headline Analysis
|
||||
Keyword Usage: Presence of keywords in headlines
|
||||
Character Count: Optimal length for visibility
|
||||
Power Words: Use of emotionally compelling words
|
||||
Clarity: Clear communication of value proposition
|
||||
Call to Action: Presence of action-oriented language
|
||||
Description Analysis
|
||||
Keyword Density: Optimal keyword usage
|
||||
Benefit Focus: Clear articulation of benefits
|
||||
Feature Inclusion: Mention of key features
|
||||
Urgency Elements: Time-limited offers or scarcity
|
||||
Call to Action: Clear next steps for the user
|
||||
URL Path Analysis
|
||||
Keyword Inclusion: Relevant keywords in display paths
|
||||
Readability: Clear, understandable paths
|
||||
Relevance: Connection to landing page content
|
||||
Overall Ad Relevance
|
||||
Keyword-to-Ad Relevance: Alignment between keywords and ad copy
|
||||
Ad-to-Landing Page Relevance: Consistency across the user journey
|
||||
Intent Match: Alignment with search intent
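
The component scores above are blended into a single 0-10 rating. A minimal sketch of the weighting, mirroring the weights used in `calculate_quality_score()` in the bundled `ad_analyzer.py` (each component is itself a 0-10 score):

```python
def overall_quality_score(keyword_relevance: float, ad_relevance: float,
                          cta_effectiveness: float, landing_page_relevance: float) -> float:
    """Weighted blend of 0-10 component scores (keyword relevance weighted highest)."""
    return round(
        keyword_relevance * 0.4
        + ad_relevance * 0.3
        + cta_effectiveness * 0.2
        + landing_page_relevance * 0.1,
        1,
    )

# Example: strong keyword usage but a weaker landing page still scores reasonably well.
print(overall_quality_score(8.0, 7.5, 8.0, 6.0))
```

The same module then nudges the predicted CTR and conversion rate up or down from industry baselines according to this overall score.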
|
||||
Performance Simulation
|
||||
Our tool provides data-driven performance predictions based on the following inputs (a worked sketch follows the lists below):
|
||||
|
||||
Click-Through Rate (CTR) Prediction
|
||||
Industry benchmarks
|
||||
Ad quality factors
|
||||
Keyword competition
|
||||
Ad position estimates
|
||||
Conversion Rate Prediction
|
||||
Industry averages
|
||||
Landing page quality
|
||||
Offer strength
|
||||
Call-to-action effectiveness
|
||||
Cost Estimation
|
||||
Keyword competition
|
||||
Quality Score impact
|
||||
Industry CPC averages
|
||||
Budget allocation
|
||||
ROI Calculation
|
||||
Estimated clicks
|
||||
Predicted conversions
|
||||
Average conversion value
|
||||
Cost projections
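
To make the relationship between these inputs concrete, here is a minimal, hedged sketch of how clicks, conversions, revenue, and ROI can be estimated from a monthly budget, an average CPC, and a predicted conversion rate. The figures are illustrative assumptions, not output from the tool:

```python
# Illustrative inputs (assumptions for the sake of the example)
monthly_budget = 1500.00              # planned ad spend
avg_cpc = 1.25                        # estimated average cost per click
predicted_conversion_rate = 0.0375    # 3.75%, the search-ad average cited by the analyzer
avg_conversion_value = 80.00          # average revenue per conversion

estimated_clicks = monthly_budget / avg_cpc                            # 1200 clicks
estimated_conversions = estimated_clicks * predicted_conversion_rate   # 45 conversions
estimated_revenue = estimated_conversions * avg_conversion_value       # 3600
estimated_roi = (estimated_revenue - monthly_budget) / monthly_budget  # 1.4 -> 140%

print(f"clicks={estimated_clicks:.0f}  conversions={estimated_conversions:.0f}  "
      f"revenue={estimated_revenue:.2f}  ROI={estimated_roi:.0%}")
```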
|
||||
Best Practices
|
||||
Our tool incorporates these Google Ads best practices:
|
||||
|
||||
Headline Best Practices
|
||||
Include primary keywords in at least 2 headlines
|
||||
Use numbers and statistics when relevant
|
||||
Address user pain points directly
|
||||
Include your unique selling proposition
|
||||
Create a sense of urgency when appropriate
|
||||
Keep headlines under 30 characters for full visibility
|
||||
Use title case for better readability
|
||||
Include at least one call-to-action headline
|
||||
Description Best Practices
|
||||
Include primary and secondary keywords naturally
|
||||
Focus on benefits, not just features
|
||||
Address objections proactively
|
||||
Include specific offers or promotions
|
||||
End with a clear call to action
|
||||
Use all available character space (90 characters per description)
|
||||
Maintain consistent messaging with headlines
|
||||
Include trust signals (guarantees, social proof, etc.)
|
||||
Extension Best Practices
|
||||
Create at least 8 sitelinks for maximum visibility
|
||||
Use callouts to highlight additional benefits
|
||||
Include structured snippets relevant to your industry
|
||||
Ensure extensions don't duplicate headline content
|
||||
Make each extension unique and valuable
|
||||
Use specific, action-oriented language
|
||||
Keep sitelink text under 25 characters for mobile visibility
|
||||
Ensure landing pages for sitelinks are relevant and optimized
|
||||
Campaign Structure Best Practices
|
||||
Group closely related keywords together
|
||||
Create separate ad groups for different themes
|
||||
Align ad copy closely with keywords in each ad group
|
||||
Use a mix of match types for each keyword
|
||||
Include negative keywords to prevent irrelevant clicks
|
||||
Create separate campaigns for different goals or audiences
|
||||
Set appropriate bid adjustments for devices, locations, and schedules
|
||||
Implement conversion tracking for performance measurement
|
||||
Export Options
|
||||
The Google Ads Generator offers multiple export formats to fit your workflow:
|
||||
|
||||
CSV Format
|
||||
Standard CSV format compatible with most spreadsheet applications
|
||||
Includes all ad elements and extensions
|
||||
Contains quality score and performance predictions
|
||||
Suitable for analysis and record-keeping
|
||||
Excel Format
|
||||
Formatted Excel workbook with multiple sheets
|
||||
Separate sheets for ads, extensions, and analysis
|
||||
Includes charts and visualizations of predicted performance
|
||||
Color-coded quality indicators
|
||||
Google Ads Editor CSV
|
||||
Specially formatted CSV for direct import into Google Ads Editor (a rough sketch of such a file follows this list)
|
||||
Follows Google's required format specifications
|
||||
Includes all necessary fields for campaign creation
|
||||
Ready for immediate upload to Google Ads Editor
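
As a rough illustration of building an import file, the sketch below writes generated headlines and descriptions to CSV using Python's standard `csv` module. The column names are plausible placeholders for illustration only; Google Ads Editor's required headers change over time, so verify them against Google's current import templates (or simply use the tool's built-in export) rather than relying on these exact names:

```python
import csv

ads = [{
    "campaign": "AI Tools - Search",
    "ad_group": "AI Writing Tools",
    "headlines": ["AI Writing Tools for Marketers", "Write Better Content Faster"],
    "descriptions": ["Generate high-converting copy in seconds.", "Try it free today."],
    "final_url": "https://example.com/ai-tools",
}]

with open("google_ads_editor_import.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    # Hypothetical header row -- check Google Ads Editor's current template.
    writer.writerow(["Campaign", "Ad group", "Headline 1", "Headline 2",
                     "Description 1", "Description 2", "Final URL"])
    for ad in ads:
        writer.writerow([
            ad["campaign"], ad["ad_group"],
            *ad["headlines"][:2], *ad["descriptions"][:2],
            ad["final_url"],
        ])
```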
|
||||
JSON Format
|
||||
Structured data format for programmatic use
|
||||
Complete ad data in machine-readable format
|
||||
Suitable for integration with other marketing tools
|
||||
Includes all metadata and analysis results
|
||||
Advanced Features
|
||||
Keyword Research Integration
|
||||
Access to keyword volume data
|
||||
Competition analysis
|
||||
Cost-per-click estimates
|
||||
Keyword difficulty scores
|
||||
Seasonal trend information
|
||||
Question-based keyword suggestions
|
||||
Long-tail keyword recommendations
|
||||
Competitor Analysis
|
||||
Identify competitors bidding on similar keywords
|
||||
Analyze competitor ad copy and messaging
|
||||
Identify gaps and opportunities
|
||||
Benchmark your ads against competitors
|
||||
Receive suggestions for differentiation
|
||||
Landing Page Suggestions
|
||||
Alignment with ad messaging
|
||||
Key elements to include
|
||||
Conversion optimization tips
|
||||
Mobile responsiveness recommendations
|
||||
Page speed improvement suggestions
|
||||
Call-to-action placement recommendations
|
||||
Local Ad Optimization
|
||||
Location extension suggestions
|
||||
Local keyword recommendations
|
||||
Geo-targeting strategies
|
||||
Local offer suggestions
|
||||
Community-focused messaging
|
||||
Location-specific call-to-actions
|
||||
Technical Details
|
||||
System Requirements
|
||||
Modern web browser (Chrome, Firefox, Safari, Edge)
|
||||
Internet connection
|
||||
Access to Alwrity AI Writer platform
|
||||
Data Privacy
|
||||
No permanent storage of business data
|
||||
Secure processing of all inputs
|
||||
Option to save ads to your account
|
||||
Compliance with data protection regulations
|
||||
API Integration
|
||||
Available API endpoints for programmatic access
|
||||
Documentation for developers
|
||||
Rate limits and authentication requirements
|
||||
Sample code for common use cases
|
||||
FAQ
|
||||
General Questions
|
||||
Q: How accurate are the performance predictions? A: Performance predictions are based on industry benchmarks and Google's published data. While they provide a good estimate, actual performance may vary based on numerous factors including competition, seasonality, and market conditions.
|
||||
|
||||
Q: Can I edit the generated ads? A: Yes, all generated ads can be edited before export. You can modify headlines, descriptions, paths, and extensions to better fit your needs.
|
||||
|
||||
Q: How many ads can I generate? A: The tool allows unlimited ad generation within your Alwrity subscription limits.
|
||||
|
||||
Q: Are the generated ads compliant with Google's policies? A: The tool is designed to create policy-compliant ads, but we recommend reviewing Google's latest advertising policies as they may change over time.
|
||||
|
||||
Technical Questions
|
||||
Q: Can I import my existing ads for optimization? A: Currently, the tool does not support importing existing ads, but this feature is on our roadmap.
|
||||
|
||||
Q: How do I import the exported files into Google Ads? A: For Google Ads Editor CSV files, open Google Ads Editor, go to File > Import, and select your exported file. For other formats, you may need to manually create campaigns using the generated content.
|
||||
|
||||
Q: Can I schedule automatic ad generation? A: Automated scheduling is not currently available but is planned for a future release.
|
||||
|
||||
Troubleshooting
|
||||
Common Issues
|
||||
Issue: Generated ads don't include my keywords Solution: Ensure your keywords are relevant to your business description and offerings. Try using more specific keywords or providing more detailed business information.
|
||||
|
||||
Issue: Quality score is consistently low Solution: Review the improvement suggestions in the Ad Performance tab. Common issues include keyword relevance, landing page alignment, and benefit clarity.
|
||||
|
||||
Issue: Export file isn't importing correctly into Google Ads Editor Solution: Ensure you're selecting the "Google Ads Editor CSV" export format. If problems persist, check for special characters in your ad copy that might be causing formatting issues.
|
||||
|
||||
Issue: Performance predictions seem unrealistic Solution: Adjust your industry selection and budget information to get more accurate predictions. Consider providing more specific audience targeting information.
|
||||
|
||||
Updates and Roadmap
|
||||
Recent Updates
|
||||
Added support for Performance Max campaign recommendations
|
||||
Improved keyword research integration
|
||||
Enhanced mobile ad optimization
|
||||
Added 5 new industry templates
|
||||
Improved quality score algorithm
|
||||
Coming Soon
|
||||
Competitor ad analysis tool
|
||||
A/B testing performance simulator
|
||||
Landing page builder integration
|
||||
Automated ad scheduling recommendations
|
||||
Video ad script generator
|
||||
Google Shopping ad support
|
||||
Multi-language ad generation
|
||||
Custom template builder
|
||||
Support
|
||||
For additional help with the Google Ads Generator:
|
||||
|
||||
Visit our Help Center
|
||||
Email support at support@example.com
|
||||
Join our Community Forum
|
||||
License
|
||||
The Google Ads Generator is part of the Alwrity AI Writer platform and is subject to the platform's terms of service and licensing agreements.
|
||||
|
||||
Acknowledgments
|
||||
Google Ads API documentation
|
||||
Industry best practices from leading digital marketing experts
|
||||
User feedback and feature requests
|
||||
Last updated: [Current Date]
|
||||
|
||||
Version: 1.0.0

@@ -0,0 +1,9 @@
"""
Google Ads Generator Module

This module provides functionality for generating high-converting Google Ads.
"""

from .google_ads_generator import write_google_ads

__all__ = ["write_google_ads"]

@@ -0,0 +1,327 @@
"""
|
||||
Ad Analyzer Module
|
||||
|
||||
This module provides functions for analyzing and scoring Google Ads.
|
||||
"""
|
||||
|
||||
import re
|
||||
from typing import Dict, List, Any, Tuple
|
||||
import random
|
||||
from urllib.parse import urlparse
|
||||
|
||||
def analyze_ad_quality(ad: Dict, primary_keywords: List[str], secondary_keywords: List[str],
|
||||
business_name: str, call_to_action: str) -> Dict:
|
||||
"""
|
||||
Analyze the quality of a Google Ad based on best practices.
|
||||
|
||||
Args:
|
||||
ad: Dictionary containing ad details
|
||||
primary_keywords: List of primary keywords
|
||||
secondary_keywords: List of secondary keywords
|
||||
business_name: Name of the business
|
||||
call_to_action: Call to action text
|
||||
|
||||
Returns:
|
||||
Dictionary with analysis results
|
||||
"""
|
||||
# Initialize results
|
||||
strengths = []
|
||||
improvements = []
|
||||
|
||||
# Get ad components
|
||||
headlines = ad.get("headlines", [])
|
||||
descriptions = ad.get("descriptions", [])
|
||||
path1 = ad.get("path1", "")
|
||||
path2 = ad.get("path2", "")
|
||||
|
||||
# Check headline count
|
||||
if len(headlines) >= 10:
|
||||
strengths.append("Good number of headlines (10+) for optimization")
|
||||
elif len(headlines) >= 5:
|
||||
strengths.append("Adequate number of headlines for testing")
|
||||
else:
|
||||
improvements.append("Add more headlines (aim for 10+) to give Google's algorithm more options")
|
||||
|
||||
# Check description count
|
||||
if len(descriptions) >= 4:
|
||||
strengths.append("Good number of descriptions (4+) for optimization")
|
||||
elif len(descriptions) >= 2:
|
||||
strengths.append("Adequate number of descriptions for testing")
|
||||
else:
|
||||
improvements.append("Add more descriptions (aim for 4+) to give Google's algorithm more options")
|
||||
|
||||
# Check headline length
|
||||
long_headlines = [h for h in headlines if len(h) > 30]
|
||||
if long_headlines:
|
||||
improvements.append(f"{len(long_headlines)} headline(s) exceed 30 characters and may be truncated")
|
||||
else:
|
||||
strengths.append("All headlines are within the recommended length")
|
||||
|
||||
# Check description length
|
||||
long_descriptions = [d for d in descriptions if len(d) > 90]
|
||||
if long_descriptions:
|
||||
improvements.append(f"{len(long_descriptions)} description(s) exceed 90 characters and may be truncated")
|
||||
else:
|
||||
strengths.append("All descriptions are within the recommended length")
|
||||
|
||||
# Check keyword usage in headlines
|
||||
headline_keywords = []
|
||||
for kw in primary_keywords:
|
||||
if any(kw.lower() in h.lower() for h in headlines):
|
||||
headline_keywords.append(kw)
|
||||
|
||||
if len(headline_keywords) == len(primary_keywords):
|
||||
strengths.append("All primary keywords are used in headlines")
|
||||
elif headline_keywords:
|
||||
strengths.append(f"{len(headline_keywords)} out of {len(primary_keywords)} primary keywords used in headlines")
|
||||
missing_kw = [kw for kw in primary_keywords if kw not in headline_keywords]
|
||||
improvements.append(f"Add these primary keywords to headlines: {', '.join(missing_kw)}")
|
||||
else:
|
||||
improvements.append("No primary keywords found in headlines - add keywords to improve relevance")
|
||||
|
||||
# Check keyword usage in descriptions
|
||||
desc_keywords = []
|
||||
for kw in primary_keywords:
|
||||
if any(kw.lower() in d.lower() for d in descriptions):
|
||||
desc_keywords.append(kw)
|
||||
|
||||
if len(desc_keywords) == len(primary_keywords):
|
||||
strengths.append("All primary keywords are used in descriptions")
|
||||
elif desc_keywords:
|
||||
strengths.append(f"{len(desc_keywords)} out of {len(primary_keywords)} primary keywords used in descriptions")
|
||||
missing_kw = [kw for kw in primary_keywords if kw not in desc_keywords]
|
||||
improvements.append(f"Add these primary keywords to descriptions: {', '.join(missing_kw)}")
|
||||
else:
|
||||
improvements.append("No primary keywords found in descriptions - add keywords to improve relevance")
|
||||
|
||||
# Check for business name
|
||||
if any(business_name.lower() in h.lower() for h in headlines):
|
||||
strengths.append("Business name is included in headlines")
|
||||
else:
|
||||
improvements.append("Consider adding your business name to at least one headline")
|
||||
|
||||
# Check for call to action
|
||||
if any(call_to_action.lower() in h.lower() for h in headlines) or any(call_to_action.lower() in d.lower() for d in descriptions):
|
||||
strengths.append("Call to action is included in the ad")
|
||||
else:
|
||||
improvements.append(f"Add your call to action '{call_to_action}' to at least one headline or description")
|
||||
|
||||
# Check for numbers and statistics
|
||||
has_numbers = any(bool(re.search(r'\d+', h)) for h in headlines) or any(bool(re.search(r'\d+', d)) for d in descriptions)
|
||||
if has_numbers:
|
||||
strengths.append("Ad includes numbers or statistics which can improve CTR")
|
||||
else:
|
||||
improvements.append("Consider adding numbers or statistics to increase credibility and CTR")
|
||||
|
||||
# Check for questions
|
||||
has_questions = any('?' in h for h in headlines) or any('?' in d for d in descriptions)
|
||||
if has_questions:
|
||||
strengths.append("Ad includes questions which can engage users")
|
||||
else:
|
||||
improvements.append("Consider adding a question to engage users")
|
||||
|
||||
# Check for emotional triggers
|
||||
emotional_words = ['you', 'free', 'because', 'instantly', 'new', 'save', 'proven', 'guarantee', 'love', 'discover']
|
||||
has_emotional = any(any(word in h.lower() for word in emotional_words) for h in headlines) or \
|
||||
any(any(word in d.lower() for word in emotional_words) for d in descriptions)
|
||||
|
||||
if has_emotional:
|
||||
strengths.append("Ad includes emotional trigger words which can improve engagement")
|
||||
else:
|
||||
improvements.append("Consider adding emotional trigger words to increase engagement")
|
||||
|
||||
# Check for path relevance
|
||||
if any(kw.lower() in path1.lower() or kw.lower() in path2.lower() for kw in primary_keywords):
|
||||
strengths.append("Display URL paths include keywords which improves relevance")
|
||||
else:
|
||||
improvements.append("Add keywords to your display URL paths to improve relevance")
|
||||
|
||||
# Return the analysis results
|
||||
return {
|
||||
"strengths": strengths,
|
||||
"improvements": improvements
|
||||
}
|
||||
|
||||
def calculate_quality_score(ad: Dict, primary_keywords: List[str], landing_page: str, ad_type: str) -> Dict:
|
||||
"""
|
||||
Calculate a quality score for a Google Ad based on best practices.
|
||||
|
||||
Args:
|
||||
ad: Dictionary containing ad details
|
||||
primary_keywords: List of primary keywords
|
||||
landing_page: Landing page URL
|
||||
ad_type: Type of Google Ad
|
||||
|
||||
Returns:
|
||||
Dictionary with quality score components
|
||||
"""
|
||||
# Initialize scores
|
||||
keyword_relevance = 0
|
||||
ad_relevance = 0
|
||||
cta_effectiveness = 0
|
||||
landing_page_relevance = 0
|
||||
|
||||
# Get ad components
|
||||
headlines = ad.get("headlines", [])
|
||||
descriptions = ad.get("descriptions", [])
|
||||
path1 = ad.get("path1", "")
|
||||
path2 = ad.get("path2", "")
|
||||
|
||||
# Calculate keyword relevance (0-10)
|
||||
# Check if keywords are in headlines, descriptions, and paths
|
||||
keyword_in_headline = sum(1 for kw in primary_keywords if any(kw.lower() in h.lower() for h in headlines))
|
||||
keyword_in_description = sum(1 for kw in primary_keywords if any(kw.lower() in d.lower() for d in descriptions))
|
||||
keyword_in_path = sum(1 for kw in primary_keywords if kw.lower() in path1.lower() or kw.lower() in path2.lower())
|
||||
|
||||
# Calculate score based on keyword presence
|
||||
if len(primary_keywords) > 0:
|
||||
headline_score = min(10, (keyword_in_headline / len(primary_keywords)) * 10)
|
||||
description_score = min(10, (keyword_in_description / len(primary_keywords)) * 10)
|
||||
path_score = min(10, (keyword_in_path / len(primary_keywords)) * 10)
|
||||
|
||||
# Weight the scores (headlines most important)
|
||||
keyword_relevance = (headline_score * 0.6) + (description_score * 0.3) + (path_score * 0.1)
|
||||
else:
|
||||
keyword_relevance = 5 # Default score if no keywords provided
|
||||
|
||||
# Calculate ad relevance (0-10)
|
||||
# Check for ad structure and content quality
|
||||
|
||||
# Check headline count and length
|
||||
headline_count_score = min(10, (len(headlines) / 10) * 10) # Ideal: 10+ headlines
|
||||
headline_length_score = 10 - min(10, (sum(1 for h in headlines if len(h) > 30) / max(1, len(headlines))) * 10)
|
||||
|
||||
# Check description count and length
|
||||
description_count_score = min(10, (len(descriptions) / 4) * 10) # Ideal: 4+ descriptions
|
||||
description_length_score = 10 - min(10, (sum(1 for d in descriptions if len(d) > 90) / max(1, len(descriptions))) * 10)
|
||||
|
||||
    # Check for emotional triggers, questions, numbers
    emotional_words = ['you', 'free', 'because', 'instantly', 'new', 'save', 'proven', 'guarantee', 'love', 'discover']
    emotional_score = min(10, sum(1 for h in headlines if any(word in h.lower() for word in emotional_words)) +
                          sum(1 for d in descriptions if any(word in d.lower() for word in emotional_words)))

    question_score = min(10, (sum(1 for h in headlines if '?' in h) + sum(1 for d in descriptions if '?' in d)) * 2)

    number_score = min(10, (sum(1 for h in headlines if bool(re.search(r'\d+', h))) +
                            sum(1 for d in descriptions if bool(re.search(r'\d+', d)))) * 2)

    # Calculate overall ad relevance score
    ad_relevance = (headline_count_score * 0.15) + (headline_length_score * 0.15) + \
                   (description_count_score * 0.15) + (description_length_score * 0.15) + \
                   (emotional_score * 0.2) + (question_score * 0.1) + (number_score * 0.1)

    # Calculate CTA effectiveness (0-10)
    # Check for clear call to action
    cta_phrases = ['get', 'buy', 'shop', 'order', 'sign up', 'register', 'download', 'learn', 'discover', 'find', 'call',
                   'contact', 'request', 'start', 'try', 'join', 'subscribe', 'book', 'schedule', 'apply']

    cta_in_headline = any(any(phrase in h.lower() for phrase in cta_phrases) for h in headlines)
    cta_in_description = any(any(phrase in d.lower() for phrase in cta_phrases) for d in descriptions)

    if cta_in_headline and cta_in_description:
        cta_effectiveness = 10
    elif cta_in_headline:
        cta_effectiveness = 8
    elif cta_in_description:
        cta_effectiveness = 7
    else:
        cta_effectiveness = 4

    # Calculate landing page relevance (0-10)
    # In a real implementation, this would analyze the landing page content
    # For this example, we'll use a simplified approach
    if landing_page:
        # Check if domain seems relevant to keywords
        domain = urlparse(landing_page).netloc

        # Check if keywords are in the domain or path
        keyword_in_url = any(kw.lower() in landing_page.lower() for kw in primary_keywords)

        # Check if URL structure seems appropriate
        has_https = landing_page.startswith('https://')

        # Calculate landing page score
        landing_page_relevance = 5  # Base score

        if keyword_in_url:
            landing_page_relevance += 3

        if has_https:
            landing_page_relevance += 2

        # Cap at 10
        landing_page_relevance = min(10, landing_page_relevance)
    else:
        landing_page_relevance = 5  # Default score if no landing page provided

    # Calculate overall quality score (0-10)
    overall_score = (keyword_relevance * 0.4) + (ad_relevance * 0.3) + (cta_effectiveness * 0.2) + (landing_page_relevance * 0.1)

    # Calculate estimated CTR based on quality score
    # This is a simplified model - in reality, CTR depends on many factors
    base_ctr = {
        "Responsive Search Ad": 3.17,
        "Expanded Text Ad": 2.83,
        "Call-Only Ad": 3.48,
        "Dynamic Search Ad": 2.69
    }.get(ad_type, 3.0)

    # Adjust CTR based on quality score (±50%)
    quality_factor = (overall_score - 5) / 5  # -1 to 1
    estimated_ctr = base_ctr * (1 + (quality_factor * 0.5))

    # Calculate estimated conversion rate
    # Again, this is simplified - actual conversion rates depend on many factors
    base_conversion_rate = 3.75  # Average conversion rate for search ads

    # Adjust conversion rate based on quality score (±40%)
    estimated_conversion_rate = base_conversion_rate * (1 + (quality_factor * 0.4))

    # Return the quality score components
    return {
        "keyword_relevance": round(keyword_relevance, 1),
        "ad_relevance": round(ad_relevance, 1),
        "cta_effectiveness": round(cta_effectiveness, 1),
        "landing_page_relevance": round(landing_page_relevance, 1),
        "overall_score": round(overall_score, 1),
        "estimated_ctr": round(estimated_ctr, 2),
        "estimated_conversion_rate": round(estimated_conversion_rate, 2)
    }


def analyze_keyword_relevance(keywords: List[str], ad_text: str) -> Dict:
    """
    Analyze the relevance of keywords to ad text.

    Args:
        keywords: List of keywords to analyze
        ad_text: Combined ad text (headlines and descriptions)

    Returns:
        Dictionary with keyword relevance analysis
    """
    results = {}

    for keyword in keywords:
        # Check if keyword is in ad text
        is_present = keyword.lower() in ad_text.lower()

        # Check if keyword is in the first 100 characters
        is_in_beginning = keyword.lower() in ad_text.lower()[:100]

        # Count occurrences
        occurrences = ad_text.lower().count(keyword.lower())

        # Calculate density
        density = (occurrences * len(keyword)) / len(ad_text) * 100 if len(ad_text) > 0 else 0

        # Store results
        results[keyword] = {
            "present": is_present,
            "in_beginning": is_in_beginning,
            "occurrences": occurrences,
            "density": round(density, 2),
            "optimal_density": 0.5 <= density <= 2.5
        }

    return results

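# --- Illustrative usage sketch (added for clarity; not part of the original module) ---
# The sample keywords and ad copy below are invented for demonstration only.
if __name__ == "__main__":
    sample_ad_text = (
        "Discover AI Writing Tools - Try Free Today. "
        "Our AI tools help you write better content, faster."
    )
    for kw, stats in analyze_keyword_relevance(["ai tools", "content writing"], sample_ad_text).items():
        # Counts are case-insensitive substring matches against the combined ad text.
        print(f"{kw}: present={stats['present']}, occurrences={stats['occurrences']}, density={stats['density']}%")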
@@ -0,0 +1,320 @@
"""
Ad Extensions Generator Module

This module provides functions for generating various types of Google Ads extensions.
"""

from typing import Dict, List, Any, Optional
import re
from ...gpt_providers.text_generation.main_text_generation import llm_text_gen


def generate_extensions(business_name: str, business_description: str, industry: str,
                        primary_keywords: List[str], unique_selling_points: List[str],
                        landing_page: str) -> Dict:
    """
    Generate a complete set of ad extensions based on business information.

    Args:
        business_name: Name of the business
        business_description: Description of the business
        industry: Industry of the business
        primary_keywords: List of primary keywords
        unique_selling_points: List of unique selling points
        landing_page: Landing page URL

    Returns:
        Dictionary with generated extensions
    """
    # Generate sitelinks
    sitelinks = generate_sitelinks(business_name, business_description, industry, primary_keywords, landing_page)

    # Generate callouts
    callouts = generate_callouts(business_name, unique_selling_points, industry)

    # Generate structured snippets
    snippets = generate_structured_snippets(business_name, business_description, industry, primary_keywords)

    # Return all extensions
    return {
        "sitelinks": sitelinks,
        "callouts": callouts,
        "structured_snippets": snippets
    }

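# Illustrative call (hypothetical inputs, shown here for reference only):
#
#     extensions = generate_extensions(
#         business_name="Acme Analytics",
#         business_description="Self-serve dashboards for small teams",
#         industry="SaaS/Technology",
#         primary_keywords=["analytics dashboard"],
#         unique_selling_points=["Free 14-day trial", "No credit card required"],
#         landing_page="https://example.com",
#     )
#     # extensions -> {"sitelinks": [...], "callouts": [...], "structured_snippets": {...}}
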
def generate_sitelinks(business_name: str, business_description: str, industry: str,
                       primary_keywords: List[str], landing_page: str) -> List[Dict]:
    """
    Generate sitelink extensions based on business information.

    Args:
        business_name: Name of the business
        business_description: Description of the business
        industry: Industry of the business
        primary_keywords: List of primary keywords
        landing_page: Landing page URL

    Returns:
        List of dictionaries with sitelink information
    """
    # Define common sitelink types by industry
    industry_sitelinks = {
        "E-commerce": ["Shop Now", "Best Sellers", "New Arrivals", "Sale Items", "Customer Reviews", "About Us"],
        "SaaS/Technology": ["Features", "Pricing", "Demo", "Case Studies", "Support", "Blog"],
        "Healthcare": ["Services", "Locations", "Providers", "Insurance", "Patient Portal", "Contact Us"],
        "Education": ["Programs", "Admissions", "Campus", "Faculty", "Student Life", "Apply Now"],
        "Finance": ["Services", "Rates", "Calculators", "Locations", "Apply Now", "About Us"],
        "Real Estate": ["Listings", "Sell Your Home", "Neighborhoods", "Agents", "Mortgage", "Contact Us"],
        "Legal": ["Practice Areas", "Attorneys", "Results", "Testimonials", "Free Consultation", "Contact"],
        "Travel": ["Destinations", "Deals", "Book Now", "Reviews", "FAQ", "Contact Us"],
        "Food & Beverage": ["Menu", "Locations", "Order Online", "Reservations", "Catering", "About Us"]
    }

    # Get sitelinks for the specified industry, or use default
    sitelink_types = industry_sitelinks.get(industry, ["About Us", "Services", "Products", "Contact Us", "Testimonials", "FAQ"])

    # Generate sitelinks
    sitelinks = []
    base_url = landing_page.rstrip('/') if landing_page else ""

    for sitelink_type in sitelink_types:
        # Generate URL path based on sitelink type
        path = sitelink_type.lower().replace(' ', '-')
        url = f"{base_url}/{path}" if base_url else f"https://example.com/{path}"

        # Generate description based on sitelink type
        description = ""
        if sitelink_type == "About Us":
            description = f"Learn more about {business_name} and our mission."
        elif sitelink_type == "Services" or sitelink_type == "Products":
            description = f"Explore our range of {primary_keywords[0] if primary_keywords else 'offerings'}."
        elif sitelink_type == "Contact Us":
            description = "Get in touch with our team for assistance."
        elif sitelink_type == "Testimonials" or sitelink_type == "Reviews":
            description = "See what our customers say about us."
        elif sitelink_type == "FAQ":
            description = "Find answers to common questions."
        elif sitelink_type == "Pricing" or sitelink_type == "Rates":
            description = "View our competitive pricing options."
        elif sitelink_type == "Shop Now" or sitelink_type == "Order Online":
            description = f"Browse and purchase our {primary_keywords[0] if primary_keywords else 'products'} online."

        # Add the sitelink
        sitelinks.append({
            "text": sitelink_type,
            "url": url,
            "description": description
        })

    return sitelinks

def generate_callouts(business_name: str, unique_selling_points: List[str], industry: str) -> List[str]:
    """
    Generate callout extensions based on business information.

    Args:
        business_name: Name of the business
        unique_selling_points: List of unique selling points
        industry: Industry of the business

    Returns:
        List of callout texts
    """
    # Use provided USPs if available
    if unique_selling_points and len(unique_selling_points) >= 4:
        # Ensure callouts are not too long (25 characters max)
        callouts = []
        for usp in unique_selling_points:
            if len(usp) <= 25:
                callouts.append(usp)
            else:
                # Truncate at a space where possible, then add an ellipsis
                truncated = usp[:22].rsplit(' ', 1)[0].rstrip() + "..."
                callouts.append(truncated)

        return callouts[:8]  # Return up to 8 callouts

    # Define common callouts by industry
    industry_callouts = {
        "E-commerce": ["Free Shipping", "24/7 Customer Service", "Secure Checkout", "Easy Returns", "Price Match Guarantee", "Next Day Delivery", "Satisfaction Guaranteed", "Exclusive Deals"],
        "SaaS/Technology": ["24/7 Support", "Free Trial", "No Credit Card Required", "Easy Integration", "Data Security", "Cloud-Based", "Regular Updates", "Customizable"],
        "Healthcare": ["Board Certified", "Most Insurance Accepted", "Same-Day Appointments", "Compassionate Care", "State-of-the-Art Facility", "Experienced Staff", "Convenient Location", "Telehealth Available"],
        "Education": ["Accredited Programs", "Expert Faculty", "Financial Aid", "Career Services", "Small Class Sizes", "Flexible Schedule", "Online Options", "Hands-On Learning"],
        "Finance": ["FDIC Insured", "No Hidden Fees", "Personalized Service", "Online Banking", "Mobile App", "Low Interest Rates", "Financial Planning", "Retirement Services"],
        "Real Estate": ["Free Home Valuation", "Virtual Tours", "Experienced Agents", "Local Expertise", "Financing Available", "Property Management", "Commercial & Residential", "Investment Properties"],
        "Legal": ["Free Consultation", "No Win No Fee", "Experienced Attorneys", "24/7 Availability", "Proven Results", "Personalized Service", "Multiple Practice Areas", "Aggressive Representation"]
    }

    # Get callouts for the specified industry, or use default
    callouts = industry_callouts.get(industry, ["Professional Service", "Experienced Team", "Customer Satisfaction", "Quality Guaranteed", "Competitive Pricing", "Fast Service", "Personalized Solutions", "Trusted Provider"])

    return callouts

def generate_structured_snippets(business_name: str, business_description: str, industry: str, primary_keywords: List[str]) -> Dict:
    """
    Generate structured snippet extensions based on business information.

    Args:
        business_name: Name of the business
        business_description: Description of the business
        industry: Industry of the business
        primary_keywords: List of primary keywords

    Returns:
        Dictionary with structured snippet information
    """
    # Define common snippet headers and values by industry
    industry_snippets = {
        "E-commerce": {
            "header": "Brands",
            "values": ["Nike", "Adidas", "Apple", "Samsung", "Sony", "LG", "Dell", "HP"]
        },
        "SaaS/Technology": {
            "header": "Services",
            "values": ["Cloud Storage", "Data Analytics", "CRM", "Project Management", "Email Marketing", "Cybersecurity", "API Integration", "Automation"]
        },
        "Healthcare": {
            "header": "Services",
            "values": ["Preventive Care", "Diagnostics", "Treatment", "Surgery", "Rehabilitation", "Counseling", "Telemedicine", "Wellness Programs"]
        },
        "Education": {
            "header": "Courses",
            "values": ["Business", "Technology", "Healthcare", "Design", "Engineering", "Education", "Arts", "Sciences"]
        },
        "Finance": {
            "header": "Services",
            "values": ["Checking Accounts", "Savings Accounts", "Loans", "Mortgages", "Investments", "Retirement Planning", "Insurance", "Wealth Management"]
        },
        "Real Estate": {
            "header": "Types",
            "values": ["Single-Family Homes", "Condos", "Townhouses", "Apartments", "Commercial", "Land", "New Construction", "Luxury Homes"]
        },
        "Legal": {
            "header": "Services",
            "values": ["Personal Injury", "Family Law", "Criminal Defense", "Estate Planning", "Business Law", "Immigration", "Real Estate Law", "Intellectual Property"]
        }
    }

    # Get snippets for the specified industry, or use default
    snippet_info = industry_snippets.get(industry, {
        "header": "Services",
        "values": ["Consultation", "Assessment", "Implementation", "Support", "Maintenance", "Training", "Customization", "Analysis"]
    })

    # If we have primary keywords, try to incorporate them
    if primary_keywords:
        # Try to determine a better header based on keywords
        service_keywords = ["service", "support", "consultation", "assistance", "help"]
        product_keywords = ["product", "item", "good", "merchandise"]
        brand_keywords = ["brand", "make", "manufacturer"]

        for kw in primary_keywords:
            kw_lower = kw.lower()
            if any(service_word in kw_lower for service_word in service_keywords):
                snippet_info["header"] = "Services"
                break
            elif any(product_word in kw_lower for product_word in product_keywords):
                snippet_info["header"] = "Products"
                break
            elif any(brand_word in kw_lower for brand_word in brand_keywords):
                snippet_info["header"] = "Brands"
                break

    return snippet_info

def generate_custom_extensions(business_info: Dict, extension_type: str) -> Any:
    """
    Generate custom extensions using AI based on business information.

    Args:
        business_info: Dictionary with business information
        extension_type: Type of extension to generate

    Returns:
        Generated extension data
    """
    # Extract business information
    business_name = business_info.get("business_name", "")
    business_description = business_info.get("business_description", "")
    industry = business_info.get("industry", "")
    primary_keywords = business_info.get("primary_keywords", [])
    unique_selling_points = business_info.get("unique_selling_points", [])

    # Create a prompt based on extension type
    if extension_type == "sitelinks":
        prompt = f"""
        Generate 6 sitelink extensions for a Google Ads campaign for the following business:

        Business Name: {business_name}
        Business Description: {business_description}
        Industry: {industry}
        Keywords: {', '.join(primary_keywords)}

        For each sitelink, provide:
        1. Link text (max 25 characters)
        2. Description line 1 (max 35 characters)
        3. Description line 2 (max 35 characters)

        Format the response as a JSON array of objects with "text", "description1", and "description2" fields.
        """
    elif extension_type == "callouts":
        prompt = f"""
        Generate 8 callout extensions for a Google Ads campaign for the following business:

        Business Name: {business_name}
        Business Description: {business_description}
        Industry: {industry}
        Keywords: {', '.join(primary_keywords)}
        Unique Selling Points: {', '.join(unique_selling_points)}

        Each callout should:
        1. Be 25 characters or less
        2. Highlight a feature, benefit, or unique selling point
        3. Be concise and impactful

        Format the response as a JSON array of strings.
        """
    elif extension_type == "structured_snippets":
        prompt = f"""
        Generate structured snippet extensions for a Google Ads campaign for the following business:

        Business Name: {business_name}
        Business Description: {business_description}
        Industry: {industry}
        Keywords: {', '.join(primary_keywords)}

        Provide:
        1. The most appropriate header type (e.g., Brands, Services, Products, Courses, etc.)
        2. 8 values that are relevant to the business (each 25 characters or less)

        Format the response as a JSON object with "header" and "values" fields.
        """
    else:
        return None

    # Generate the extensions using the LLM
    try:
        response = llm_text_gen(prompt)

        # Process the response based on extension type
        # In a real implementation, you would parse the JSON response
        # For this example, we'll return a placeholder

        if extension_type == "sitelinks":
            return [
                {"text": "About Us", "description1": "Learn about our company", "description2": "Our history and mission"},
                {"text": "Services", "description1": "Explore our service offerings", "description2": "Solutions for your needs"},
                {"text": "Products", "description1": "Browse our product catalog", "description2": "Quality items at great prices"},
                {"text": "Contact Us", "description1": "Get in touch with our team", "description2": "We're here to help you"},
                {"text": "Testimonials", "description1": "See what customers say", "description2": "Real reviews from real people"},
                {"text": "FAQ", "description1": "Frequently asked questions", "description2": "Find quick answers here"}
            ]
        elif extension_type == "callouts":
            return ["Free Shipping", "24/7 Support", "Money-Back Guarantee", "Expert Team", "Premium Quality", "Fast Service", "Affordable Prices", "Satisfaction Guaranteed"]
        elif extension_type == "structured_snippets":
            return {"header": "Services", "values": ["Consultation", "Installation", "Maintenance", "Repair", "Training", "Support", "Design", "Analysis"]}
        else:
            return None

    except Exception as e:
        print(f"Error generating extensions: {str(e)}")
        return None

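# --- Illustrative sketch (added for clarity; not part of the original module) ---
# The function above returns hard-coded placeholders. One possible way to parse a
# real JSON reply from llm_text_gen is sketched below; the helper name and its
# fallback behaviour are assumptions, not existing project code.
import json


def _parse_llm_json(response: str) -> Optional[Any]:
    """Best-effort extraction of a JSON object or array from an LLM reply."""
    try:
        return json.loads(response)
    except json.JSONDecodeError:
        # Fall back to the first {...} or [...] span in the reply, if any.
        match = re.search(r'(\[.*\]|\{.*\})', response, re.DOTALL)
        if match:
            try:
                return json.loads(match.group(1))
            except json.JSONDecodeError:
                return None
        return None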
@@ -0,0 +1,219 @@
"""
Ad Templates Module

This module provides templates for different ad types and industries.
"""

from typing import Dict, List, Any


def get_industry_templates(industry: str) -> Dict:
    """
    Get ad templates specific to an industry.

    Args:
        industry: The industry to get templates for

    Returns:
        Dictionary with industry-specific templates
    """
    # Define templates for different industries
    templates = {
        "E-commerce": {
            "headline_templates": [
                "{product} - {benefit} | {business_name}",
                "Shop {product} - {discount} Off Today",
                "Top-Rated {product} - Free Shipping",
                "{benefit} with Our {product}",
                "New {product} Collection - {benefit}",
                "{discount}% Off {product} - Limited Time",
                "Buy {product} Online - Fast Delivery",
                "{product} Sale Ends {timeframe}",
                "Best-Selling {product} from {business_name}",
                "Premium {product} - {benefit}"
            ],
            "description_templates": [
                "Shop our selection of {product} and enjoy {benefit}. Free shipping on orders over ${amount}. Order now!",
                "Looking for quality {product}? Get {benefit} with our {product}. {discount} off your first order!",
                "{business_name} offers premium {product} with {benefit}. Shop online or visit our store today!",
                "Discover our {product} collection. {benefit} guaranteed or your money back. Order now and save {discount}!"
            ],
            "emotional_triggers": ["exclusive", "limited time", "sale", "discount", "free shipping", "bestseller", "new arrival"],
            "call_to_actions": ["Shop Now", "Buy Today", "Order Online", "Get Yours", "Add to Cart", "Save Today"]
        },
        "SaaS/Technology": {
            "headline_templates": [
                "{product} Software - {benefit}",
                "Try {product} Free for {timeframe}",
                "{benefit} with Our {product} Platform",
                "{product} - Rated #1 for {feature}",
                "New {feature} in Our {product} Software",
                "{business_name} - {benefit} Software",
                "Streamline {pain_point} with {product}",
                "{product} Software - {discount} Off",
                "Enterprise-Grade {product} for {audience}",
                "{product} - {benefit} Guaranteed"
            ],
            "description_templates": [
                "{business_name}'s {product} helps you {benefit}. Try it free for {timeframe}. No credit card required.",
                "Struggling with {pain_point}? Our {product} provides {benefit}. Join {number}+ satisfied customers.",
                "Our {product} platform offers {feature} to help you {benefit}. Rated {rating}/5 by {source}.",
                "{product} by {business_name}: {benefit} for your business. Plans starting at ${price}/month."
            ],
            "emotional_triggers": ["efficient", "time-saving", "seamless", "integrated", "secure", "scalable", "innovative"],
            "call_to_actions": ["Start Free Trial", "Request Demo", "Learn More", "Sign Up Free", "Get Started", "See Plans"]
        },
        "Healthcare": {
            "headline_templates": [
                "{service} in {location} | {business_name}",
                "Expert {service} - {benefit}",
                "Quality {service} for {audience}",
                "{business_name} - {credential} {professionals}",
                "Same-Day {service} Appointments",
                "{service} Specialists in {location}",
                "Affordable {service} - {benefit}",
                "{symptom}? Get {service} Today",
                "Advanced {service} Technology",
                "Compassionate {service} Care"
            ],
            "description_templates": [
                "{business_name} provides expert {service} with {benefit}. Our {credential} team is ready to help. Schedule today!",
                "Experiencing {symptom}? Our {professionals} offer {service} with {benefit}. Most insurance accepted.",
                "Quality {service} in {location}. {benefit} from our experienced team. Call now to schedule your appointment.",
                "Our {service} center provides {benefit} for {audience}. Open {days} with convenient hours."
            ],
            "emotional_triggers": ["trusted", "experienced", "compassionate", "advanced", "personalized", "comprehensive", "gentle"],
            "call_to_actions": ["Schedule Now", "Book Appointment", "Call Today", "Free Consultation", "Learn More", "Find Relief"]
        },
        "Real Estate": {
            "headline_templates": [
                "{property_type} in {location} | {business_name}",
                "{property_type} for {price_range} - {location}",
                "Find Your Dream {property_type} in {location}",
                "{feature} {property_type} - {location}",
                "New {property_type} Listings in {location}",
                "Sell Your {property_type} in {timeframe}",
                "{business_name} - {credential} {professionals}",
                "{property_type} {benefit} - {location}",
                "Exclusive {property_type} Listings",
                "{number}+ {property_type} Available Now"
            ],
            "description_templates": [
                "Looking for {property_type} in {location}? {business_name} offers {benefit}. Browse our listings or call us today!",
                "Sell your {property_type} in {location} with {business_name}. Our {professionals} provide {benefit}. Free valuation!",
                "{business_name}: {credential} {professionals} helping you find the perfect {property_type} in {location}. Call now!",
                "Discover {feature} {property_type} in {location}. Prices from {price_range}. Schedule a viewing today!"
            ],
            "emotional_triggers": ["dream home", "exclusive", "luxury", "investment", "perfect location", "spacious", "modern"],
            "call_to_actions": ["View Listings", "Schedule Viewing", "Free Valuation", "Call Now", "Learn More", "Get Pre-Approved"]
        }
    }

    # Return templates for the specified industry, or a default if not found
    return templates.get(industry, {
        "headline_templates": [
            "{product/service} - {benefit} | {business_name}",
            "Professional {product/service} - {benefit}",
            "{benefit} with Our {product/service}",
            "{business_name} - {credential} {product/service}",
            "Quality {product/service} for {audience}",
            "Affordable {product/service} - {benefit}",
            "{product/service} in {location}",
            "{feature} {product/service} by {business_name}",
            "Experienced {product/service} Provider",
            "{product/service} - Satisfaction Guaranteed"
        ],
        "description_templates": [
            "{business_name} offers professional {product/service} with {benefit}. Contact us today to learn more!",
            "Looking for quality {product/service}? {business_name} provides {benefit}. Call now for more information.",
            "Our {product/service} helps you {benefit}. Trusted by {number}+ customers. Contact us today!",
            "{business_name}: {credential} {product/service} provider. We offer {benefit} for {audience}. Learn more!"
        ],
        "emotional_triggers": ["professional", "quality", "trusted", "experienced", "affordable", "reliable", "satisfaction"],
        "call_to_actions": ["Contact Us", "Learn More", "Call Now", "Get Quote", "Visit Website", "Schedule Consultation"]
    })

def get_ad_type_templates(ad_type: str) -> Dict:
    """
    Get templates specific to an ad type.

    Args:
        ad_type: The ad type to get templates for

    Returns:
        Dictionary with ad type-specific templates
    """
    # Define templates for different ad types
    templates = {
        "Responsive Search Ad": {
            "headline_count": 15,
            "description_count": 4,
            "headline_max_length": 30,
            "description_max_length": 90,
            "best_practices": [
                "Include at least 3 headlines with keywords",
                "Create headlines with different lengths",
                "Include at least 1 headline with a call to action",
                "Include at least 1 headline with your brand name",
                "Create descriptions that complement each other",
                "Include keywords in at least 2 descriptions",
                "Include a call to action in at least 1 description"
            ]
        },
        "Expanded Text Ad": {
            "headline_count": 3,
            "description_count": 2,
            "headline_max_length": 30,
            "description_max_length": 90,
            "best_practices": [
                "Include keywords in Headline 1",
                "Use a call to action in Headline 2 or 3",
                "Include your brand name in one headline",
                "Make descriptions complementary but able to stand alone",
                "Include keywords in at least one description",
                "Include a call to action in at least one description"
            ]
        },
        "Call-Only Ad": {
            "headline_count": 2,
            "description_count": 2,
            "headline_max_length": 30,
            "description_max_length": 90,
            "best_practices": [
                "Focus on encouraging phone calls",
                "Include language like 'Call now', 'Speak to an expert', etc.",
                "Mention phone availability (e.g., '24/7', 'Available now')",
                "Include benefits of calling rather than clicking",
                "Be clear about who will answer the call",
                "Include any special offers for callers"
            ]
        },
        "Dynamic Search Ad": {
            "headline_count": 0,  # Headlines are dynamically generated
            "description_count": 2,
            "headline_max_length": 0,  # N/A
            "description_max_length": 90,
            "best_practices": [
                "Create descriptions that work with any dynamically generated headline",
                "Focus on your unique selling points",
                "Include a strong call to action",
                "Highlight benefits that apply across your product/service range",
                "Avoid specific product mentions that might not match the dynamic headline"
            ]
        }
    }

    # Return templates for the specified ad type, or a default if not found
    return templates.get(ad_type, {
        "headline_count": 3,
        "description_count": 2,
        "headline_max_length": 30,
        "description_max_length": 90,
        "best_practices": [
            "Include keywords in headlines",
            "Use a call to action",
            "Include your brand name",
            "Make descriptions informative and compelling",
            "Include keywords in descriptions",
            "Highlight unique selling points"
        ]
    })

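# --- Illustrative usage sketch (added for clarity; not part of the original module) ---
# Combines the two lookups: fill one headline template with hypothetical values,
# then check the result against the ad type's headline length limit.
if __name__ == "__main__":
    industry_tpl = get_industry_templates("SaaS/Technology")
    ad_type_tpl = get_ad_type_templates("Responsive Search Ad")

    headline = industry_tpl["headline_templates"][1].format(product="Analytics", timeframe="30 Days")
    max_len = ad_type_tpl["headline_max_length"]
    status = "OK" if len(headline) <= max_len else "too long"
    print(f"{headline!r}: {len(headline)} chars (limit {max_len}) -> {status}")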
215
ToBeMigrated/ai_seo_tools/ENTERPRISE_FEATURES.md
Normal file
215
ToBeMigrated/ai_seo_tools/ENTERPRISE_FEATURES.md
Normal file
@@ -0,0 +1,215 @@
|
||||
# Alwrity Enterprise SEO Features
|
||||
|
||||
## 🚀 Overview
|
||||
|
||||
Alwrity's AI SEO Tools have been enhanced with enterprise-level features that provide comprehensive SEO management, advanced analytics, and AI-powered strategic insights. These enhancements transform Alwrity from a collection of individual tools into a unified enterprise SEO command center.
|
||||
|
||||
## 🏢 Enterprise SEO Suite
|
||||
|
||||
### Unified Command Center (`enterprise_seo_suite.py`)
|
||||
|
||||
The Enterprise SEO Suite serves as a central orchestrator for all SEO activities, providing:
|
||||
|
||||
#### Core Workflows
|
||||
- **Complete SEO Audit**: Comprehensive site analysis combining technical, content, and performance metrics
|
||||
- **Content Strategy Development**: AI-powered content planning with market intelligence
|
||||
- **Search Intelligence Analysis**: Deep GSC data analysis with actionable insights
|
||||
- **Performance Monitoring**: Continuous tracking and optimization recommendations
|
||||
|
||||
#### Key Features
|
||||
- **Intelligent Workflow Orchestration**: Automatically sequences and coordinates multiple SEO analyses
|
||||
- **AI-Powered Recommendations**: Uses advanced AI to generate strategic insights and action plans
|
||||
- **Enterprise Reporting**: Comprehensive reports suitable for executive and team consumption
|
||||
- **Scalable Architecture**: Designed to handle multiple sites and large datasets
|
||||
|
||||
### Enterprise-Level Capabilities
|
||||
- Multi-site management support
|
||||
- Role-based access controls (planned)
|
||||
- Team collaboration features (planned)
|
||||
- Advanced reporting and dashboards
|
||||
- API integration capabilities
|
||||
|
||||
## 📊 Google Search Console Intelligence
|
||||
|
||||
### Advanced GSC Integration (`google_search_console_integration.py`)
|
||||
|
||||
Transforms raw GSC data into strategic insights with:
|
||||
|
||||
#### Search Performance Analysis
|
||||
- **Comprehensive Metrics**: Clicks, impressions, CTR, and position tracking
|
||||
- **Trend Analysis**: Week-over-week and month-over-month performance trends
|
||||
- **Keyword Performance**: Deep analysis of keyword opportunities and optimization potential
|
||||
- **Page Performance**: Identification of top-performing and underperforming pages
|
||||
|
||||
#### Content Opportunities Engine
- **CTR Optimization**: Identifies high-impression, low-CTR keywords for meta optimization (see the filtering sketch after this list)
- **Position Improvement**: Highlights keywords ranking 11-20 for content enhancement
- **Content Gap Detection**: Discovers missing keyword opportunities
- **Technical Issue Detection**: Identifies potential crawl and indexing problems

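As a rough illustration of the first two filters above, the snippet below shows how such rows could be pulled out of a GSC query export loaded into pandas. The file name, column names, and thresholds are assumptions for the example, not the exact logic used by the module.

```python
import pandas as pd

# Assumed columns in a GSC export: query, clicks, impressions, ctr, position
df = pd.read_csv("gsc_queries.csv")

# High impressions but weak CTR: candidates for title/meta rewrites
ctr_opportunities = df[(df["impressions"] > 1000) & (df["ctr"] < 0.02)]

# Page-two keywords (positions 11-20): candidates for content enhancement
position_opportunities = df[df["position"].between(11, 20)]

print(len(ctr_opportunities), "CTR opportunities")
print(len(position_opportunities), "position-improvement opportunities")
```
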
#### AI-Powered Insights
|
||||
- **Strategic Recommendations**: AI analysis of search data for actionable insights
|
||||
- **Immediate Opportunities**: Quick wins identified within 0-30 days
|
||||
- **Long-term Strategy**: 3-12 month strategic planning recommendations
|
||||
- **Competitive Analysis**: Market position assessment and improvement strategies
|
||||
|
||||
### Demo Mode & Real Integration
- **Demo Mode**: Realistic sample data for testing and exploration
- **GSC API Integration**: Ready for real Google Search Console API connection (a minimal connection sketch follows this list)
- **Credentials Management**: Secure handling of GSC API credentials
- **Data Export**: Full analysis export in JSON and CSV formats

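The snippet below is a minimal sketch of what a real connection could look like, assuming a service-account key file and the `google-api-python-client` package; it is illustrative only and does not reproduce the integration code in `google_search_console_integration.py`.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumed inputs: a service-account JSON key and a verified GSC property URL.
credentials = service_account.Credentials.from_service_account_file(
    "gsc-credentials.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=credentials)

response = service.searchanalytics().query(
    siteUrl="https://example.com/",
    body={
        "startDate": "2024-01-01",
        "endDate": "2024-01-31",
        "dimensions": ["query", "page"],
        "rowLimit": 100,
    },
).execute()

for row in response.get("rows", []):
    print(row["keys"], row["clicks"], row["impressions"], row["ctr"], row["position"])
```
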
## 🧠 AI Content Strategy Generator
|
||||
|
||||
### Comprehensive Strategy Development (`ai_content_strategy.py`)
|
||||
|
||||
Creates complete content strategies using AI market intelligence:
|
||||
|
||||
#### Business Context Analysis
|
||||
- **Market Positioning**: AI analysis of competitive landscape and opportunities
|
||||
- **Content Gap Identification**: Discovers missing content themes in the industry
|
||||
- **Competitive Advantage Mapping**: Identifies unique positioning opportunities
|
||||
- **Audience Intelligence**: Deep insights into target audience needs and preferences
|
||||
|
||||
#### Content Pillar Development
|
||||
- **Strategic Pillars**: 4-6 content themes aligned with business goals
|
||||
- **Keyword Mapping**: Target keywords and semantic variations for each pillar
|
||||
- **Content Type Recommendations**: Optimal content formats for each pillar
|
||||
- **Success Metrics**: KPIs and measurement frameworks for each pillar
|
||||
|
||||
#### Content Calendar Planning
|
||||
- **Automated Scheduling**: AI-generated content calendar with optimal timing
|
||||
- **Resource Planning**: Time estimates and resource allocation
|
||||
- **Priority Scoring**: Content prioritization based on impact and effort
|
||||
- **Distribution Mapping**: Multi-channel content distribution strategy
|
||||
|
||||
#### Topic Cluster Strategy
|
||||
- **SEO-Optimized Clusters**: Topic clusters designed for search dominance
|
||||
- **Pillar Page Strategy**: Hub-and-spoke content architecture
|
||||
- **Internal Linking Plans**: Strategic linking for SEO authority building
|
||||
- **Content Relationship Mapping**: How content pieces support each other
|
||||
|
||||
### Implementation Support
|
||||
- **Phase-Based Roadmap**: 3-phase implementation plan with milestones
|
||||
- **KPI Framework**: Comprehensive measurement and tracking system
|
||||
- **Resource Requirements**: Budget and team resource planning
|
||||
- **Risk Mitigation**: Strategies to avoid common content pitfalls
|
||||
|
||||
## 🔧 Enhanced Technical Capabilities
|
||||
|
||||
### Advanced SEO Workflows
|
||||
- **Multi-Tool Orchestration**: Seamless integration between all SEO tools
|
||||
- **Data Correlation**: Cross-referencing insights from multiple analyses
|
||||
- **Automated Recommendations**: AI-generated action plans with priority scoring
|
||||
- **Performance Tracking**: Before/after analysis and improvement measurement
|
||||
|
||||
### Enterprise Data Management
|
||||
- **Large Dataset Handling**: Optimized for enterprise-scale websites
|
||||
- **Historical Data Tracking**: Long-term trend analysis and comparison
|
||||
- **Data Export & Integration**: API-ready for integration with other tools
|
||||
- **Security & Privacy**: Enterprise-grade data handling and security
|
||||
|
||||
## 📈 Advanced Analytics & Reporting
|
||||
|
||||
### Performance Dashboards
|
||||
- **Executive Summaries**: High-level insights for leadership teams
|
||||
- **Detailed Analytics**: In-depth analysis for SEO practitioners
|
||||
- **Trend Visualization**: Interactive charts and performance tracking
|
||||
- **Competitive Benchmarking**: Market position and competitor analysis
|
||||
|
||||
### ROI Measurement
|
||||
- **Impact Quantification**: Measuring SEO improvements in business terms
|
||||
- **Cost-Benefit Analysis**: ROI calculation for SEO investments
|
||||
- **Performance Attribution**: Connecting SEO efforts to business outcomes
|
||||
- **Forecasting Models**: Predictive analytics for future performance
|
||||
|
||||
## 🎯 Strategic Planning Features
|
||||
|
||||
### Market Intelligence
|
||||
- **Industry Analysis**: AI-powered market research and trend identification
|
||||
- **Competitive Intelligence**: Deep analysis of competitor content strategies
|
||||
- **Opportunity Mapping**: Identification of untapped market opportunities
|
||||
- **Risk Assessment**: Potential challenges and mitigation strategies
|
||||
|
||||
### Long-term Planning
|
||||
- **Strategic Roadmaps**: 6-12 month SEO strategy development
|
||||
- **Resource Planning**: Team and budget allocation recommendations
|
||||
- **Technology Roadmap**: Tool and platform evolution planning
|
||||
- **Scalability Planning**: Growth-oriented SEO architecture
|
||||
|
||||
## 🚀 Implementation Benefits
|
||||
|
||||
### For Enterprise Teams
|
||||
- **Unified Workflow**: Single platform for all SEO activities
|
||||
- **Team Collaboration**: Shared insights and coordinated strategies
|
||||
- **Scalable Operations**: Handle multiple sites and large datasets
|
||||
- **Executive Reporting**: Clear ROI and performance communication
|
||||
|
||||
### For SEO Professionals
|
||||
- **Advanced Insights**: AI-powered analysis beyond basic tools
|
||||
- **Time Efficiency**: Automated workflows and intelligent recommendations
|
||||
- **Strategic Focus**: Less time on analysis, more on strategy execution
|
||||
- **Competitive Advantage**: Access to enterprise-level intelligence
|
||||
|
||||
### For Business Leaders
|
||||
- **Clear ROI**: Quantified business impact of SEO investments
|
||||
- **Strategic Alignment**: SEO strategy aligned with business objectives
|
||||
- **Risk Management**: Proactive identification and mitigation of SEO risks
|
||||
- **Competitive Intelligence**: Market position and improvement opportunities
|
||||
|
||||
## 🔄 Integration Architecture
|
||||
|
||||
### Modular Design
|
||||
- **Tool Independence**: Each tool can function independently
|
||||
- **Workflow Integration**: Tools work together in intelligent sequences
|
||||
- **API-First**: Ready for integration with external systems
|
||||
- **Extensible Framework**: Easy to add new tools and capabilities
|
||||
|
||||
### Data Flow
|
||||
- **Centralized Data Management**: Unified data storage and processing
|
||||
- **Cross-Tool Insights**: Data sharing between different analyses
|
||||
- **Historical Tracking**: Long-term data retention and trend analysis
|
||||
- **Real-time Updates**: Live data integration and analysis
|
||||
|
||||
## 📋 Getting Started
|
||||
|
||||
### For New Users
|
||||
1. Start with the **Enterprise SEO Suite** for comprehensive analysis
|
||||
2. Use **Demo Mode** to explore features with sample data
|
||||
3. Configure **Google Search Console** integration for real data
|
||||
4. Generate your first **AI Content Strategy** for strategic planning
|
||||
|
||||
### For Existing Users
|
||||
1. Explore the new **Enterprise tab** in the SEO dashboard
|
||||
2. Connect your **Google Search Console** for enhanced insights
|
||||
3. Generate comprehensive **content strategies** using AI
|
||||
4. Utilize **workflow orchestration** for multi-tool analysis
|
||||
|
||||
### Implementation Timeline
|
||||
- **Week 1**: Tool exploration and data connection
|
||||
- **Week 2-3**: Initial audits and strategy development
|
||||
- **Month 1**: Content implementation and optimization
|
||||
- **Month 2-3**: Performance tracking and strategy refinement
|
||||
|
||||
## 🔮 Future Enhancements
|
||||
|
||||
### Planned Features
|
||||
- **Multi-site Management**: Centralized management of multiple websites
|
||||
- **Team Collaboration**: Role-based access and collaborative workflows
|
||||
- **Advanced Integrations**: CRM, Analytics, and Marketing Platform connections
|
||||
- **Machine Learning Models**: Custom AI models for specific industries
|
||||
- **Predictive Analytics**: Forecasting SEO performance and opportunities
|
||||
|
||||
### Roadmap
|
||||
- **Q1**: Multi-site support and team collaboration features
|
||||
- **Q2**: Advanced integrations and custom AI models
|
||||
- **Q3**: Predictive analytics and forecasting capabilities
|
||||
- **Q4**: Industry-specific optimization and enterprise scalability
|
||||
|
||||
---
|
||||
|
||||
## 🎯 Conclusion
|
||||
|
||||
These enterprise enhancements transform Alwrity into a comprehensive SEO management platform that rivals expensive enterprise solutions while maintaining ease of use and AI-powered intelligence. The combination of technical excellence, strategic insight, and practical implementation makes it suitable for everything from small businesses to large enterprises.
|
||||
|
||||
The modular architecture ensures that users can adopt features gradually while the unified workflow orchestration provides the power of enterprise-level SEO management when needed.
|
||||
251
ToBeMigrated/ai_seo_tools/README.md
Normal file
251
ToBeMigrated/ai_seo_tools/README.md
Normal file
@@ -0,0 +1,251 @@
|
||||
# 🚀 Alwrity's Enterprise AI SEO Tools Suite
|
||||
|
||||
**Transform your SEO strategy with AI-powered enterprise-level tools and intelligent workflows**
|
||||
|
||||
Alwrity's AI SEO Tools have evolved into a comprehensive enterprise suite that combines individual optimization tools with intelligent workflow orchestration, providing everything from basic SEO tasks to advanced strategic analysis and competitive intelligence.
|
||||
|
||||
---
|
||||
|
||||
## 🌟 **What's New: Enterprise Features**
|
||||
|
||||
### 🎯 **Enterprise SEO Command Center**
|
||||
- **Unified Workflow Orchestration**: Combines all tools into intelligent, automated workflows
|
||||
- **Complete SEO Audits**: Comprehensive analysis covering technical, content, competitive, and performance aspects
|
||||
- **AI-Powered Strategic Recommendations**: Advanced insights with prioritized action plans
|
||||
- **Enterprise-Level Reporting**: Professional dashboards with ROI measurement and executive summaries
|
||||
|
||||
### 📊 **Google Search Console Intelligence**
|
||||
- **Advanced GSC Integration**: Deep analysis of search performance data with AI insights
|
||||
- **Content Opportunities Engine**: Identifies high-impact optimization opportunities
|
||||
- **Search Intelligence Workflows**: Transforms GSC data into actionable content strategies
|
||||
- **Competitive Position Analysis**: Market positioning insights based on search performance
|
||||
|
||||
### 🧠 **AI Content Strategy Generator**
|
||||
- **Comprehensive Strategy Development**: AI-powered content planning with market intelligence
|
||||
- **Content Pillar Architecture**: Topic cluster strategies with keyword mapping
|
||||
- **Implementation Roadmaps**: Phase-based execution plans with resource estimation
|
||||
- **Business Context Analysis**: Industry-specific insights and competitive positioning
|
||||
|
||||
---
|
||||
|
||||
## 🛠️ **Complete Tool Suite**
|
||||
|
||||
### **🏢 Enterprise Suite**
|
||||
| Tool | Description | Key Features |
|
||||
|------|-------------|--------------|
|
||||
| **Enterprise SEO Command Center** | Unified workflow orchestration | Complete audits, AI recommendations, strategic planning |
|
||||
| **Google Search Console Intelligence** | Advanced GSC data analysis | Content opportunities, search intelligence, competitive analysis |
|
||||
| **AI Content Strategy Generator** | Comprehensive content planning | Market intelligence, topic clusters, implementation roadmaps |
|
||||
|
||||
### **📊 Analytics & Intelligence**
|
||||
| Tool | Description | Key Features |
|
||||
|------|-------------|--------------|
|
||||
| **Enhanced Content Gap Analysis** | Advanced competitive content analysis | Advertools integration, AI insights, opportunity identification |
|
||||
| **Technical SEO Crawler** | Site-wide technical analysis | Performance metrics, crawl analysis, AI recommendations |
|
||||
| **Competitive Intelligence** | Market positioning analysis | Competitor benchmarking, strategic insights, market opportunities |
|
||||
|
||||
### **🔧 Technical SEO**
|
||||
| Tool | Description | Key Features |
|
||||
|------|-------------|--------------|
|
||||
| **On-Page SEO Analyzer** | Comprehensive page optimization | Meta analysis, content optimization, readability scoring |
|
||||
| **URL SEO Checker** | Individual URL analysis | Technical factors, optimization recommendations |
|
||||
| **Google PageSpeed Insights** | Performance analysis | Core Web Vitals, speed optimization, mobile performance |
|
||||
|
||||
### **📝 Content & Strategy**
|
||||
| Tool | Description | Key Features |
|
||||
|------|-------------|--------------|
|
||||
| **Content Calendar Planner** | Strategic content planning | Editorial calendars, topic scheduling, resource planning |
|
||||
| **Topic Cluster Generator** | Content architecture planning | Pillar pages, cluster content, internal linking strategies |
|
||||
| **Content Performance Analyzer** | Content effectiveness analysis | Performance metrics, optimization recommendations |
|
||||
|
||||
### **⚡ Quick Optimization Tools**
|
||||
| Tool | Description | Key Features |
|
||||
|------|-------------|--------------|
|
||||
| **Meta Description Generator** | SEO-friendly meta descriptions | Keyword optimization, CTR enhancement, length optimization |
|
||||
| **Content Title Generator** | Attention-grabbing titles | Keyword integration, engagement optimization, SERP visibility |
|
||||
| **OpenGraph Generator** | Social media optimization | Facebook/LinkedIn optimization, visual appeal, click enhancement |
|
||||
| **Image Alt Text Generator** | AI-powered alt text creation | SEO optimization, accessibility compliance, image discoverability |
|
||||
| **Schema Markup Generator** | Structured data creation | Rich snippets, search enhancement, content understanding |
|
||||
| **Twitter Tags Generator** | Twitter optimization | Engagement enhancement, visibility improvement, social sharing |
|
||||
|
||||
---
|
||||
|
||||
## 🎯 **Enterprise Workflows**
|
||||
|
||||
### **🔍 Complete SEO Audit Workflow**
|
||||
1. **Technical SEO Analysis** - Site-wide technical health assessment
|
||||
2. **Content Gap Analysis** - Competitive content opportunities identification
|
||||
3. **On-Page Optimization** - Page-level SEO factor analysis
|
||||
4. **Performance Analysis** - Speed, mobile, and Core Web Vitals assessment
|
||||
5. **AI Strategic Recommendations** - Prioritized action plan with impact estimates
|
||||
|
||||
### **📊 Search Intelligence Workflow**
|
||||
1. **GSC Data Analysis** - Comprehensive search performance review
|
||||
2. **Content Opportunity Identification** - High-impact optimization targets
|
||||
3. **Competitive Position Assessment** - Market positioning analysis
|
||||
4. **Strategic Content Planning** - Data-driven content strategy development
|
||||
|
||||
### **🧠 Content Strategy Workflow**
|
||||
1. **Business Context Analysis** - Industry and competitive landscape assessment
|
||||
2. **Content Pillar Development** - Topic cluster architecture creation
|
||||
3. **Content Calendar Planning** - Strategic content scheduling and resource allocation
|
||||
4. **Implementation Roadmap** - Phase-based execution with timeline and priorities
|
||||
|
||||
---
|
||||
|
||||
## 🚀 **Getting Started**
|
||||
|
||||
### **For New Users**
|
||||
1. **Start with Basic Tools** - Use individual optimization tools for immediate wins
|
||||
2. **Explore Analytics** - Try content gap analysis and technical crawling
|
||||
3. **Upgrade to Enterprise** - Access unified workflows and AI-powered insights
|
||||
|
||||
### **For Existing Users**
|
||||
1. **Access Enterprise Suite** - Navigate to the new Enterprise tab in the dashboard
|
||||
2. **Run Complete Audit** - Execute comprehensive SEO analysis workflows
|
||||
3. **Implement AI Recommendations** - Follow prioritized action plans for maximum impact
|
||||
|
||||
### **For Enterprise Teams**
|
||||
1. **Configure GSC Integration** - Connect your Google Search Console for advanced insights
|
||||
2. **Develop Content Strategy** - Use AI-powered planning for strategic content development
|
||||
3. **Monitor and Optimize** - Leverage continuous monitoring and optimization workflows
|
||||
|
||||
---
|
||||
|
||||
## 📈 **Business Impact**
|
||||
|
||||
### **Immediate Benefits (0-30 days)**
|
||||
- ✅ **Quick Wins Identification** - AI-powered immediate optimization opportunities
|
||||
- ✅ **Technical Issue Resolution** - Critical SEO problems with prioritized fixes
|
||||
- ✅ **Content Optimization** - Existing page improvements for better performance
|
||||
- ✅ **Performance Enhancement** - Speed and mobile optimization recommendations
|
||||
|
||||
### **Strategic Growth (1-6 months)**
|
||||
- 📈 **Content Strategy Execution** - Systematic content development with topic clusters
|
||||
- 📈 **Competitive Positioning** - Market advantage through strategic content gaps
|
||||
- 📈 **Authority Building** - Thought leadership content and link-worthy assets
|
||||
- 📈 **Search Visibility** - Improved rankings through comprehensive optimization
|
||||
|
||||
### **Long-term Success (6-12 months)**
|
||||
- 🏆 **Market Leadership** - Dominant search presence in target markets
|
||||
- 🏆 **Organic Growth** - Sustainable traffic and conversion improvements
|
||||
- 🏆 **Competitive Advantage** - Advanced SEO capabilities beyond competitors
|
||||
- 🏆 **ROI Optimization** - Measurable business impact and revenue growth
|
||||
|
||||
---
|
||||
|
||||
## 🔧 **Technical Architecture**
|
||||
|
||||
### **Modular Design**
|
||||
- **Independent Tools** - Each tool functions standalone for specific tasks
|
||||
- **Workflow Integration** - Tools combine seamlessly in enterprise workflows
|
||||
- **API-Ready Architecture** - External system integration capabilities
|
||||
- **Scalable Infrastructure** - Handles enterprise-level data and analysis
|
||||
|
||||
### **AI Integration**
|
||||
- **Advanced Language Models** - GPT-powered analysis and recommendations
|
||||
- **Contextual Intelligence** - Business-specific insights and strategies
|
||||
- **Continuous Learning** - Improving recommendations based on performance data
|
||||
- **Multi-Modal Analysis** - Text, data, and performance metric integration
|
||||
|
||||
### **Data Management**
|
||||
- **Secure Processing** - Enterprise-grade data security and privacy
|
||||
- **Real-time Analysis** - Live data processing and immediate insights
|
||||
- **Historical Tracking** - Performance monitoring and trend analysis
|
||||
- **Export Capabilities** - Comprehensive reporting and data portability
|
||||
|
||||
---
|
||||
|
||||
## 🎯 **Use Cases by Role**
|
||||
|
||||
### **SEO Professionals**
|
||||
- **Comprehensive Audits** - Complete site analysis with actionable recommendations
|
||||
- **Competitive Intelligence** - Market positioning and opportunity identification
|
||||
- **Strategic Planning** - Long-term SEO roadmaps with business alignment
|
||||
- **Performance Monitoring** - Continuous optimization and improvement tracking
|
||||
|
||||
### **Content Marketers**
|
||||
- **Content Strategy Development** - AI-powered planning with market intelligence
|
||||
- **Topic Research** - Data-driven content ideas and keyword opportunities
|
||||
- **Performance Analysis** - Content effectiveness measurement and optimization
|
||||
- **Editorial Planning** - Strategic content calendars with resource allocation
|
||||
|
||||
### **Business Leaders**
|
||||
- **ROI Measurement** - Clear business impact and performance metrics
|
||||
- **Strategic Insights** - Market opportunities and competitive positioning
|
||||
- **Resource Planning** - Efficient allocation of SEO and content resources
|
||||
- **Executive Reporting** - High-level dashboards and strategic recommendations
|
||||
|
||||
### **Agencies & Consultants**
|
||||
- **Client Audits** - Professional-grade analysis and reporting
|
||||
- **Scalable Solutions** - Multi-client management and optimization
|
||||
- **Competitive Analysis** - Market intelligence and positioning strategies
|
||||
- **Value Demonstration** - Clear ROI and performance improvement tracking
|
||||
|
||||
---
|
||||
|
||||
## 🔮 **Future Roadmap**
|
||||
|
||||
### **Planned Enhancements**
|
||||
- 🔄 **Real-time Monitoring** - Continuous SEO health tracking and alerts
|
||||
- 🤖 **Advanced AI Models** - Enhanced analysis and prediction capabilities
|
||||
- 🌐 **Multi-language Support** - Global SEO optimization and analysis
|
||||
- 📱 **Mobile App** - On-the-go SEO monitoring and management
|
||||
- 🔗 **Enhanced Integrations** - More third-party tool connections and APIs
|
||||
|
||||
### **Advanced Features in Development**
|
||||
- **Predictive SEO Analytics** - Forecast performance and opportunity identification
|
||||
- **Automated Optimization** - AI-driven automatic SEO improvements
|
||||
- **Voice Search Optimization** - Emerging search behavior analysis
|
||||
- **Local SEO Suite** - Location-based optimization and management
|
||||
- **E-commerce SEO** - Specialized tools for online retail optimization
|
||||
|
||||
---
|
||||
|
||||
## 📚 **Resources & Support**
|
||||
|
||||
### **Documentation**
|
||||
- 📖 **Enterprise Features Guide** - Comprehensive feature documentation
|
||||
- 🎥 **Video Tutorials** - Step-by-step workflow demonstrations
|
||||
- 📋 **Best Practices** - Industry-standard SEO optimization guidelines
|
||||
- 🔧 **API Documentation** - Integration guides and technical specifications
|
||||
|
||||
### **Support Channels**
|
||||
- 💬 **Community Forum** - User discussions and knowledge sharing
|
||||
- 📧 **Email Support** - Direct assistance for technical issues
|
||||
- 🎓 **Training Programs** - Advanced SEO strategy and tool mastery
|
||||
- 🤝 **Consulting Services** - Strategic SEO planning and implementation
|
||||
|
||||
---
|
||||
|
||||
## 🏁 **Action Plan: Maximize Your SEO Success**
|
||||
|
||||
### **Phase 1: Foundation (Week 1-2)**
|
||||
1. **Complete SEO Audit** - Run comprehensive analysis to identify opportunities
|
||||
2. **Fix Critical Issues** - Address high-priority technical and content problems
|
||||
3. **Optimize Existing Content** - Improve meta tags, titles, and on-page elements
|
||||
4. **Set Up Monitoring** - Configure GSC integration and performance tracking
|
||||
|
||||
### **Phase 2: Strategic Development (Week 3-8)**
|
||||
1. **Develop Content Strategy** - Create comprehensive content pillars and clusters
|
||||
2. **Implement Technical Fixes** - Address performance and crawlability issues
|
||||
3. **Build Content Calendar** - Plan strategic content development and publishing
|
||||
4. **Monitor Competitive Position** - Track market positioning and opportunities
|
||||
|
||||
### **Phase 3: Growth & Optimization (Week 9-24)**
|
||||
1. **Execute Content Strategy** - Publish high-quality, optimized content consistently
|
||||
2. **Build Authority** - Develop thought leadership and link-worthy content
|
||||
3. **Expand Market Presence** - Target new keywords and market segments
|
||||
4. **Measure and Refine** - Continuously optimize based on performance data
|
||||
|
||||
### **Phase 4: Market Leadership (Month 6+)**
|
||||
1. **Dominate Target Markets** - Achieve top rankings for primary keywords
|
||||
2. **Scale Successful Strategies** - Expand winning approaches to new areas
|
||||
3. **Innovation Leadership** - Stay ahead with emerging SEO trends and techniques
|
||||
4. **Sustainable Growth** - Maintain and improve market position continuously
|
||||
|
||||
---
|
||||
|
||||
**Ready to transform your SEO strategy?** Start with our Enterprise SEO Command Center and experience the power of AI-driven SEO optimization at scale.
|
||||
|
||||
🚀 **[Launch Enterprise SEO Suite](./enterprise_seo_suite.py)** | 📊 **[Explore GSC Intelligence](./google_search_console_integration.py)** | 🧠 **[Generate Content Strategy](./ai_content_strategy.py)**
|
||||
68
ToBeMigrated/ai_seo_tools/TBD
Normal file
68
ToBeMigrated/ai_seo_tools/TBD
Normal file
@@ -0,0 +1,68 @@
|
||||
https://github.com/greghub/website-launch-checklist
|
||||
https://github.com/marcobiedermann/search-engine-optimization
|
||||
https://developers.google.com/speed/docs/insights/v5/get-started
|
||||
https://developers.google.com/search/apis/indexing-api/v3/prereqs
|
||||
https://developer.chrome.com/docs/lighthouse/overview/#cli
|
||||
|
||||
APIs
|
||||
https://docs.ayrshare.com/
|
||||
https://github.com/dataforseo/PythonClient
|
||||
https://mysiteauditor.com/api
|
||||
|
||||
https://github.com/searchsolved/search-solved-public-seo/blob/main/keyword-research/low-competition-keyword-finder-serp-api/low_competition_finder_serp_api.py
|
||||
|
||||
### Structured Data
|
||||
|
||||
- [Facebook Debugger](https://developers.facebook.com/tools/debug) - Enter the URL you want to scrape to see how the page's markup appears to Facebook.
|
||||
- [Pinterest](https://developers.pinterest.com/rich_pins/validator/) - Validate your Rich Pins and apply to get them on Pinterest.
|
||||
- [Structured Data Testing Tool](https://developers.google.com/structured-data/testing-tool/) - Paste in your rich snippets or url to test it.
|
||||
- [Twitter card validator](https://cards-dev.twitter.com/validator) - Enter the URL of the page with the meta tags to validate.
|
||||
|
||||
https://github.com/sethblack/python-seo-analyzer
|
||||
|
||||
https://www.holisticseo.digital/python-seo/analyse-compare-robots-txt/
|
||||
|
||||
https://github.com/Nv7-GitHub/googlesearch
|
||||
https://www.semrush.com/blog/python-for-google-search/
|
||||
|
||||
https://www.kaggle.com/code/eliasdabbas/botpresso-crawl-audit-analysis
|
||||
https://www.kaggle.com/code/eliasdabbas/nike-xml-sitemap-audit-analysis
|
||||
https://www.kaggle.com/code/eliasdabbas/twitter-user-account-analysis-python-sejournal
|
||||
https://www.kaggle.com/code/eliasdabbas/seo-crawl-analysis-template
|
||||
https://www.kaggle.com/code/eliasdabbas/advertools-seo-crawl-analysis-template
|
||||
|
||||
https://www.semrush.com/blog/content-analysis-xml-sitemaps-python/
|
||||

ALwrity will cover the different configurations that influence your technical SEO and how to optimize them to maximize your organic search visibility (a minimal check sketch follows the list):

- HTTP status
- URL structure
- Website links
- XML sitemaps
- Robots.txt
- Meta robots tag
- Canonicalization
- JavaScript usage
- HTTPS usage
- Mobile friendliness
- Structured data
- Core Web Vitals
- Hreflang annotations
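
A few of the items above can be spot-checked programmatically. The sketch below is a minimal example, assuming `requests` and `beautifulsoup4` are installed; the target URL is a placeholder, and it only covers HTTP status, HTTPS usage, the canonical link, the meta robots tag, hreflang annotations, and robots.txt availability.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin


def quick_technical_checks(url: str) -> dict:
    """Run a handful of lightweight technical SEO checks against a single URL."""
    results = {}
    resp = requests.get(url, timeout=10, allow_redirects=True)

    # HTTP status and HTTPS usage
    results["status_code"] = resp.status_code
    results["https"] = resp.url.startswith("https://")

    soup = BeautifulSoup(resp.text, "html.parser")

    # Canonicalization: does the page declare a canonical URL?
    canonical = soup.find("link", rel="canonical")
    results["canonical"] = canonical.get("href") if canonical else None

    # Meta robots tag (e.g. "noindex, nofollow")
    meta_robots = soup.find("meta", attrs={"name": "robots"})
    results["meta_robots"] = meta_robots.get("content") if meta_robots else None

    # Hreflang annotations declared as alternate links
    results["hreflang"] = [
        link.get("hreflang")
        for link in soup.find_all("link", rel="alternate")
        if link.get("hreflang")
    ]

    # Robots.txt availability at the site root
    robots = requests.get(urljoin(resp.url, "/robots.txt"), timeout=10)
    results["robots_txt_found"] = robots.status_code == 200

    return results


if __name__ == "__main__":
    print(quick_technical_checks("https://example.com"))
```

Lighthouse, PageSpeed Insights, and the advertools crawl templates linked above cover the remaining items such as Core Web Vitals and sitemap audits.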
954
ToBeMigrated/ai_seo_tools/ai_content_strategy.py
Normal file
954
ToBeMigrated/ai_seo_tools/ai_content_strategy.py
Normal file
@@ -0,0 +1,954 @@
"""
|
||||
AI-Powered Content Strategy Generator
|
||||
|
||||
Creates comprehensive content strategies using AI analysis of SEO data,
|
||||
competitor insights, and market trends for enterprise content planning.
|
||||
"""
|
||||
|
||||
import streamlit as st
|
||||
import pandas as pd
|
||||
import numpy as np
|
||||
from typing import Dict, Any, List, Optional, Tuple
|
||||
from datetime import datetime, timedelta
|
||||
import json
|
||||
from loguru import logger
|
||||
import plotly.express as px
|
||||
import plotly.graph_objects as go
|
||||
|
||||
# Import AI modules
|
||||
from ..gpt_providers.text_generation.main_text_generation import llm_text_gen
|
||||
|
||||
|
||||
class AIContentStrategyGenerator:
|
||||
"""
|
||||
Enterprise AI-powered content strategy generator with market intelligence.
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
"""Initialize the content strategy generator."""
|
||||
logger.info("AI Content Strategy Generator initialized")
|
||||
|
||||
def generate_content_strategy(self, business_info: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""
|
||||
Generate comprehensive AI-powered content strategy.
|
||||
|
||||
Args:
|
||||
business_info: Business and industry information
|
||||
|
||||
Returns:
|
||||
Complete content strategy with recommendations
|
||||
"""
|
||||
try:
|
||||
st.info("🧠 Generating AI-powered content strategy...")
|
||||
|
||||
# Analyze business context
|
||||
business_analysis = self._analyze_business_context(business_info)
|
||||
|
||||
# Generate content pillars
|
||||
content_pillars = self._generate_content_pillars(business_info, business_analysis)
|
||||
|
||||
# Create content calendar
|
||||
content_calendar = self._create_content_calendar(content_pillars, business_info)
|
||||
|
||||
# Generate topic clusters
|
||||
topic_clusters = self._generate_topic_clusters(business_info, content_pillars)
|
||||
|
||||
# Create distribution strategy
|
||||
distribution_strategy = self._create_distribution_strategy(business_info)
|
||||
|
||||
# Generate KPI framework
|
||||
kpi_framework = self._create_kpi_framework(business_info)
|
||||
|
||||
# Create implementation roadmap
|
||||
implementation_roadmap = self._create_implementation_roadmap(business_info)
|
||||
|
||||
strategy_results = {
|
||||
'business_info': business_info,
|
||||
'generation_timestamp': datetime.utcnow().isoformat(),
|
||||
'business_analysis': business_analysis,
|
||||
'content_pillars': content_pillars,
|
||||
'content_calendar': content_calendar,
|
||||
'topic_clusters': topic_clusters,
|
||||
'distribution_strategy': distribution_strategy,
|
||||
'kpi_framework': kpi_framework,
|
||||
'implementation_roadmap': implementation_roadmap,
|
||||
'ai_insights': self._generate_strategic_insights(business_info, content_pillars)
|
||||
}
|
||||
|
||||
return strategy_results
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Error generating content strategy: {str(e)}"
|
||||
logger.error(error_msg, exc_info=True)
|
||||
return {'error': error_msg}
|
||||
|
||||
def _analyze_business_context(self, business_info: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Analyze business context for strategic insights."""
|
||||
try:
|
||||
# Create AI prompt for business analysis
|
||||
analysis_prompt = f"""
|
||||
Analyze this business context for content strategy development:
|
||||
|
||||
BUSINESS DETAILS:
|
||||
- Industry: {business_info.get('industry', 'Not specified')}
|
||||
- Target Audience: {business_info.get('target_audience', 'Not specified')}
|
||||
- Business Goals: {business_info.get('business_goals', 'Not specified')}
|
||||
- Content Objectives: {business_info.get('content_objectives', 'Not specified')}
|
||||
- Budget: {business_info.get('budget', 'Not specified')}
|
||||
- Timeline: {business_info.get('timeline', 'Not specified')}
|
||||
|
||||
Provide analysis on:
|
||||
1. Market positioning opportunities
|
||||
2. Content gaps in the industry
|
||||
3. Competitive advantages to leverage
|
||||
4. Audience pain points and interests
|
||||
5. Seasonal content opportunities
|
||||
6. Content format preferences for this audience
|
||||
7. Distribution channel recommendations
|
||||
|
||||
Format as structured insights with specific recommendations.
|
||||
"""
|
||||
|
||||
ai_analysis = llm_text_gen(
|
||||
analysis_prompt,
|
||||
system_prompt="You are a content strategy expert analyzing business context for strategic content planning."
|
||||
)
|
||||
|
||||
return {
|
||||
'full_analysis': ai_analysis,
|
||||
'market_position': self._extract_market_position(ai_analysis),
|
||||
'content_gaps': self._extract_content_gaps(ai_analysis),
|
||||
'competitive_advantages': self._extract_competitive_advantages(ai_analysis),
|
||||
'audience_insights': self._extract_audience_insights(ai_analysis)
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Business analysis error: {str(e)}")
|
||||
return {'error': str(e)}
|
||||
|
||||
def _generate_content_pillars(self, business_info: Dict[str, Any], business_analysis: Dict[str, Any]) -> List[Dict[str, Any]]:
|
||||
"""Generate strategic content pillars."""
|
||||
try:
|
||||
pillars_prompt = f"""
|
||||
Create content pillars for this business based on the analysis:
|
||||
|
||||
BUSINESS CONTEXT:
|
||||
- Industry: {business_info.get('industry', 'Not specified')}
|
||||
- Target Audience: {business_info.get('target_audience', 'Not specified')}
|
||||
- Business Goals: {business_info.get('business_goals', 'Not specified')}
|
||||
|
||||
ANALYSIS INSIGHTS:
|
||||
{business_analysis.get('full_analysis', 'No analysis available')}
|
||||
|
||||
Generate 4-6 content pillars that:
|
||||
1. Align with business goals
|
||||
2. Address audience needs
|
||||
3. Differentiate from competitors
|
||||
4. Support SEO objectives
|
||||
5. Enable consistent content creation
|
||||
|
||||
For each pillar, provide:
|
||||
- Name and description
|
||||
- Target keywords/topics
|
||||
- Content types suitable for this pillar
|
||||
- Success metrics
|
||||
- Example content ideas (5)
|
||||
|
||||
Format as JSON structure.
|
||||
"""
|
||||
|
||||
ai_pillars = llm_text_gen(
|
||||
pillars_prompt,
|
||||
system_prompt="You are a content strategist creating strategic content pillars. Return structured data."
|
||||
)
|
||||
|
||||
            # NOTE: the raw LLM response (ai_pillars) is not parsed yet; the default pillar set below is returned as a fallback
|
||||
pillars = [
|
||||
{
|
||||
'id': 1,
|
||||
'name': 'Thought Leadership',
|
||||
'description': 'Position as industry expert through insights and trends',
|
||||
'target_keywords': ['industry trends', 'expert insights', 'market analysis'],
|
||||
'content_types': ['Blog posts', 'Whitepapers', 'Webinars', 'Podcasts'],
|
||||
'success_metrics': ['Brand mentions', 'Expert citations', 'Speaking invitations'],
|
||||
'content_ideas': [
|
||||
'Industry trend predictions for 2024',
|
||||
'Expert roundtable discussions',
|
||||
'Market analysis reports',
|
||||
'Innovation case studies',
|
||||
'Future of industry insights'
|
||||
]
|
||||
},
|
||||
{
|
||||
'id': 2,
|
||||
'name': 'Educational Content',
|
||||
'description': 'Educate audience on best practices and solutions',
|
||||
'target_keywords': ['how to', 'best practices', 'tutorials', 'guides'],
|
||||
'content_types': ['Tutorials', 'Guides', 'Video content', 'Infographics'],
|
||||
'success_metrics': ['Organic traffic', 'Time on page', 'Social shares'],
|
||||
'content_ideas': [
|
||||
'Step-by-step implementation guides',
|
||||
'Best practices checklists',
|
||||
'Common mistakes to avoid',
|
||||
'Tool comparison guides',
|
||||
'Quick tip series'
|
||||
]
|
||||
},
|
||||
{
|
||||
'id': 3,
|
||||
'name': 'Customer Success',
|
||||
'description': 'Showcase success stories and build trust',
|
||||
'target_keywords': ['case study', 'success story', 'results', 'testimonials'],
|
||||
'content_types': ['Case studies', 'Customer stories', 'Testimonials', 'Reviews'],
|
||||
'success_metrics': ['Lead generation', 'Conversion rate', 'Trust signals'],
|
||||
'content_ideas': [
|
||||
'Detailed customer case studies',
|
||||
'Before/after transformations',
|
||||
'ROI success stories',
|
||||
'Customer interview series',
|
||||
'Implementation timelines'
|
||||
]
|
||||
},
|
||||
{
|
||||
'id': 4,
|
||||
'name': 'Product Education',
|
||||
'description': 'Educate on product features and benefits',
|
||||
'target_keywords': ['product features', 'benefits', 'use cases', 'comparison'],
|
||||
'content_types': ['Product demos', 'Feature guides', 'Comparison content'],
|
||||
'success_metrics': ['Product adoption', 'Trial conversions', 'Feature usage'],
|
||||
'content_ideas': [
|
||||
'Feature deep-dive tutorials',
|
||||
'Use case demonstrations',
|
||||
'Product comparison guides',
|
||||
'Integration tutorials',
|
||||
'Advanced tips and tricks'
|
||||
]
|
||||
}
|
||||
]
|
||||
|
||||
return pillars
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Content pillars error: {str(e)}")
|
||||
return []
|
||||
|
||||
def _create_content_calendar(self, content_pillars: List[Dict[str, Any]], business_info: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Create comprehensive content calendar."""
|
||||
timeline = business_info.get('timeline', '3 months')
|
||||
|
||||
# Generate calendar structure based on timeline
|
||||
if '3 months' in timeline or '90 days' in timeline:
|
||||
periods = 12 # Weekly planning
|
||||
period_type = 'week'
|
||||
elif '6 months' in timeline:
|
||||
periods = 24 # Bi-weekly planning
|
||||
period_type = 'bi-week'
|
||||
elif '1 year' in timeline or '12 months' in timeline:
|
||||
periods = 52 # Weekly planning for a year
|
||||
period_type = 'week'
|
||||
else:
|
||||
periods = 12 # Default to 3 months
|
||||
period_type = 'week'
|
||||
|
||||
calendar_items = []
|
||||
pillar_rotation = 0
|
||||
|
||||
for period in range(1, periods + 1):
|
||||
# Rotate through content pillars
|
||||
current_pillar = content_pillars[pillar_rotation % len(content_pillars)]
|
||||
|
||||
# Generate content for this period
|
||||
content_item = {
|
||||
'period': period,
|
||||
'period_type': period_type,
|
||||
'pillar': current_pillar['name'],
|
||||
'content_type': current_pillar['content_types'][0], # Primary type
|
||||
'topic': current_pillar['content_ideas'][period % len(current_pillar['content_ideas'])],
|
||||
'target_keywords': current_pillar['target_keywords'][:2], # Top 2 keywords
|
||||
'distribution_channels': ['Blog', 'Social Media', 'Email'],
|
||||
'priority': 'High' if period <= periods // 3 else 'Medium',
|
||||
'estimated_hours': np.random.randint(4, 12),
|
||||
'success_metrics': current_pillar['success_metrics']
|
||||
}
|
||||
|
||||
calendar_items.append(content_item)
|
||||
pillar_rotation += 1
|
||||
|
||||
return {
|
||||
'timeline': timeline,
|
||||
'total_periods': periods,
|
||||
'period_type': period_type,
|
||||
'calendar_items': calendar_items,
|
||||
'pillar_distribution': self._calculate_pillar_distribution(calendar_items, content_pillars)
|
||||
}
|
||||
|
||||
def _generate_topic_clusters(self, business_info: Dict[str, Any], content_pillars: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
|
||||
"""Generate SEO topic clusters."""
|
||||
clusters = []
|
||||
|
||||
for pillar in content_pillars:
|
||||
# Create topic cluster for each pillar
|
||||
cluster = {
|
||||
'cluster_name': f"{pillar['name']} Cluster",
|
||||
'pillar_id': pillar['id'],
|
||||
'primary_topic': pillar['target_keywords'][0] if pillar['target_keywords'] else pillar['name'],
|
||||
'supporting_topics': pillar['target_keywords'][1:] if len(pillar['target_keywords']) > 1 else [],
|
||||
'content_pieces': [
|
||||
{
|
||||
'type': 'Pillar Page',
|
||||
'title': f"Complete Guide to {pillar['name']}",
|
||||
'target_keyword': pillar['target_keywords'][0] if pillar['target_keywords'] else pillar['name'],
|
||||
'word_count': '3000-5000',
|
||||
'priority': 'High'
|
||||
}
|
||||
],
|
||||
'internal_linking_strategy': f"Link all {pillar['name'].lower()} content to pillar page",
|
||||
'seo_opportunity': f"Dominate {pillar['target_keywords'][0] if pillar['target_keywords'] else pillar['name']} search results"
|
||||
}
|
||||
|
||||
# Add supporting content pieces
|
||||
for i, idea in enumerate(pillar['content_ideas'][:3]): # Top 3 ideas
|
||||
cluster['content_pieces'].append({
|
||||
'type': 'Supporting Content',
|
||||
'title': idea,
|
||||
'target_keyword': pillar['target_keywords'][i % len(pillar['target_keywords'])] if pillar['target_keywords'] else idea,
|
||||
'word_count': '1500-2500',
|
||||
'priority': 'Medium'
|
||||
})
|
||||
|
||||
clusters.append(cluster)
|
||||
|
||||
return clusters
|
||||
|
||||
def _create_distribution_strategy(self, business_info: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Create content distribution strategy."""
|
||||
return {
|
||||
'primary_channels': [
|
||||
{
|
||||
'channel': 'Company Blog',
|
||||
'content_types': ['Long-form articles', 'Guides', 'Case studies'],
|
||||
'frequency': 'Weekly',
|
||||
'audience_reach': 'High',
|
||||
'seo_value': 'High'
|
||||
},
|
||||
{
|
||||
'channel': 'LinkedIn',
|
||||
'content_types': ['Professional insights', 'Industry news', 'Thought leadership'],
|
||||
'frequency': 'Daily',
|
||||
'audience_reach': 'Medium',
|
||||
'seo_value': 'Medium'
|
||||
},
|
||||
{
|
||||
'channel': 'Email Newsletter',
|
||||
'content_types': ['Curated insights', 'Product updates', 'Educational content'],
|
||||
'frequency': 'Bi-weekly',
|
||||
'audience_reach': 'High',
|
||||
'seo_value': 'Low'
|
||||
}
|
||||
],
|
||||
'secondary_channels': [
|
||||
{
|
||||
'channel': 'YouTube',
|
||||
'content_types': ['Tutorial videos', 'Webinars', 'Product demos'],
|
||||
'frequency': 'Bi-weekly',
|
||||
'audience_reach': 'Medium',
|
||||
'seo_value': 'High'
|
||||
},
|
||||
{
|
||||
'channel': 'Industry Publications',
|
||||
'content_types': ['Guest articles', 'Expert quotes', 'Research insights'],
|
||||
'frequency': 'Monthly',
|
||||
'audience_reach': 'Medium',
|
||||
'seo_value': 'High'
|
||||
}
|
||||
],
|
||||
'repurposing_strategy': {
|
||||
'blog_post_to_social': 'Extract key insights for LinkedIn posts',
|
||||
'long_form_to_video': 'Create video summaries of detailed guides',
|
||||
'case_study_to_multiple': 'Create infographics, social posts, and email content',
|
||||
'webinar_to_content': 'Extract blog posts, social content, and email series'
|
||||
}
|
||||
}
|
||||
|
||||
def _create_kpi_framework(self, business_info: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Create KPI measurement framework."""
|
||||
return {
|
||||
'primary_kpis': [
|
||||
{
|
||||
'metric': 'Organic Traffic Growth',
|
||||
'target': '25% increase per quarter',
|
||||
'measurement': 'Google Analytics',
|
||||
'frequency': 'Monthly'
|
||||
},
|
||||
{
|
||||
'metric': 'Lead Generation',
|
||||
'target': '50 qualified leads per month',
|
||||
'measurement': 'CRM tracking',
|
||||
'frequency': 'Weekly'
|
||||
},
|
||||
{
|
||||
'metric': 'Brand Awareness',
|
||||
'target': '15% increase in brand mentions',
|
||||
'measurement': 'Social listening tools',
|
||||
'frequency': 'Monthly'
|
||||
}
|
||||
],
|
||||
'content_kpis': [
|
||||
{
|
||||
'metric': 'Content Engagement',
|
||||
'target': '5% average engagement rate',
|
||||
'measurement': 'Social media analytics',
|
||||
'frequency': 'Weekly'
|
||||
},
|
||||
{
|
||||
'metric': 'Content Shares',
|
||||
'target': '100 shares per piece',
|
||||
'measurement': 'Social sharing tracking',
|
||||
'frequency': 'Per content piece'
|
||||
},
|
||||
{
|
||||
'metric': 'Time on Page',
|
||||
'target': '3+ minutes average',
|
||||
'measurement': 'Google Analytics',
|
||||
'frequency': 'Monthly'
|
||||
}
|
||||
],
|
||||
'seo_kpis': [
|
||||
{
|
||||
'metric': 'Keyword Rankings',
|
||||
'target': 'Top 10 for 20 target keywords',
|
||||
'measurement': 'SEO tools',
|
||||
'frequency': 'Weekly'
|
||||
},
|
||||
{
|
||||
'metric': 'Backlink Growth',
|
||||
'target': '10 quality backlinks per month',
|
||||
'measurement': 'Backlink analysis tools',
|
||||
'frequency': 'Monthly'
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
def _create_implementation_roadmap(self, business_info: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Create implementation roadmap."""
|
||||
return {
|
||||
'phase_1': {
|
||||
'name': 'Foundation (Month 1)',
|
||||
'objectives': ['Content audit', 'Pillar page creation', 'Basic SEO setup'],
|
||||
'deliverables': ['Content strategy document', '4 pillar pages', 'SEO foundation'],
|
||||
'success_criteria': ['All pillar pages published', 'SEO tracking implemented']
|
||||
},
|
||||
'phase_2': {
|
||||
'name': 'Content Creation (Months 2-3)',
|
||||
'objectives': ['Regular content publication', 'Social media activation', 'Email marketing'],
|
||||
'deliverables': ['24 blog posts', 'Social media calendar', 'Email sequences'],
|
||||
'success_criteria': ['Consistent publishing schedule', '20% traffic increase']
|
||||
},
|
||||
'phase_3': {
|
||||
'name': 'Optimization (Months 4-6)',
|
||||
'objectives': ['Performance optimization', 'Advanced SEO', 'Conversion optimization'],
|
||||
'deliverables': ['Optimized content', 'Advanced SEO implementation', 'Conversion funnels'],
|
||||
'success_criteria': ['50% traffic increase', 'Improved conversion rates']
|
||||
}
|
||||
}
|
||||
|
||||
    # Utility methods (placeholder extractors for now; a full implementation would parse the AI analysis text)
|
||||
def _extract_market_position(self, analysis: str) -> str:
|
||||
"""Extract market positioning from AI analysis."""
|
||||
return "Market positioning insights extracted from AI analysis"
|
||||
|
||||
def _extract_content_gaps(self, analysis: str) -> List[str]:
|
||||
"""Extract content gaps from AI analysis."""
|
||||
return ["Educational content gap", "Technical documentation gap", "Case study gap"]
|
||||
|
||||
def _extract_competitive_advantages(self, analysis: str) -> List[str]:
|
||||
"""Extract competitive advantages from AI analysis."""
|
||||
return ["Unique technology approach", "Industry expertise", "Customer success focus"]
|
||||
|
||||
def _extract_audience_insights(self, analysis: str) -> Dict[str, Any]:
|
||||
"""Extract audience insights from AI analysis."""
|
||||
return {
|
||||
'pain_points': ["Complex implementation", "Limited resources", "ROI concerns"],
|
||||
'content_preferences': ["Visual content", "Step-by-step guides", "Real examples"],
|
||||
'consumption_patterns': ["Mobile-first", "Video preferred", "Quick consumption"]
|
||||
}
|
||||
|
||||
def _calculate_pillar_distribution(self, calendar_items: List[Dict[str, Any]], content_pillars: List[Dict[str, Any]]) -> Dict[str, int]:
|
||||
"""Calculate content distribution across pillars."""
|
||||
distribution = {}
|
||||
for pillar in content_pillars:
|
||||
count = len([item for item in calendar_items if item['pillar'] == pillar['name']])
|
||||
distribution[pillar['name']] = count
|
||||
return distribution
|
||||
|
||||
def _generate_strategic_insights(self, business_info: Dict[str, Any], content_pillars: List[Dict[str, Any]]) -> Dict[str, Any]:
|
||||
"""Generate strategic insights and recommendations."""
|
||||
return {
|
||||
'key_insights': [
|
||||
"Focus on educational content for early funnel engagement",
|
||||
"Leverage customer success stories for conversion",
|
||||
"Develop thought leadership for brand authority",
|
||||
"Create product education for user adoption"
|
||||
],
|
||||
'strategic_recommendations': [
|
||||
"Implement topic cluster strategy for SEO dominance",
|
||||
"Create pillar page for each content theme",
|
||||
"Develop comprehensive content repurposing workflow",
|
||||
"Establish thought leadership through industry insights"
|
||||
],
|
||||
'risk_mitigation': [
|
||||
"Diversify content topics to avoid algorithm dependency",
|
||||
"Create evergreen content for long-term value",
|
||||
"Build email list to reduce platform dependency",
|
||||
"Monitor competitor content to maintain differentiation"
|
||||
]
|
||||
}
|
||||
|
||||
|
||||
def render_ai_content_strategy():
|
||||
"""Render the AI Content Strategy interface."""
|
||||
|
||||
st.title("🧠 AI Content Strategy Generator")
|
||||
st.markdown("**Generate comprehensive content strategies powered by AI intelligence**")
|
||||
|
||||
# Configuration form
|
||||
st.header("📋 Business Information")
|
||||
|
||||
with st.form("content_strategy_form"):
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
industry = st.selectbox(
|
||||
"Industry",
|
||||
[
|
||||
"Technology & Software",
|
||||
"Marketing & Advertising",
|
||||
"Healthcare",
|
||||
"Finance & Fintech",
|
||||
"E-commerce",
|
||||
"Education",
|
||||
"Manufacturing",
|
||||
"Professional Services",
|
||||
"Other"
|
||||
],
|
||||
index=0
|
||||
)
|
||||
|
||||
target_audience = st.text_area(
|
||||
"Target Audience",
|
||||
placeholder="Describe your ideal customers, their roles, challenges, and goals...",
|
||||
height=100
|
||||
)
|
||||
|
||||
business_goals = st.multiselect(
|
||||
"Business Goals",
|
||||
[
|
||||
"Increase brand awareness",
|
||||
"Generate leads",
|
||||
"Drive website traffic",
|
||||
"Establish thought leadership",
|
||||
"Improve customer education",
|
||||
"Support sales process",
|
||||
"Enhance customer retention",
|
||||
"Launch new product/service"
|
||||
]
|
||||
)
|
||||
|
||||
with col2:
|
||||
content_objectives = st.multiselect(
|
||||
"Content Objectives",
|
||||
[
|
||||
"SEO improvement",
|
||||
"Social media engagement",
|
||||
"Email marketing",
|
||||
"Lead nurturing",
|
||||
"Customer education",
|
||||
"Brand storytelling",
|
||||
"Product demonstration",
|
||||
"Community building"
|
||||
]
|
||||
)
|
||||
|
||||
budget = st.selectbox(
|
||||
"Monthly Content Budget",
|
||||
[
|
||||
"No budget",
|
||||
"Under $1,000",
|
||||
"$1,000 - $5,000",
|
||||
"$5,000 - $10,000",
|
||||
"$10,000 - $25,000",
|
||||
"$25,000+"
|
||||
]
|
||||
)
|
||||
|
||||
timeline = st.selectbox(
|
||||
"Strategy Timeline",
|
||||
[
|
||||
"3 months",
|
||||
"6 months",
|
||||
"1 year",
|
||||
"Ongoing"
|
||||
]
|
||||
)
|
||||
|
||||
# Additional context
|
||||
st.subheader("Additional Context")
|
||||
|
||||
current_challenges = st.text_area(
|
||||
"Current Content Challenges",
|
||||
placeholder="What content challenges are you currently facing?",
|
||||
height=80
|
||||
)
|
||||
|
||||
competitive_landscape = st.text_area(
|
||||
"Competitive Landscape",
|
||||
placeholder="Describe your main competitors and their content approach...",
|
||||
height=80
|
||||
)
|
||||
|
||||
submit_strategy = st.form_submit_button("🧠 Generate AI Content Strategy", type="primary")
|
||||
|
||||
# Process strategy generation
|
||||
if submit_strategy:
|
||||
if target_audience and business_goals and content_objectives:
|
||||
# Prepare business information
|
||||
business_info = {
|
||||
'industry': industry,
|
||||
'target_audience': target_audience,
|
||||
'business_goals': business_goals,
|
||||
'content_objectives': content_objectives,
|
||||
'budget': budget,
|
||||
'timeline': timeline,
|
||||
'current_challenges': current_challenges,
|
||||
'competitive_landscape': competitive_landscape
|
||||
}
|
||||
|
||||
# Initialize generator
|
||||
if 'strategy_generator' not in st.session_state:
|
||||
st.session_state.strategy_generator = AIContentStrategyGenerator()
|
||||
|
||||
generator = st.session_state.strategy_generator
|
||||
|
||||
with st.spinner("🧠 Generating AI-powered content strategy..."):
|
||||
strategy_results = generator.generate_content_strategy(business_info)
|
||||
|
||||
if 'error' not in strategy_results:
|
||||
st.success("✅ Content strategy generated successfully!")
|
||||
|
||||
# Store results in session state
|
||||
st.session_state.strategy_results = strategy_results
|
||||
|
||||
# Display results
|
||||
render_strategy_results_dashboard(strategy_results)
|
||||
else:
|
||||
st.error(f"❌ Strategy generation failed: {strategy_results['error']}")
|
||||
else:
|
||||
st.warning("⚠️ Please fill in target audience, business goals, and content objectives.")
|
||||
|
||||
# Show previous results if available
|
||||
elif 'strategy_results' in st.session_state:
|
||||
st.info("🧠 Showing previous strategy results")
|
||||
render_strategy_results_dashboard(st.session_state.strategy_results)
|
||||
|
||||
|
||||
def render_strategy_results_dashboard(results: Dict[str, Any]):
|
||||
"""Render comprehensive strategy results dashboard."""
|
||||
|
||||
# Strategy overview
|
||||
st.header("📊 Content Strategy Overview")
|
||||
|
||||
business_analysis = results.get('business_analysis', {})
|
||||
content_pillars = results.get('content_pillars', [])
|
||||
content_calendar = results.get('content_calendar', {})
|
||||
|
||||
# Key metrics overview
|
||||
col1, col2, col3, col4 = st.columns(4)
|
||||
|
||||
with col1:
|
||||
st.metric("Content Pillars", len(content_pillars))
|
||||
|
||||
with col2:
|
||||
calendar_items = content_calendar.get('calendar_items', [])
|
||||
st.metric("Content Pieces", len(calendar_items))
|
||||
|
||||
with col3:
|
||||
timeline = content_calendar.get('timeline', 'Not specified')
|
||||
st.metric("Timeline", timeline)
|
||||
|
||||
with col4:
|
||||
total_hours = sum(item.get('estimated_hours', 0) for item in calendar_items)
|
||||
st.metric("Est. Hours", f"{total_hours}h")
|
||||
|
||||
# Strategy tabs
|
||||
tab1, tab2, tab3, tab4, tab5, tab6 = st.tabs([
|
||||
"🧠 AI Insights",
|
||||
"🏛️ Content Pillars",
|
||||
"📅 Content Calendar",
|
||||
"🎯 Topic Clusters",
|
||||
"📢 Distribution",
|
||||
"📊 Implementation"
|
||||
])
|
||||
|
||||
with tab1:
|
||||
if business_analysis:
|
||||
st.subheader("Business Analysis & Insights")
|
||||
|
||||
# Market positioning
|
||||
market_position = business_analysis.get('market_position', '')
|
||||
if market_position:
|
||||
st.markdown("#### 🎯 Market Positioning")
|
||||
st.info(market_position)
|
||||
|
||||
# Content gaps
|
||||
content_gaps = business_analysis.get('content_gaps', [])
|
||||
if content_gaps:
|
||||
st.markdown("#### 🔍 Content Gaps Identified")
|
||||
for gap in content_gaps:
|
||||
st.warning(f"📌 {gap}")
|
||||
|
||||
# Competitive advantages
|
||||
advantages = business_analysis.get('competitive_advantages', [])
|
||||
if advantages:
|
||||
st.markdown("#### 🏆 Competitive Advantages")
|
||||
for advantage in advantages:
|
||||
st.success(f"✅ {advantage}")
|
||||
|
||||
# AI insights
|
||||
ai_insights = results.get('ai_insights', {})
|
||||
if ai_insights:
|
||||
st.markdown("#### 🧠 Strategic AI Insights")
|
||||
|
||||
insights = ai_insights.get('key_insights', [])
|
||||
for insight in insights:
|
||||
st.info(f"💡 {insight}")
|
||||
|
||||
recommendations = ai_insights.get('strategic_recommendations', [])
|
||||
if recommendations:
|
||||
st.markdown("#### 🎯 Strategic Recommendations")
|
||||
for rec in recommendations:
|
||||
st.success(f"📋 {rec}")
|
||||
|
||||
with tab2:
|
||||
if content_pillars:
|
||||
st.subheader("Content Pillars Strategy")
|
||||
|
||||
# Pillars overview chart
|
||||
pillar_names = [pillar['name'] for pillar in content_pillars]
|
||||
pillar_ideas = [len(pillar['content_ideas']) for pillar in content_pillars]
|
||||
|
||||
fig = px.bar(
|
||||
x=pillar_names,
|
||||
y=pillar_ideas,
|
||||
title="Content Ideas per Pillar",
|
||||
labels={'x': 'Content Pillars', 'y': 'Number of Ideas'}
|
||||
)
|
||||
st.plotly_chart(fig, use_container_width=True)
|
||||
|
||||
# Detailed pillar information
|
||||
for pillar in content_pillars:
|
||||
with st.expander(f"🏛️ {pillar['name']}", expanded=False):
|
||||
st.markdown(f"**Description:** {pillar['description']}")
|
||||
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
st.markdown("**Target Keywords:**")
|
||||
for keyword in pillar['target_keywords']:
|
||||
st.code(keyword)
|
||||
|
||||
st.markdown("**Content Types:**")
|
||||
for content_type in pillar['content_types']:
|
||||
st.write(f"• {content_type}")
|
||||
|
||||
with col2:
|
||||
st.markdown("**Success Metrics:**")
|
||||
for metric in pillar['success_metrics']:
|
||||
st.write(f"📊 {metric}")
|
||||
|
||||
st.markdown("**Content Ideas:**")
|
||||
for idea in pillar['content_ideas']:
|
||||
st.write(f"💡 {idea}")
|
||||
|
||||
with tab3:
|
||||
if content_calendar:
|
||||
st.subheader("Content Calendar & Planning")
|
||||
|
||||
calendar_items = content_calendar.get('calendar_items', [])
|
||||
|
||||
if calendar_items:
|
||||
# Calendar overview
|
||||
df_calendar = pd.DataFrame(calendar_items)
|
||||
|
||||
# Priority distribution
|
||||
priority_counts = df_calendar['priority'].value_counts()
|
||||
fig_priority = px.pie(
|
||||
values=priority_counts.values,
|
||||
names=priority_counts.index,
|
||||
title="Content Priority Distribution"
|
||||
)
|
||||
st.plotly_chart(fig_priority, use_container_width=True)
|
||||
|
||||
# Content calendar table
|
||||
st.markdown("#### 📅 Detailed Content Calendar")
|
||||
|
||||
display_df = df_calendar[[
|
||||
'period', 'pillar', 'content_type', 'topic',
|
||||
'priority', 'estimated_hours'
|
||||
]].copy()
|
||||
|
||||
display_df.columns = [
|
||||
'Period', 'Pillar', 'Content Type', 'Topic',
|
||||
'Priority', 'Est. Hours'
|
||||
]
|
||||
|
||||
st.dataframe(
|
||||
display_df,
|
||||
column_config={
|
||||
"Priority": st.column_config.SelectboxColumn(
|
||||
"Priority",
|
||||
options=["High", "Medium", "Low"]
|
||||
),
|
||||
"Est. Hours": st.column_config.NumberColumn(
|
||||
"Est. Hours",
|
||||
format="%d h"
|
||||
)
|
||||
},
|
||||
hide_index=True,
|
||||
use_container_width=True
|
||||
)
|
||||
|
||||
# Export calendar
|
||||
csv = df_calendar.to_csv(index=False)
|
||||
st.download_button(
|
||||
label="📥 Download Content Calendar",
|
||||
data=csv,
|
||||
file_name=f"content_calendar_{datetime.now().strftime('%Y%m%d')}.csv",
|
||||
mime="text/csv"
|
||||
)
|
||||
|
||||
with tab4:
|
||||
topic_clusters = results.get('topic_clusters', [])
|
||||
if topic_clusters:
|
||||
st.subheader("SEO Topic Clusters")
|
||||
|
||||
for cluster in topic_clusters:
|
||||
with st.expander(f"🎯 {cluster['cluster_name']}", expanded=False):
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
st.markdown(f"**Primary Topic:** {cluster['primary_topic']}")
|
||||
st.markdown(f"**SEO Opportunity:** {cluster['seo_opportunity']}")
|
||||
st.markdown(f"**Linking Strategy:** {cluster['internal_linking_strategy']}")
|
||||
|
||||
with col2:
|
||||
st.markdown("**Supporting Topics:**")
|
||||
for topic in cluster['supporting_topics']:
|
||||
st.code(topic)
|
||||
|
||||
st.markdown("**Content Pieces:**")
|
||||
content_pieces = cluster['content_pieces']
|
||||
df_pieces = pd.DataFrame(content_pieces)
|
||||
st.dataframe(df_pieces, hide_index=True, use_container_width=True)
|
||||
|
||||
with tab5:
|
||||
distribution_strategy = results.get('distribution_strategy', {})
|
||||
if distribution_strategy:
|
||||
st.subheader("Content Distribution Strategy")
|
||||
|
||||
# Primary channels
|
||||
primary_channels = distribution_strategy.get('primary_channels', [])
|
||||
if primary_channels:
|
||||
st.markdown("#### 📢 Primary Distribution Channels")
|
||||
df_primary = pd.DataFrame(primary_channels)
|
||||
st.dataframe(df_primary, hide_index=True, use_container_width=True)
|
||||
|
||||
# Secondary channels
|
||||
secondary_channels = distribution_strategy.get('secondary_channels', [])
|
||||
if secondary_channels:
|
||||
st.markdown("#### 📺 Secondary Distribution Channels")
|
||||
df_secondary = pd.DataFrame(secondary_channels)
|
||||
st.dataframe(df_secondary, hide_index=True, use_container_width=True)
|
||||
|
||||
# Repurposing strategy
|
||||
repurposing = distribution_strategy.get('repurposing_strategy', {})
|
||||
if repurposing:
|
||||
st.markdown("#### ♻️ Content Repurposing Strategy")
|
||||
for strategy, description in repurposing.items():
|
||||
st.write(f"**{strategy.replace('_', ' ').title()}:** {description}")
|
||||
|
||||
with tab6:
|
||||
# Implementation roadmap
|
||||
roadmap = results.get('implementation_roadmap', {})
|
||||
kpi_framework = results.get('kpi_framework', {})
|
||||
|
||||
if roadmap:
|
||||
st.subheader("Implementation Roadmap")
|
||||
|
||||
for phase_key, phase_data in roadmap.items():
|
||||
with st.expander(f"📋 {phase_data['name']}", expanded=False):
|
||||
st.markdown(f"**Objectives:**")
|
||||
for objective in phase_data['objectives']:
|
||||
st.write(f"• {objective}")
|
||||
|
||||
st.markdown(f"**Deliverables:**")
|
||||
for deliverable in phase_data['deliverables']:
|
||||
st.write(f"📦 {deliverable}")
|
||||
|
||||
st.markdown(f"**Success Criteria:**")
|
||||
for criteria in phase_data['success_criteria']:
|
||||
st.write(f"✅ {criteria}")
|
||||
|
||||
if kpi_framework:
|
||||
st.subheader("KPI Framework")
|
||||
|
||||
# Primary KPIs
|
||||
primary_kpis = kpi_framework.get('primary_kpis', [])
|
||||
if primary_kpis:
|
||||
st.markdown("#### 🎯 Primary KPIs")
|
||||
df_primary_kpis = pd.DataFrame(primary_kpis)
|
||||
st.dataframe(df_primary_kpis, hide_index=True, use_container_width=True)
|
||||
|
||||
# Content KPIs
|
||||
content_kpis = kpi_framework.get('content_kpis', [])
|
||||
if content_kpis:
|
||||
st.markdown("#### 📝 Content KPIs")
|
||||
df_content_kpis = pd.DataFrame(content_kpis)
|
||||
st.dataframe(df_content_kpis, hide_index=True, use_container_width=True)
|
||||
|
||||
# Export functionality
|
||||
st.markdown("---")
|
||||
col1, col2, col3 = st.columns(3)
|
||||
|
||||
with col1:
|
||||
if st.button("📥 Export Full Strategy", use_container_width=True):
|
||||
strategy_json = json.dumps(results, indent=2, default=str)
|
||||
st.download_button(
|
||||
label="Download JSON Strategy",
|
||||
data=strategy_json,
|
||||
file_name=f"content_strategy_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json",
|
||||
mime="application/json"
|
||||
)
|
||||
|
||||
with col2:
|
||||
if st.button("📊 Export Calendar", use_container_width=True):
|
||||
calendar_items = content_calendar.get('calendar_items', [])
|
||||
if calendar_items:
|
||||
df_calendar = pd.DataFrame(calendar_items)
|
||||
csv = df_calendar.to_csv(index=False)
|
||||
st.download_button(
|
||||
label="Download CSV Calendar",
|
||||
data=csv,
|
||||
file_name=f"content_calendar_{datetime.now().strftime('%Y%m%d_%H%M%S')}.csv",
|
||||
mime="text/csv"
|
||||
)
|
||||
|
||||
with col3:
|
||||
if st.button("🔄 Generate New Strategy", use_container_width=True):
|
||||
if 'strategy_results' in st.session_state:
|
||||
del st.session_state.strategy_results
|
||||
st.rerun()
|
||||
|
||||
|
||||
# Main execution
|
||||
if __name__ == "__main__":
|
||||
render_ai_content_strategy()
|
||||
919
ToBeMigrated/ai_seo_tools/enterprise_seo_suite.py
Normal file
919
ToBeMigrated/ai_seo_tools/enterprise_seo_suite.py
Normal file
@@ -0,0 +1,919 @@
|
||||
"""
|
||||
Enterprise SEO Command Center
|
||||
|
||||
Unified AI-powered SEO suite that orchestrates all existing tools into
|
||||
intelligent workflows for enterprise-level SEO management.
|
||||
"""
|
||||
|
||||
import streamlit as st
|
||||
import asyncio
|
||||
import pandas as pd
|
||||
from typing import Dict, Any, List, Optional, Tuple
|
||||
from datetime import datetime, timedelta
|
||||
import json
|
||||
from loguru import logger
|
||||
|
||||
# Import existing SEO tools
|
||||
from .on_page_seo_analyzer import fetch_seo_data
|
||||
from .content_gap_analysis.enhanced_analyzer import EnhancedContentGapAnalyzer
|
||||
from .technical_seo_crawler.crawler import TechnicalSEOCrawler
|
||||
from .weburl_seo_checker import url_seo_checker
|
||||
from .google_pagespeed_insights import google_pagespeed_insights
|
||||
from ..gpt_providers.text_generation.main_text_generation import llm_text_gen
|
||||
|
||||
# Import the new enterprise tools
|
||||
from .google_search_console_integration import GoogleSearchConsoleAnalyzer, render_gsc_integration
|
||||
from .ai_content_strategy import AIContentStrategyGenerator, render_ai_content_strategy
|
||||
|
||||
class EnterpriseSEOSuite:
|
||||
"""
|
||||
Enterprise-level SEO suite orchestrating all tools into intelligent workflows.
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
"""Initialize the enterprise SEO suite."""
|
||||
self.gap_analyzer = EnhancedContentGapAnalyzer()
|
||||
self.technical_crawler = TechnicalSEOCrawler()
|
||||
|
||||
# Initialize new enterprise tools
|
||||
self.gsc_analyzer = GoogleSearchConsoleAnalyzer()
|
||||
self.content_strategy_generator = AIContentStrategyGenerator()
|
||||
|
||||
# SEO workflow templates
|
||||
self.workflow_templates = {
|
||||
'complete_audit': 'Complete SEO Audit',
|
||||
'content_strategy': 'Content Strategy Development',
|
||||
'technical_optimization': 'Technical SEO Optimization',
|
||||
'competitor_intelligence': 'Competitive Intelligence',
|
||||
'keyword_domination': 'Keyword Domination Strategy',
|
||||
'local_seo': 'Local SEO Optimization',
|
||||
'enterprise_monitoring': 'Enterprise SEO Monitoring'
|
||||
}
|
||||
|
||||
logger.info("Enterprise SEO Suite initialized")
|
||||
|
||||
async def execute_complete_seo_audit(self, website_url: str, competitors: List[str],
|
||||
target_keywords: List[str]) -> Dict[str, Any]:
|
||||
"""
|
||||
Execute a comprehensive enterprise SEO audit combining all tools.
|
||||
|
||||
Args:
|
||||
website_url: Primary website to audit
|
||||
competitors: List of competitor URLs (max 5)
|
||||
target_keywords: Primary keywords to optimize for
|
||||
|
||||
Returns:
|
||||
Comprehensive audit results with prioritized action plan
|
||||
"""
|
||||
try:
|
||||
st.info("🚀 Initiating Complete Enterprise SEO Audit...")
|
||||
|
||||
audit_results = {
|
||||
'audit_timestamp': datetime.utcnow().isoformat(),
|
||||
'website_url': website_url,
|
||||
'competitors': competitors[:5],
|
||||
'target_keywords': target_keywords,
|
||||
'technical_audit': {},
|
||||
'content_analysis': {},
|
||||
'competitive_intelligence': {},
|
||||
'on_page_analysis': {},
|
||||
'performance_metrics': {},
|
||||
'strategic_recommendations': {},
|
||||
'priority_action_plan': []
|
||||
}
|
||||
|
||||
# Phase 1: Technical SEO Audit
|
||||
with st.expander("🔧 Technical SEO Analysis", expanded=True):
|
||||
st.info("Analyzing technical SEO factors...")
|
||||
technical_results = await self._run_technical_audit(website_url)
|
||||
audit_results['technical_audit'] = technical_results
|
||||
st.success("✅ Technical audit completed")
|
||||
|
||||
# Phase 2: Content Gap Analysis
|
||||
with st.expander("📊 Content Intelligence Analysis", expanded=True):
|
||||
st.info("Analyzing content gaps and opportunities...")
|
||||
content_results = await self._run_content_analysis(
|
||||
website_url, competitors, target_keywords
|
||||
)
|
||||
audit_results['content_analysis'] = content_results
|
||||
st.success("✅ Content analysis completed")
|
||||
|
||||
# Phase 3: On-Page SEO Analysis
|
||||
with st.expander("🔍 On-Page SEO Analysis", expanded=True):
|
||||
st.info("Analyzing on-page SEO factors...")
|
||||
onpage_results = await self._run_onpage_analysis(website_url)
|
||||
audit_results['on_page_analysis'] = onpage_results
|
||||
st.success("✅ On-page analysis completed")
|
||||
|
||||
# Phase 4: Performance Analysis
|
||||
with st.expander("⚡ Performance Analysis", expanded=True):
|
||||
st.info("Analyzing website performance...")
|
||||
performance_results = await self._run_performance_analysis(website_url)
|
||||
audit_results['performance_metrics'] = performance_results
|
||||
st.success("✅ Performance analysis completed")
|
||||
|
||||
# Phase 5: AI-Powered Strategic Recommendations
|
||||
with st.expander("🤖 AI Strategic Analysis", expanded=True):
|
||||
st.info("Generating AI-powered strategic recommendations...")
|
||||
strategic_analysis = await self._generate_strategic_recommendations(audit_results)
|
||||
audit_results['strategic_recommendations'] = strategic_analysis
|
||||
|
||||
# Generate prioritized action plan
|
||||
action_plan = await self._create_priority_action_plan(audit_results)
|
||||
audit_results['priority_action_plan'] = action_plan
|
||||
st.success("✅ Strategic analysis completed")
|
||||
|
||||
return audit_results
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Error in complete SEO audit: {str(e)}"
|
||||
logger.error(error_msg, exc_info=True)
|
||||
st.error(error_msg)
|
||||
return {'error': error_msg}
|
||||
|
||||
async def _run_technical_audit(self, website_url: str) -> Dict[str, Any]:
|
||||
"""Run comprehensive technical SEO audit."""
|
||||
try:
|
||||
# Use existing technical crawler
|
||||
technical_results = self.technical_crawler.analyze_website_technical_seo(
|
||||
website_url, crawl_depth=3, max_pages=100
|
||||
)
|
||||
|
||||
# Enhance with additional technical checks
|
||||
enhanced_results = {
|
||||
'crawler_results': technical_results,
|
||||
'critical_issues': self._identify_critical_technical_issues(technical_results),
|
||||
'performance_score': self._calculate_technical_score(technical_results),
|
||||
'priority_fixes': self._prioritize_technical_fixes(technical_results)
|
||||
}
|
||||
|
||||
return enhanced_results
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Technical audit error: {str(e)}")
|
||||
return {'error': str(e)}
|
||||
|
||||
async def _run_content_analysis(self, website_url: str, competitors: List[str],
|
||||
keywords: List[str]) -> Dict[str, Any]:
|
||||
"""Run comprehensive content gap analysis."""
|
||||
try:
|
||||
# Use existing content gap analyzer
|
||||
content_results = self.gap_analyzer.analyze_comprehensive_gap(
|
||||
website_url, competitors, keywords, industry="general"
|
||||
)
|
||||
|
||||
# Enhance with content strategy insights
|
||||
enhanced_results = {
|
||||
'gap_analysis': content_results,
|
||||
'content_opportunities': self._identify_content_opportunities(content_results),
|
||||
'keyword_strategy': self._develop_keyword_strategy(content_results),
|
||||
'competitive_advantages': self._find_competitive_advantages(content_results)
|
||||
}
|
||||
|
||||
return enhanced_results
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Content analysis error: {str(e)}")
|
||||
return {'error': str(e)}
|
||||
|
||||
async def _run_onpage_analysis(self, website_url: str) -> Dict[str, Any]:
|
||||
"""Run on-page SEO analysis."""
|
||||
try:
|
||||
# Use existing on-page analyzer
|
||||
onpage_data = fetch_seo_data(website_url)
|
||||
|
||||
# Enhanced analysis
|
||||
enhanced_results = {
|
||||
'seo_data': onpage_data,
|
||||
'optimization_score': self._calculate_onpage_score(onpage_data),
|
||||
'meta_optimization': self._analyze_meta_optimization(onpage_data),
|
||||
'content_optimization': self._analyze_content_optimization(onpage_data)
|
||||
}
|
||||
|
||||
return enhanced_results
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"On-page analysis error: {str(e)}")
|
||||
return {'error': str(e)}
|
||||
|
||||
async def _run_performance_analysis(self, website_url: str) -> Dict[str, Any]:
|
||||
"""Run website performance analysis."""
|
||||
try:
|
||||
# Comprehensive performance metrics
|
||||
performance_results = {
|
||||
'core_web_vitals': await self._analyze_core_web_vitals(website_url),
|
||||
'loading_performance': await self._analyze_loading_performance(website_url),
|
||||
'mobile_optimization': await self._analyze_mobile_optimization(website_url),
|
||||
'performance_score': 0 # Will be calculated
|
||||
}
|
||||
|
||||
# Calculate overall performance score
|
||||
performance_results['performance_score'] = self._calculate_performance_score(
|
||||
performance_results
|
||||
)
|
||||
|
||||
return performance_results
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Performance analysis error: {str(e)}")
|
||||
return {'error': str(e)}
|
||||
|
||||
async def _generate_strategic_recommendations(self, audit_results: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Generate AI-powered strategic recommendations."""
|
||||
try:
|
||||
# Compile audit summary for AI analysis
|
||||
audit_summary = {
|
||||
'technical_score': audit_results.get('technical_audit', {}).get('performance_score', 0),
|
||||
'content_gaps': len(audit_results.get('content_analysis', {}).get('content_opportunities', [])),
|
||||
'onpage_score': audit_results.get('on_page_analysis', {}).get('optimization_score', 0),
|
||||
'performance_score': audit_results.get('performance_metrics', {}).get('performance_score', 0)
|
||||
}
|
||||
|
||||
strategic_prompt = f"""
|
||||
Analyze this comprehensive SEO audit and provide strategic recommendations:
|
||||
|
||||
AUDIT SUMMARY:
|
||||
- Technical SEO Score: {audit_summary['technical_score']}/100
|
||||
- Content Gaps Identified: {audit_summary['content_gaps']}
|
||||
- On-Page SEO Score: {audit_summary['onpage_score']}/100
|
||||
- Performance Score: {audit_summary['performance_score']}/100
|
||||
|
||||
DETAILED FINDINGS:
|
||||
Technical Issues: {json.dumps(audit_results.get('technical_audit', {}), indent=2)[:1000]}
|
||||
Content Opportunities: {json.dumps(audit_results.get('content_analysis', {}), indent=2)[:1000]}
|
||||
|
||||
Provide strategic recommendations in these categories:
|
||||
|
||||
1. IMMEDIATE WINS (0-30 days):
|
||||
- Quick technical fixes with high impact
|
||||
- Content optimizations for existing pages
|
||||
- Critical performance improvements
|
||||
|
||||
2. STRATEGIC INITIATIVES (1-3 months):
|
||||
- Content strategy development
|
||||
- Technical architecture improvements
|
||||
- Competitive positioning strategies
|
||||
|
||||
3. LONG-TERM GROWTH (3-12 months):
|
||||
- Authority building strategies
|
||||
- Market expansion opportunities
|
||||
- Advanced SEO techniques
|
||||
|
||||
4. RISK MITIGATION:
|
||||
- Technical vulnerabilities to address
|
||||
- Content gaps that competitors could exploit
|
||||
- Performance issues affecting user experience
|
||||
|
||||
Provide specific, actionable recommendations with expected impact and effort estimates.
|
||||
"""
|
||||
|
||||
strategic_analysis = llm_text_gen(
|
||||
strategic_prompt,
|
||||
system_prompt="You are an enterprise SEO strategist with 10+ years of experience. Provide detailed, actionable recommendations based on comprehensive audit data."
|
||||
)
|
||||
|
||||
return {
|
||||
'full_analysis': strategic_analysis,
|
||||
'immediate_wins': self._extract_immediate_wins(strategic_analysis),
|
||||
'strategic_initiatives': self._extract_strategic_initiatives(strategic_analysis),
|
||||
'long_term_growth': self._extract_long_term_growth(strategic_analysis),
|
||||
'risk_mitigation': self._extract_risk_mitigation(strategic_analysis)
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Strategic analysis error: {str(e)}")
|
||||
return {'error': str(e)}
|
||||
|
||||
async def _create_priority_action_plan(self, audit_results: Dict[str, Any]) -> List[Dict[str, Any]]:
|
||||
"""Create prioritized action plan from audit results."""
|
||||
try:
|
||||
action_plan = []
|
||||
|
||||
# Extract recommendations from all analysis phases
|
||||
strategic_recs = audit_results.get('strategic_recommendations', {})
|
||||
|
||||
# Immediate wins (High priority, low effort)
|
||||
immediate_wins = strategic_recs.get('immediate_wins', [])
|
||||
for win in immediate_wins[:5]:
|
||||
action_plan.append({
|
||||
'category': 'Immediate Win',
|
||||
'priority': 'Critical',
|
||||
'effort': 'Low',
|
||||
'timeframe': '0-30 days',
|
||||
'action': win,
|
||||
'expected_impact': 'High',
|
||||
'source': 'Strategic Analysis'
|
||||
})
|
||||
|
||||
# Technical fixes
|
||||
technical_issues = audit_results.get('technical_audit', {}).get('critical_issues', [])
|
||||
for issue in technical_issues[:3]:
|
||||
action_plan.append({
|
||||
'category': 'Technical SEO',
|
||||
'priority': 'High',
|
||||
'effort': 'Medium',
|
||||
'timeframe': '1-4 weeks',
|
||||
'action': issue,
|
||||
'expected_impact': 'High',
|
||||
'source': 'Technical Audit'
|
||||
})
|
||||
|
||||
# Content opportunities
|
||||
content_ops = audit_results.get('content_analysis', {}).get('content_opportunities', [])
|
||||
for opportunity in content_ops[:3]:
|
||||
action_plan.append({
|
||||
'category': 'Content Strategy',
|
||||
'priority': 'Medium',
|
||||
'effort': 'High',
|
||||
'timeframe': '2-8 weeks',
|
||||
'action': opportunity,
|
||||
'expected_impact': 'Medium',
|
||||
'source': 'Content Analysis'
|
||||
})
|
||||
|
||||
# Sort by priority and expected impact
|
||||
priority_order = {'Critical': 0, 'High': 1, 'Medium': 2, 'Low': 3}
|
||||
action_plan.sort(key=lambda x: priority_order.get(x['priority'], 4))
|
||||
|
||||
return action_plan[:15] # Top 15 actions
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Action plan creation error: {str(e)}")
|
||||
return []
|
||||
|
||||
# Utility methods for analysis
|
||||
def _identify_critical_technical_issues(self, technical_results: Dict[str, Any]) -> List[str]:
|
||||
"""Identify critical technical SEO issues."""
|
||||
critical_issues = []
|
||||
|
||||
# Add logic to identify critical technical issues
|
||||
# This would analyze the technical_results and extract critical problems
|
||||
|
||||
return critical_issues
|
||||
|
||||
def _calculate_technical_score(self, technical_results: Dict[str, Any]) -> int:
|
||||
"""Calculate technical SEO score."""
|
||||
# Implement scoring algorithm based on technical audit results
|
||||
return 75 # Placeholder
|
||||
|
||||
def _prioritize_technical_fixes(self, technical_results: Dict[str, Any]) -> List[str]:
|
||||
"""Prioritize technical fixes by impact and effort."""
|
||||
# Implement prioritization logic
|
||||
return ["Fix broken links", "Optimize images", "Improve page speed"]
|
||||
|
||||
def _identify_content_opportunities(self, content_results: Dict[str, Any]) -> List[str]:
|
||||
"""Identify top content opportunities."""
|
||||
# Extract content opportunities from gap analysis
|
||||
return ["Create FAQ content", "Develop comparison guides", "Write how-to articles"]
|
||||
|
||||
def _develop_keyword_strategy(self, content_results: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Develop keyword strategy from content analysis."""
|
||||
return {
|
||||
'primary_keywords': [],
|
||||
'secondary_keywords': [],
|
||||
'long_tail_opportunities': [],
|
||||
'competitor_gaps': []
|
||||
}
|
||||
|
||||
def _find_competitive_advantages(self, content_results: Dict[str, Any]) -> List[str]:
|
||||
"""Find competitive advantages from analysis."""
|
||||
return ["Unique content angles", "Underserved niches", "Technical superiority"]
|
||||
|
||||
def _calculate_onpage_score(self, onpage_data: Dict[str, Any]) -> int:
|
||||
"""Calculate on-page SEO score."""
|
||||
return 80 # Placeholder
|
||||
|
||||
def _analyze_meta_optimization(self, onpage_data: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Analyze meta tag optimization."""
|
||||
return {'title_optimization': 'good', 'description_optimization': 'needs_work'}
|
||||
|
||||
def _analyze_content_optimization(self, onpage_data: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Analyze content optimization."""
|
||||
return {'keyword_density': 'optimal', 'content_length': 'adequate'}
|
||||
|
||||
async def _analyze_core_web_vitals(self, website_url: str) -> Dict[str, Any]:
|
||||
"""Analyze Core Web Vitals."""
|
||||
return {'lcp': 2.5, 'fid': 100, 'cls': 0.1}
|
||||
|
||||
async def _analyze_loading_performance(self, website_url: str) -> Dict[str, Any]:
|
||||
"""Analyze loading performance."""
|
||||
return {'ttfb': 200, 'fcp': 1.5, 'speed_index': 3.0}
|
||||
|
||||
async def _analyze_mobile_optimization(self, website_url: str) -> Dict[str, Any]:
|
||||
"""Analyze mobile optimization."""
|
||||
return {'mobile_friendly': True, 'responsive_design': True}
|
||||
|
||||
def _calculate_performance_score(self, performance_results: Dict[str, Any]) -> int:
|
||||
"""Calculate overall performance score."""
|
||||
return 85 # Placeholder
|
||||
|
||||
def _extract_immediate_wins(self, analysis: str) -> List[str]:
|
||||
"""Extract immediate wins from strategic analysis."""
|
||||
# Parse the AI analysis and extract immediate wins
|
||||
lines = analysis.split('\n')
|
||||
wins = []
|
||||
in_immediate_section = False
|
||||
|
||||
for line in lines:
|
||||
if 'IMMEDIATE WINS' in line.upper():
|
||||
in_immediate_section = True
|
||||
continue
|
||||
elif 'STRATEGIC INITIATIVES' in line.upper():
|
||||
in_immediate_section = False
|
||||
continue
|
||||
|
||||
if in_immediate_section and line.strip().startswith('-'):
|
||||
wins.append(line.strip().lstrip('- '))
|
||||
|
||||
return wins[:5]
|
||||
|
||||
def _extract_strategic_initiatives(self, analysis: str) -> List[str]:
|
||||
"""Extract strategic initiatives from analysis."""
|
||||
# Similar extraction logic for strategic initiatives
|
||||
return ["Develop content hub", "Implement schema markup", "Build authority pages"]
|
||||
|
||||
def _extract_long_term_growth(self, analysis: str) -> List[str]:
|
||||
"""Extract long-term growth strategies."""
|
||||
return ["Market expansion", "Authority building", "Advanced technical SEO"]
|
||||
|
||||
def _extract_risk_mitigation(self, analysis: str) -> List[str]:
|
||||
"""Extract risk mitigation strategies."""
|
||||
return ["Fix technical vulnerabilities", "Address content gaps", "Improve performance"]
|
||||
|
||||
def execute_content_strategy_workflow(self, business_info: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""
|
||||
Execute comprehensive content strategy workflow using AI insights.
|
||||
|
||||
Args:
|
||||
business_info: Business context and objectives
|
||||
|
||||
Returns:
|
||||
Complete content strategy with implementation plan
|
||||
"""
|
||||
try:
|
||||
st.info("🧠 Executing AI-powered content strategy workflow...")
|
||||
|
||||
# Generate AI content strategy
|
||||
content_strategy = self.content_strategy_generator.generate_content_strategy(business_info)
|
||||
|
||||
# If GSC data is available, enhance with search insights
|
||||
if business_info.get('gsc_site_url'):
|
||||
gsc_insights = self.gsc_analyzer.analyze_search_performance(
|
||||
business_info['gsc_site_url'],
|
||||
business_info.get('gsc_date_range', 90)
|
||||
)
|
||||
content_strategy['gsc_insights'] = gsc_insights
|
||||
|
||||
# Generate SEO-optimized content recommendations
|
||||
seo_content_recs = self._generate_seo_content_recommendations(content_strategy)
|
||||
content_strategy['seo_recommendations'] = seo_content_recs
|
||||
|
||||
return content_strategy
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Content strategy workflow error: {str(e)}")
|
||||
return {'error': str(e)}
|
||||
|
||||
def execute_search_intelligence_workflow(self, site_url: str, date_range: int = 90) -> Dict[str, Any]:
|
||||
"""
|
||||
Execute comprehensive search intelligence workflow using GSC data.
|
||||
|
||||
Args:
|
||||
site_url: Website URL registered in GSC
|
||||
date_range: Analysis period in days
|
||||
|
||||
Returns:
|
||||
Complete search intelligence analysis with actionable insights
|
||||
"""
|
||||
try:
|
||||
st.info("📊 Executing search intelligence workflow...")
|
||||
|
||||
# Analyze GSC performance
|
||||
gsc_analysis = self.gsc_analyzer.analyze_search_performance(site_url, date_range)
|
||||
|
||||
# Enhance with technical SEO analysis
|
||||
technical_analysis = self.technical_crawler.crawl_and_analyze(site_url)
|
||||
gsc_analysis['technical_insights'] = technical_analysis
|
||||
|
||||
# Generate content gap analysis based on GSC keywords
|
||||
if gsc_analysis.get('keyword_analysis'):
|
||||
keywords = [kw['keyword'] for kw in gsc_analysis['keyword_analysis'].get('high_volume_keywords', [])]
|
||||
content_gaps = self.gap_analyzer.analyze_content_gaps(
|
||||
keywords[:10], # Top 10 keywords
|
||||
site_url
|
||||
)
|
||||
gsc_analysis['content_gap_analysis'] = content_gaps
|
||||
|
||||
# Generate comprehensive recommendations
|
||||
search_recommendations = self._generate_search_intelligence_recommendations(gsc_analysis)
|
||||
gsc_analysis['comprehensive_recommendations'] = search_recommendations
|
||||
|
||||
return gsc_analysis
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Search intelligence workflow error: {str(e)}")
|
||||
return {'error': str(e)}
|
||||
|
||||
def _generate_seo_content_recommendations(self, content_strategy: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Generate SEO-optimized content recommendations based on strategy."""
|
||||
try:
|
||||
content_pillars = content_strategy.get('content_pillars', [])
|
||||
|
||||
seo_recommendations = {
|
||||
'keyword_optimization': [],
|
||||
'content_structure': [],
|
||||
'internal_linking': [],
|
||||
'technical_seo': []
|
||||
}
|
||||
|
||||
for pillar in content_pillars:
|
||||
# Keyword optimization recommendations
|
||||
for keyword in pillar.get('target_keywords', []):
|
||||
seo_recommendations['keyword_optimization'].append({
|
||||
'pillar': pillar['name'],
|
||||
'keyword': keyword,
|
||||
'recommendation': f"Create comprehensive content targeting '{keyword}' with semantic variations",
|
||||
'priority': 'High' if keyword in pillar['target_keywords'][:2] else 'Medium'
|
||||
})
|
||||
|
||||
# Content structure recommendations
|
||||
seo_recommendations['content_structure'].append({
|
||||
'pillar': pillar['name'],
|
||||
'recommendation': f"Create pillar page for {pillar['name']} with supporting cluster content",
|
||||
'structure': 'Pillar + Cluster model'
|
||||
})
|
||||
|
||||
# Internal linking strategy
|
||||
seo_recommendations['internal_linking'] = [
|
||||
"Link all cluster content to relevant pillar pages",
|
||||
"Create topic-based internal linking structure",
|
||||
"Use contextual anchor text with target keywords",
|
||||
"Implement breadcrumb navigation for topic clusters"
|
||||
]
|
||||
|
||||
# Technical SEO recommendations
|
||||
seo_recommendations['technical_seo'] = [
|
||||
"Optimize page speed for all content pages",
|
||||
"Implement structured data for articles",
|
||||
"Create XML sitemap sections for content categories",
|
||||
"Optimize images with descriptive alt text"
|
||||
]
|
||||
|
||||
return seo_recommendations
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"SEO content recommendations error: {str(e)}")
|
||||
return {'error': str(e)}
|
||||
|
||||
def _generate_search_intelligence_recommendations(self, gsc_analysis: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Generate comprehensive recommendations from search intelligence analysis."""
|
||||
try:
|
||||
recommendations = {
|
||||
'immediate_actions': [],
|
||||
'content_opportunities': [],
|
||||
'technical_improvements': [],
|
||||
'strategic_initiatives': []
|
||||
}
|
||||
|
||||
# Extract content opportunities from GSC analysis
|
||||
content_opps = gsc_analysis.get('content_opportunities', [])
|
||||
for opp in content_opps[:5]: # Top 5 opportunities
|
||||
recommendations['content_opportunities'].append({
|
||||
'type': opp['type'],
|
||||
'keyword': opp['keyword'],
|
||||
'action': opp['opportunity'],
|
||||
'priority': opp['priority'],
|
||||
'estimated_impact': opp['potential_impact']
|
||||
})
|
||||
|
||||
# Technical improvements from analysis
|
||||
technical_insights = gsc_analysis.get('technical_insights', {})
|
||||
if technical_insights.get('crawl_issues_indicators'):
|
||||
for issue in technical_insights['crawl_issues_indicators']:
|
||||
recommendations['technical_improvements'].append({
|
||||
'issue': issue,
|
||||
'priority': 'High',
|
||||
'category': 'Crawl & Indexing'
|
||||
})
|
||||
|
||||
# Immediate actions based on performance
|
||||
performance = gsc_analysis.get('performance_overview', {})
|
||||
if performance.get('avg_ctr', 0) < 2:
|
||||
recommendations['immediate_actions'].append({
|
||||
'action': 'Improve meta descriptions and titles for better CTR',
|
||||
'expected_impact': 'Increase CTR by 1-2%',
|
||||
'timeline': '2-4 weeks'
|
||||
})
|
||||
|
||||
if performance.get('avg_position', 0) > 10:
|
||||
recommendations['immediate_actions'].append({
|
||||
'action': 'Focus on improving content quality for top keywords',
|
||||
'expected_impact': 'Improve average position by 2-5 ranks',
|
||||
'timeline': '4-8 weeks'
|
||||
})
|
||||
|
||||
# Strategic initiatives
|
||||
competitive_analysis = gsc_analysis.get('competitive_analysis', {})
|
||||
if competitive_analysis.get('market_position') in ['Challenger', 'Emerging Player']:
|
||||
recommendations['strategic_initiatives'].append({
|
||||
'initiative': 'Develop thought leadership content strategy',
|
||||
'goal': 'Improve market position and brand authority',
|
||||
'timeline': '3-6 months'
|
||||
})
|
||||
|
||||
return recommendations
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Search intelligence recommendations error: {str(e)}")
|
||||
return {'error': str(e)}
|
||||
|
||||
def render_enterprise_seo_suite():
|
||||
"""Render the Enterprise SEO Command Center interface."""
|
||||
|
||||
st.set_page_config(
|
||||
page_title="Enterprise SEO Command Center",
|
||||
page_icon="🚀",
|
||||
layout="wide"
|
||||
)
|
||||
|
||||
st.title("🚀 Enterprise SEO Command Center")
|
||||
st.markdown("**Unified AI-powered SEO suite orchestrating all tools into intelligent workflows**")
|
||||
|
||||
# Initialize suite
|
||||
if 'enterprise_seo_suite' not in st.session_state:
|
||||
st.session_state.enterprise_seo_suite = EnterpriseSEOSuite()
|
||||
|
||||
suite = st.session_state.enterprise_seo_suite
|
||||
|
||||
# Workflow selection
|
||||
st.sidebar.header("🎯 SEO Workflow Selection")
|
||||
selected_workflow = st.sidebar.selectbox(
|
||||
"Choose Workflow",
|
||||
list(suite.workflow_templates.keys()),
|
||||
format_func=lambda x: suite.workflow_templates[x]
|
||||
)
|
||||
|
||||
# Main workflow interface
|
||||
if selected_workflow == 'complete_audit':
|
||||
st.header("🔍 Complete Enterprise SEO Audit")
|
||||
render_complete_audit_interface(suite)
|
||||
elif selected_workflow == 'content_strategy':
|
||||
st.header("📊 Content Strategy Development")
|
||||
render_content_strategy_interface(suite)
|
||||
elif selected_workflow == 'technical_optimization':
|
||||
st.header("🔧 Technical SEO Optimization")
|
||||
render_technical_optimization_interface(suite)
|
||||
else:
|
||||
st.info(f"Workflow '{suite.workflow_templates[selected_workflow]}' is being developed.")
|
||||
|
||||
def render_complete_audit_interface(suite: EnterpriseSEOSuite):
|
||||
"""Render the complete audit workflow interface."""
|
||||
|
||||
# Input form
|
||||
with st.form("enterprise_audit_form"):
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
website_url = st.text_input(
|
||||
"Website URL",
|
||||
value="https://example.com",
|
||||
help="Enter your website URL for comprehensive analysis"
|
||||
)
|
||||
|
||||
target_keywords = st.text_area(
|
||||
"Target Keywords (one per line)",
|
||||
value="AI content creation\nSEO tools\ncontent optimization",
|
||||
help="Enter your primary keywords to optimize for"
|
||||
)
|
||||
|
||||
with col2:
|
||||
competitors = st.text_area(
|
||||
"Competitor URLs (one per line)",
|
||||
value="https://jasper.ai\nhttps://copy.ai\nhttps://writesonic.com",
|
||||
help="Enter up to 5 competitor URLs for analysis"
|
||||
)
|
||||
|
||||
submit_audit = st.form_submit_button("🚀 Start Complete SEO Audit", type="primary")
|
||||
|
||||
# Process audit
|
||||
if submit_audit:
|
||||
if website_url and target_keywords:
|
||||
# Parse inputs
|
||||
keywords_list = [k.strip() for k in target_keywords.split('\n') if k.strip()]
|
||||
competitors_list = [c.strip() for c in competitors.split('\n') if c.strip()]
|
||||
|
||||
# Run audit
|
||||
with st.spinner("🔍 Running comprehensive SEO audit..."):
|
||||
audit_results = asyncio.run(
|
||||
suite.execute_complete_seo_audit(
|
||||
website_url, competitors_list, keywords_list
|
||||
)
|
||||
)
|
||||
|
||||
if 'error' not in audit_results:
|
||||
st.success("✅ Enterprise SEO audit completed!")
|
||||
|
||||
# Display results dashboard
|
||||
render_audit_results_dashboard(audit_results)
|
||||
else:
|
||||
st.error(f"❌ Audit failed: {audit_results['error']}")
|
||||
else:
|
||||
st.warning("⚠️ Please enter website URL and target keywords.")
|
||||
|
||||
def render_audit_results_dashboard(results: Dict[str, Any]):
|
||||
"""Render comprehensive audit results dashboard."""
|
||||
|
||||
# Priority Action Plan (Most Important)
|
||||
st.header("📋 Priority Action Plan")
|
||||
action_plan = results.get('priority_action_plan', [])
|
||||
|
||||
if action_plan:
|
||||
# Display as interactive table
|
||||
df_actions = pd.DataFrame(action_plan)
|
||||
|
||||
# Style the dataframe
|
||||
st.dataframe(
|
||||
df_actions,
|
||||
column_config={
|
||||
"category": "Category",
|
||||
"priority": st.column_config.SelectboxColumn(
|
||||
"Priority",
|
||||
options=["Critical", "High", "Medium", "Low"]
|
||||
),
|
||||
"effort": "Effort Level",
|
||||
"timeframe": "Timeline",
|
||||
"action": "Action Required",
|
||||
"expected_impact": "Expected Impact"
|
||||
},
|
||||
hide_index=True,
|
||||
use_container_width=True
|
||||
)
|
||||
|
||||
# Key Metrics Overview
|
||||
st.header("📊 SEO Health Dashboard")
|
||||
|
||||
col1, col2, col3, col4 = st.columns(4)
|
||||
|
||||
with col1:
|
||||
technical_score = results.get('technical_audit', {}).get('performance_score', 0)
|
||||
st.metric("Technical SEO", f"{technical_score}/100", delta=None)
|
||||
|
||||
with col2:
|
||||
onpage_score = results.get('on_page_analysis', {}).get('optimization_score', 0)
|
||||
st.metric("On-Page SEO", f"{onpage_score}/100", delta=None)
|
||||
|
||||
with col3:
|
||||
performance_score = results.get('performance_metrics', {}).get('performance_score', 0)
|
||||
st.metric("Performance", f"{performance_score}/100", delta=None)
|
||||
|
||||
with col4:
|
||||
content_gaps = len(results.get('content_analysis', {}).get('content_opportunities', []))
|
||||
st.metric("Content Opportunities", content_gaps, delta=None)
|
||||
|
||||
# Detailed Analysis Sections
|
||||
tab1, tab2, tab3, tab4, tab5 = st.tabs([
|
||||
"🤖 Strategic Insights",
|
||||
"🔧 Technical Analysis",
|
||||
"📊 Content Intelligence",
|
||||
"🔍 On-Page Analysis",
|
||||
"⚡ Performance Metrics"
|
||||
])
|
||||
|
||||
with tab1:
|
||||
strategic_recs = results.get('strategic_recommendations', {})
|
||||
if strategic_recs:
|
||||
st.subheader("AI-Powered Strategic Recommendations")
|
||||
|
||||
# Immediate wins
|
||||
immediate_wins = strategic_recs.get('immediate_wins', [])
|
||||
if immediate_wins:
|
||||
st.markdown("#### 🚀 Immediate Wins (0-30 days)")
|
||||
for win in immediate_wins[:5]:
|
||||
st.success(f"✅ {win}")
|
||||
|
||||
# Strategic initiatives
|
||||
strategic_initiatives = strategic_recs.get('strategic_initiatives', [])
|
||||
if strategic_initiatives:
|
||||
st.markdown("#### 📈 Strategic Initiatives (1-3 months)")
|
||||
for initiative in strategic_initiatives[:3]:
|
||||
st.info(f"📋 {initiative}")
|
||||
|
||||
# Full analysis
|
||||
full_analysis = strategic_recs.get('full_analysis', '')
|
||||
if full_analysis:
|
||||
with st.expander("🧠 Complete Strategic Analysis"):
|
||||
st.write(full_analysis)
|
||||
|
||||
with tab2:
|
||||
technical_audit = results.get('technical_audit', {})
|
||||
if technical_audit:
|
||||
st.subheader("Technical SEO Analysis")
|
||||
|
||||
critical_issues = technical_audit.get('critical_issues', [])
|
||||
if critical_issues:
|
||||
st.markdown("#### ⚠️ Critical Issues")
|
||||
for issue in critical_issues:
|
||||
st.error(f"🚨 {issue}")
|
||||
|
||||
priority_fixes = technical_audit.get('priority_fixes', [])
|
||||
if priority_fixes:
|
||||
st.markdown("#### 🔧 Priority Fixes")
|
||||
for fix in priority_fixes:
|
||||
st.warning(f"🛠️ {fix}")
|
||||
|
||||
with tab3:
|
||||
content_analysis = results.get('content_analysis', {})
|
||||
if content_analysis:
|
||||
st.subheader("Content Intelligence")
|
||||
|
||||
content_opportunities = content_analysis.get('content_opportunities', [])
|
||||
if content_opportunities:
|
||||
st.markdown("#### 📝 Content Opportunities")
|
||||
for opportunity in content_opportunities[:5]:
|
||||
st.info(f"💡 {opportunity}")
|
||||
|
||||
competitive_advantages = content_analysis.get('competitive_advantages', [])
|
||||
if competitive_advantages:
|
||||
st.markdown("#### 🏆 Competitive Advantages")
|
||||
for advantage in competitive_advantages:
|
||||
st.success(f"⭐ {advantage}")
|
||||
|
||||
with tab4:
|
||||
onpage_analysis = results.get('on_page_analysis', {})
|
||||
if onpage_analysis:
|
||||
st.subheader("On-Page SEO Analysis")
|
||||
|
||||
meta_optimization = onpage_analysis.get('meta_optimization', {})
|
||||
content_optimization = onpage_analysis.get('content_optimization', {})
|
||||
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
st.markdown("#### 🏷️ Meta Tag Optimization")
|
||||
st.json(meta_optimization)
|
||||
|
||||
with col2:
|
||||
st.markdown("#### 📄 Content Optimization")
|
||||
st.json(content_optimization)
|
||||
|
||||
with tab5:
|
||||
performance_metrics = results.get('performance_metrics', {})
|
||||
if performance_metrics:
|
||||
st.subheader("Performance Analysis")
|
||||
|
||||
core_vitals = performance_metrics.get('core_web_vitals', {})
|
||||
loading_performance = performance_metrics.get('loading_performance', {})
|
||||
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
st.markdown("#### ⚡ Core Web Vitals")
|
||||
st.json(core_vitals)
|
||||
|
||||
with col2:
|
||||
st.markdown("#### 🚀 Loading Performance")
|
||||
st.json(loading_performance)
|
||||
|
||||
# Export functionality
|
||||
st.markdown("---")
|
||||
col1, col2, col3 = st.columns(3)
|
||||
|
||||
with col1:
|
||||
if st.button("📥 Export Full Report", use_container_width=True):
|
||||
# Create downloadable report
|
||||
report_json = json.dumps(results, indent=2, default=str)
|
||||
st.download_button(
|
||||
label="Download JSON Report",
|
||||
data=report_json,
|
||||
file_name=f"seo_audit_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json",
|
||||
mime="application/json"
|
||||
)
|
||||
|
||||
with col2:
|
||||
if st.button("📊 Export Action Plan", use_container_width=True):
|
||||
# Create CSV of action plan
|
||||
df_actions = pd.DataFrame(action_plan)
|
||||
csv = df_actions.to_csv(index=False)
|
||||
st.download_button(
|
||||
label="Download CSV Action Plan",
|
||||
data=csv,
|
||||
file_name=f"action_plan_{datetime.now().strftime('%Y%m%d_%H%M%S')}.csv",
|
||||
mime="text/csv"
|
||||
)
|
||||
|
||||
with col3:
|
||||
if st.button("🔄 Schedule Follow-up Audit", use_container_width=True):
|
||||
st.info("Follow-up scheduling feature coming soon!")
|
||||
|
||||
def render_content_strategy_interface(suite: EnterpriseSEOSuite):
|
||||
"""Render content strategy development interface."""
|
||||
st.info("🚧 Content Strategy Development workflow coming soon!")
|
||||
|
||||
def render_technical_optimization_interface(suite: EnterpriseSEOSuite):
|
||||
"""Render technical optimization interface."""
|
||||
st.info("🚧 Technical SEO Optimization workflow coming soon!")
|
||||
|
||||
|
||||
# Main execution
|
||||
if __name__ == "__main__":
|
||||
render_enterprise_seo_suite()
|
||||
135
ToBeMigrated/ai_seo_tools/google_pagespeed_insights.py
Normal file
@@ -0,0 +1,135 @@
|
||||
import requests
|
||||
import streamlit as st
|
||||
import json
|
||||
import pandas as pd
|
||||
import plotly.express as px
|
||||
from tenacity import retry, stop_after_attempt, wait_random_exponential
|
||||
from datetime import datetime
|
||||
|
||||
def run_pagespeed(url, api_key=None, strategy='DESKTOP', locale='en'):
|
||||
"""Fetches and processes PageSpeed Insights data."""
|
||||
serviceurl = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed'
|
||||
base_url = f"{serviceurl}?url={url}&strategy={strategy}&locale={locale}&category=performance&category=accessibility&category=best-practices&category=seo"
|
||||
|
||||
if api_key:
|
||||
base_url += f"&key={api_key}"
|
||||
|
||||
try:
|
||||
response = requests.get(base_url)
|
||||
response.raise_for_status() # Raise an exception for bad status codes
|
||||
data = response.json()
|
||||
return data
|
||||
except requests.exceptions.RequestException as e:
|
||||
st.error(f"Error fetching PageSpeed Insights data: {e}")
|
||||
return None
|
||||
|
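A minimal sketch of calling `run_pagespeed` directly, for example from a notebook; `PAGESPEED_API_KEY` is an assumed environment variable, not something this module defines.

```python
import os

data = run_pagespeed(
    "https://www.example.com",
    api_key=os.getenv("PAGESPEED_API_KEY"),
    strategy="mobile",
    locale="en",
)
if data:
    perf = data["lighthouseResult"]["categories"]["performance"]["score"] * 100
    print(f"Performance score: {perf:.0f}/100")
```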
||||
def display_results(data):
|
||||
"""Presents PageSpeed Insights data in a user-friendly format."""
|
||||
st.subheader("PageSpeed Insights Report")
|
||||
|
||||
# Extract scores from the PageSpeed Insights data
|
||||
scores = {
|
||||
"Performance": data['lighthouseResult']['categories']['performance']['score'] * 100,
|
||||
"Accessibility": data['lighthouseResult']['categories']['accessibility']['score'] * 100,
|
||||
"SEO": data['lighthouseResult']['categories']['seo']['score'] * 100,
|
||||
"Best Practices": data['lighthouseResult']['categories']['best-practices']['score'] * 100
|
||||
}
|
||||
|
||||
descriptions = {
|
||||
"Performance": data['lighthouseResult']['categories']['performance'].get('description', "This score represents Google's assessment of your page's speed. A higher percentage indicates better performance."),
|
||||
"Accessibility": data['lighthouseResult']['categories']['accessibility'].get('description', "This score evaluates how accessible your page is to users with disabilities. A higher percentage means better accessibility."),
|
||||
"SEO": data['lighthouseResult']['categories']['seo'].get('description', "This score measures how well your page is optimized for search engines. A higher percentage indicates better SEO practices."),
|
||||
"Best Practices": data['lighthouseResult']['categories']['best-practices'].get('description', "This score reflects how well your page follows best practices for web development. A higher percentage signifies adherence to best practices.")
|
||||
}
|
||||
|
||||
for category, score in scores.items():
|
||||
st.metric(label=f"Overall {category} Score", value=f"{score:.0f}%", help=descriptions[category])
|
||||
|
||||
# Display additional metrics
|
||||
st.subheader("Additional Metrics")
|
||||
additional_metrics = {
|
||||
"First Contentful Paint (FCP)": data['lighthouseResult']['audits']['first-contentful-paint']['displayValue'],
|
||||
"Largest Contentful Paint (LCP)": data['lighthouseResult']['audits']['largest-contentful-paint']['displayValue'],
|
||||
"Time to Interactive (TTI)": data['lighthouseResult']['audits']['interactive']['displayValue'],
|
||||
"Total Blocking Time (TBT)": data['lighthouseResult']['audits']['total-blocking-time']['displayValue'],
|
||||
"Cumulative Layout Shift (CLS)": data['lighthouseResult']['audits']['cumulative-layout-shift']['displayValue']
|
||||
}
|
||||
|
||||
st.table(pd.DataFrame(additional_metrics.items(), columns=["Metric", "Value"]))
|
||||
|
||||
# Display Network Requests
|
||||
st.subheader("Network Requests")
|
||||
if 'network-requests' in data['lighthouseResult']['audits']:
|
||||
network_requests = [
|
||||
{
|
||||
"End Time": item.get("endTime", "N/A"),
|
||||
"Start Time": item.get("startTime", "N/A"),
|
||||
"Transfer Size (MB)": round(item.get("transferSize", 0) / 1048576, 2),
|
||||
"Resource Size (MB)": round(item.get("resourceSize", 0) / 1048576, 2),
|
||||
"URL": item.get("url", "N/A")
|
||||
}
|
||||
for item in data["lighthouseResult"]["audits"]["network-requests"]["details"]["items"]
|
||||
if item.get("transferSize", 0) > 100000 or item.get("resourceSize", 0) > 100000
|
||||
]
|
||||
if network_requests:
|
||||
st.dataframe(pd.DataFrame(network_requests), use_container_width=True)
|
||||
else:
|
||||
st.write("No significant network requests found.")
|
||||
|
||||
# Display Main Thread Work Breakdown
|
||||
st.subheader("Mainthread Work Breakdown")
|
||||
if 'mainthread-work-breakdown' in data['lighthouseResult']['audits']:
|
||||
mainthread_data = [
|
||||
{"Process": item.get("groupLabel", "N/A"), "Duration (ms)": item.get("duration", "N/A")}
|
||||
for item in data["lighthouseResult"]["audits"]["mainthread-work-breakdown"]["details"]["items"] if item.get("duration", "N/A") != "N/A"
|
||||
]
|
||||
if mainthread_data:
|
||||
fig = px.bar(pd.DataFrame(mainthread_data), x="Process", y="Duration (ms)", title="Main Thread Work Breakdown", labels={"Process": "Process", "Duration (ms)": "Duration (ms)"})
|
||||
st.plotly_chart(fig, use_container_width=True)
|
||||
else:
|
||||
st.write("No significant main thread work breakdown data found.")
|
||||
|
||||
# Display other metrics
|
||||
metrics = [
|
||||
("Use of Passive Event Listeners", 'uses-passive-event-listeners', ["URL", "Code Line"]),
|
||||
("DOM Size", 'dom-size', ["Score", "DOM Size"]),
|
||||
("Offscreen Images", 'offscreen-images', ["URL", "Total Bytes", "Wasted Bytes", "Wasted Percentage"]),
|
||||
("Critical Request Chains", 'critical-request-chains', ["URL", "Start Time", "End Time", "Transfer Size", "Chain"]),
|
||||
("Total Bytes Weight", 'total-byte-weight', ["URL", "Total Bytes"]),
|
||||
("Render Blocking Resources", 'render-blocking-resources', ["URL", "Total Bytes", "Wasted Milliseconds"]),
|
||||
("Use of Rel Preload", 'uses-rel-preload', ["URL", "Wasted Milliseconds"])
|
||||
]
|
||||
|
||||
for metric_title, audit_key, columns in metrics:
|
||||
st.subheader(metric_title)
|
||||
if audit_key in data['lighthouseResult']['audits']:
|
||||
details = data['lighthouseResult']['audits'][audit_key].get("details", {}).get("items", [])
|
||||
if details:
|
||||
st.table(pd.DataFrame(details, columns=columns))
|
||||
else:
|
||||
st.write(f"No significant {metric_title.lower()} data found.")
|
||||
|
||||
def google_pagespeed_insights():
|
||||
st.markdown("<h1 style='text-align: center; color: #1565C0;'>PageSpeed Insights Analyzer</h1>", unsafe_allow_html=True)
|
||||
st.markdown("<h3 style='text-align: center;'>Get detailed insights into your website's performance! Powered by Google PageSpeed Insights <a href='https://developer.chrome.com/docs/lighthouse/overview/'>[Learn More]</a></h3>", unsafe_allow_html=True)
|
||||
|
||||
# User Input
|
||||
with st.form("pagespeed_form"):
|
||||
url = st.text_input("Enter Website URL", placeholder="https://www.example.com")
|
||||
api_key = st.text_input("Enter Google API Key (Optional)", placeholder="Your API Key", help="Get your API key here: [https://developers.google.com/speed/docs/insights/v5/get-started#key]")
|
||||
device = st.selectbox("Choose Device", ["Mobile", "Desktop"])
|
||||
locale = st.selectbox("Choose Locale", ["en", "fr", "es", "de", "ja"])
|
||||
categories = st.multiselect("Select Categories to Analyze", ['PERFORMANCE', 'ACCESSIBILITY', 'BEST_PRACTICES', 'SEO'], default=['PERFORMANCE', 'ACCESSIBILITY', 'BEST_PRACTICES', 'SEO'])
|
||||
|
||||
submitted = st.form_submit_button("Analyze")
|
||||
|
||||
if submitted:
|
||||
if not url:
|
||||
st.error("Please provide the website URL.")
|
||||
else:
|
||||
strategy = 'mobile' if device == "Mobile" else 'desktop'
|
||||
data = run_pagespeed(url, api_key, strategy=strategy, locale=locale)
|
||||
if data:
|
||||
display_results(data)
|
||||
else:
|
||||
st.error("Failed to retrieve PageSpeed Insights data.")
|
||||
864
ToBeMigrated/ai_seo_tools/google_search_console_integration.py
Normal file
@@ -0,0 +1,864 @@
|
||||
"""
|
||||
Google Search Console Integration for Enterprise SEO
|
||||
|
||||
Connects GSC data with AI-powered content strategy and keyword intelligence.
|
||||
Provides enterprise-level search performance insights and content recommendations.
|
||||
"""
|
||||
|
||||
import streamlit as st
|
||||
import pandas as pd
|
||||
import numpy as np
|
||||
from typing import Dict, Any, List, Optional, Tuple
|
||||
from datetime import datetime, timedelta
|
||||
import json
|
||||
from loguru import logger
|
||||
import plotly.express as px
|
||||
import plotly.graph_objects as go
|
||||
from plotly.subplots import make_subplots
|
||||
|
||||
# Import AI modules
|
||||
from ..gpt_providers.text_generation.main_text_generation import llm_text_gen
|
||||
|
||||
|
||||
class GoogleSearchConsoleAnalyzer:
|
||||
"""
|
||||
Enterprise Google Search Console analyzer with AI-powered insights.
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
"""Initialize the GSC analyzer."""
|
||||
self.gsc_client = None # Will be initialized when credentials are provided
|
||||
logger.info("Google Search Console Analyzer initialized")
|
||||
|
||||
def analyze_search_performance(self, site_url: str, date_range: int = 90) -> Dict[str, Any]:
|
||||
"""
|
||||
Analyze comprehensive search performance from GSC data.
|
||||
|
||||
Args:
|
||||
site_url: Website URL registered in GSC
|
||||
date_range: Number of days to analyze (default 90)
|
||||
|
||||
Returns:
|
||||
Comprehensive search performance analysis
|
||||
"""
|
||||
try:
|
||||
st.info("📊 Analyzing Google Search Console data...")
|
||||
|
||||
# Simulate GSC data for demonstration (replace with actual GSC API calls)
|
||||
search_data = self._get_mock_gsc_data(site_url, date_range)
|
||||
|
||||
# Perform comprehensive analysis
|
||||
analysis_results = {
|
||||
'site_url': site_url,
|
||||
'analysis_period': f"Last {date_range} days",
|
||||
'analysis_timestamp': datetime.utcnow().isoformat(),
|
||||
'performance_overview': self._analyze_performance_overview(search_data),
|
||||
'keyword_analysis': self._analyze_keyword_performance(search_data),
|
||||
'page_analysis': self._analyze_page_performance(search_data),
|
||||
'content_opportunities': self._identify_content_opportunities(search_data),
|
||||
'technical_insights': self._analyze_technical_seo_signals(search_data),
|
||||
'competitive_analysis': self._analyze_competitive_position(search_data),
|
||||
'ai_recommendations': self._generate_ai_recommendations(search_data)
|
||||
}
|
||||
|
||||
return analysis_results
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Error analyzing search performance: {str(e)}"
|
||||
logger.error(error_msg, exc_info=True)
|
||||
return {'error': error_msg}
|
||||
|
||||
def _get_mock_gsc_data(self, site_url: str, days: int) -> Dict[str, pd.DataFrame]:
|
||||
"""
|
||||
Generate mock GSC data for demonstration.
|
||||
In production, this would fetch real data from GSC API.
|
||||
"""
|
||||
# Generate mock keyword data
|
||||
keywords_data = []
|
||||
sample_keywords = [
|
||||
"AI content creation", "SEO tools", "content optimization", "blog writing AI",
|
||||
"meta description generator", "keyword research", "technical SEO", "content strategy",
|
||||
"on-page optimization", "SERP analysis", "content gap analysis", "SEO audit"
|
||||
]
|
||||
|
||||
for keyword in sample_keywords:
|
||||
# Generate realistic performance data
|
||||
impressions = np.random.randint(100, 10000)
|
||||
clicks = int(impressions * np.random.uniform(0.02, 0.15)) # CTR between 2-15%
|
||||
position = np.random.uniform(3, 25)
|
||||
|
||||
keywords_data.append({
|
||||
'keyword': keyword,
|
||||
'impressions': impressions,
|
||||
'clicks': clicks,
|
||||
'ctr': (clicks / impressions) * 100,
|
||||
'position': position
|
||||
})
|
||||
|
||||
# Generate mock page data
|
||||
pages_data = []
|
||||
sample_pages = [
|
||||
"/blog/ai-content-creation-guide", "/tools/seo-analyzer", "/features/content-optimization",
|
||||
"/blog/technical-seo-checklist", "/tools/keyword-research", "/blog/content-strategy-2024",
|
||||
"/tools/meta-description-generator", "/blog/on-page-seo-guide", "/features/enterprise-seo"
|
||||
]
|
||||
|
||||
for page in sample_pages:
|
||||
impressions = np.random.randint(500, 5000)
|
||||
clicks = int(impressions * np.random.uniform(0.03, 0.12))
|
||||
position = np.random.uniform(5, 20)
|
||||
|
||||
pages_data.append({
|
||||
'page': page,
|
||||
'impressions': impressions,
|
||||
'clicks': clicks,
|
||||
'ctr': (clicks / impressions) * 100,
|
||||
'position': position
|
||||
})
|
||||
|
||||
# Generate time series data
|
||||
time_series_data = []
|
||||
for i in range(days):
|
||||
date = datetime.now() - timedelta(days=i)
|
||||
daily_clicks = np.random.randint(50, 500)
|
||||
daily_impressions = np.random.randint(1000, 8000)
|
||||
|
||||
time_series_data.append({
|
||||
'date': date.strftime('%Y-%m-%d'),
|
||||
'clicks': daily_clicks,
|
||||
'impressions': daily_impressions,
|
||||
'ctr': (daily_clicks / daily_impressions) * 100,
|
||||
'position': np.random.uniform(8, 15)
|
||||
})
|
||||
|
||||
return {
|
||||
'keywords': pd.DataFrame(keywords_data),
|
||||
'pages': pd.DataFrame(pages_data),
|
||||
'time_series': pd.DataFrame(time_series_data)
|
||||
}
|
||||
|
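For reference, a minimal sketch of what replacing the mock with a real GSC fetch could look like, assuming the `google-api-python-client` and `google-auth` packages, a service account that has been added as a user on the property, and an assumed credentials filename.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "gsc_service_account.json", scopes=SCOPES  # assumed filename
)
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl=site_url,  # the property URL passed into analyze_search_performance
    body={
        "startDate": "2024-01-01",
        "endDate": "2024-03-31",
        "dimensions": ["query"],
        "rowLimit": 1000,
    },
).execute()

# GSC returns ctr as a fraction; multiply by 100 to match the mock data frame.
keywords_df = pd.DataFrame(
    {
        "keyword": row["keys"][0],
        "clicks": row["clicks"],
        "impressions": row["impressions"],
        "ctr": row["ctr"] * 100,
        "position": row["position"],
    }
    for row in response.get("rows", [])
)
```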
||||
def _analyze_performance_overview(self, search_data: Dict[str, pd.DataFrame]) -> Dict[str, Any]:
|
||||
"""Analyze overall search performance metrics."""
|
||||
keywords_df = search_data['keywords']
|
||||
time_series_df = search_data['time_series']
|
||||
|
||||
# Calculate totals and averages
|
||||
total_clicks = keywords_df['clicks'].sum()
|
||||
total_impressions = keywords_df['impressions'].sum()
|
||||
avg_ctr = (total_clicks / total_impressions) * 100 if total_impressions > 0 else 0
|
||||
avg_position = keywords_df['position'].mean()
|
||||
|
||||
# Calculate trends
|
||||
recent_clicks = time_series_df.head(7)['clicks'].mean()
|
||||
previous_clicks = time_series_df.tail(7)['clicks'].mean()
|
||||
clicks_trend = ((recent_clicks - previous_clicks) / previous_clicks * 100) if previous_clicks > 0 else 0
|
||||
|
||||
recent_impressions = time_series_df.head(7)['impressions'].mean()
|
||||
previous_impressions = time_series_df.tail(7)['impressions'].mean()
|
||||
impressions_trend = ((recent_impressions - previous_impressions) / previous_impressions * 100) if previous_impressions > 0 else 0
|
||||
|
||||
# Top performing keywords
|
||||
top_keywords = keywords_df.nlargest(5, 'clicks')[['keyword', 'clicks', 'impressions', 'position']].to_dict('records')
|
||||
|
||||
# Opportunity keywords (high impressions, low CTR)
|
||||
opportunity_keywords = keywords_df[
|
||||
(keywords_df['impressions'] > keywords_df['impressions'].median()) &
|
||||
(keywords_df['ctr'] < 3)
|
||||
].nlargest(5, 'impressions')[['keyword', 'impressions', 'ctr', 'position']].to_dict('records')
|
||||
|
||||
return {
|
||||
'total_clicks': int(total_clicks),
|
||||
'total_impressions': int(total_impressions),
|
||||
'avg_ctr': round(avg_ctr, 2),
|
||||
'avg_position': round(avg_position, 1),
|
||||
'clicks_trend': round(clicks_trend, 1),
|
||||
'impressions_trend': round(impressions_trend, 1),
|
||||
'top_keywords': top_keywords,
|
||||
'opportunity_keywords': opportunity_keywords
|
||||
}
|
||||
|
||||
def _analyze_keyword_performance(self, search_data: Dict[str, pd.DataFrame]) -> Dict[str, Any]:
|
||||
"""Analyze keyword performance and opportunities."""
|
||||
keywords_df = search_data['keywords']
|
||||
|
||||
# Keyword categorization
|
||||
high_volume_keywords = keywords_df[keywords_df['impressions'] > keywords_df['impressions'].quantile(0.8)]
|
||||
ranking_keywords = keywords_df[keywords_df['position'] <= 10]  # keywords already ranking on page one
|
||||
optimization_opportunities = keywords_df[
|
||||
(keywords_df['position'] > 10) &
|
||||
(keywords_df['position'] <= 20) &
|
||||
(keywords_df['impressions'] > 100)
|
||||
]
|
||||
|
||||
# Content gap analysis
|
||||
missing_keywords = self._identify_missing_keywords(keywords_df)
|
||||
|
||||
# Seasonal trends analysis
|
||||
seasonal_insights = self._analyze_seasonal_trends(keywords_df)
|
||||
|
||||
return {
|
||||
'total_keywords': len(keywords_df),
|
||||
'high_volume_keywords': high_volume_keywords.to_dict('records'),
|
||||
'ranking_keywords': ranking_keywords.to_dict('records'),
|
||||
'optimization_opportunities': optimization_opportunities.to_dict('records'),
|
||||
'missing_keywords': missing_keywords,
|
||||
'seasonal_insights': seasonal_insights,
|
||||
'keyword_distribution': {
|
||||
'positions_1_3': len(keywords_df[keywords_df['position'] <= 3]),
|
||||
'positions_4_10': len(keywords_df[(keywords_df['position'] > 3) & (keywords_df['position'] <= 10)]),
|
||||
'positions_11_20': len(keywords_df[(keywords_df['position'] > 10) & (keywords_df['position'] <= 20)]),
|
||||
'positions_21_plus': len(keywords_df[keywords_df['position'] > 20])
|
||||
}
|
||||
}
|
||||
|
||||
def _analyze_page_performance(self, search_data: Dict[str, pd.DataFrame]) -> Dict[str, Any]:
|
||||
"""Analyze page-level performance."""
|
||||
pages_df = search_data['pages']
|
||||
|
||||
# Top performing pages
|
||||
top_pages = pages_df.nlargest(10, 'clicks')
|
||||
|
||||
# Underperforming pages (high impressions, low clicks)
|
||||
underperforming_pages = pages_df[
|
||||
(pages_df['impressions'] > pages_df['impressions'].median()) &
|
||||
(pages_df['ctr'] < 2)
|
||||
].nlargest(5, 'impressions')
|
||||
|
||||
# Page type analysis
|
||||
page_types = self._categorize_pages(pages_df)
|
||||
|
||||
return {
|
||||
'top_pages': top_pages.to_dict('records'),
|
||||
'underperforming_pages': underperforming_pages.to_dict('records'),
|
||||
'page_types_performance': page_types,
|
||||
'total_pages': len(pages_df)
|
||||
}
|
||||
|
||||
def _identify_content_opportunities(self, search_data: Dict[str, pd.DataFrame]) -> List[Dict[str, Any]]:
|
||||
"""Identify content creation and optimization opportunities."""
|
||||
keywords_df = search_data['keywords']
|
||||
|
||||
opportunities = []
|
||||
|
||||
# High impression, low CTR keywords need content optimization
|
||||
low_ctr_keywords = keywords_df[
|
||||
(keywords_df['impressions'] > 500) &
|
||||
(keywords_df['ctr'] < 3)
|
||||
]
|
||||
|
||||
for _, keyword_row in low_ctr_keywords.iterrows():
|
||||
opportunities.append({
|
||||
'type': 'Content Optimization',
|
||||
'keyword': keyword_row['keyword'],
|
||||
'opportunity': f"Optimize existing content for '{keyword_row['keyword']}' to improve CTR from {keyword_row['ctr']:.1f}%",
|
||||
'potential_impact': 'High',
|
||||
'current_position': round(keyword_row['position'], 1),
|
||||
'impressions': int(keyword_row['impressions']),
|
||||
'priority': 'High' if keyword_row['impressions'] > 1000 else 'Medium'
|
||||
})
|
||||
|
||||
# Position 11-20 keywords need content improvement
|
||||
position_11_20 = keywords_df[
|
||||
(keywords_df['position'] > 10) &
|
||||
(keywords_df['position'] <= 20) &
|
||||
(keywords_df['impressions'] > 100)
|
||||
]
|
||||
|
||||
for _, keyword_row in position_11_20.iterrows():
|
||||
opportunities.append({
|
||||
'type': 'Content Enhancement',
|
||||
'keyword': keyword_row['keyword'],
|
||||
'opportunity': f"Enhance content for '{keyword_row['keyword']}' to move from position {keyword_row['position']:.1f} to first page",
|
||||
'potential_impact': 'Medium',
|
||||
'current_position': round(keyword_row['position'], 1),
|
||||
'impressions': int(keyword_row['impressions']),
|
||||
'priority': 'Medium'
|
||||
})
|
||||
|
||||
# Sort by potential impact and impressions
|
||||
opportunities = sorted(opportunities, key=lambda x: x['impressions'], reverse=True)
|
||||
|
||||
return opportunities[:10] # Top 10 opportunities
|
||||
|
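For example, a query with 2,400 impressions, a 1.8% CTR and an average position of 8.3 satisfies the first filter above and is flagged as a 'Content Optimization' opportunity with High priority (impressions above 1,000), while a query at position 14 with 300 impressions falls through to the 'Content Enhancement' bucket at Medium priority.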
||||
def _analyze_technical_seo_signals(self, search_data: Dict[str, pd.DataFrame]) -> Dict[str, Any]:
|
||||
"""Analyze technical SEO signals from search data."""
|
||||
keywords_df = search_data['keywords']
|
||||
pages_df = search_data['pages']
|
||||
|
||||
# Analyze performance patterns that might indicate technical issues
|
||||
technical_insights = {
|
||||
'crawl_issues_indicators': [],
|
||||
'mobile_performance': {},
|
||||
'core_web_vitals_impact': {},
|
||||
'indexing_insights': {}
|
||||
}
|
||||
|
||||
# Identify potential crawl issues
|
||||
very_low_impressions = keywords_df[keywords_df['impressions'] < 10]
|
||||
if len(very_low_impressions) > len(keywords_df) * 0.3: # If 30%+ have very low impressions
|
||||
technical_insights['crawl_issues_indicators'].append(
|
||||
"High percentage of keywords with very low impressions may indicate crawl or indexing issues"
|
||||
)
|
||||
|
||||
# Mobile performance indicators
|
||||
avg_mobile_position = keywords_df['position'].mean() # In real implementation, this would be mobile-specific
|
||||
technical_insights['mobile_performance'] = {
|
||||
'avg_mobile_position': round(avg_mobile_position, 1),
|
||||
'mobile_optimization_needed': avg_mobile_position > 15
|
||||
}
|
||||
|
||||
return technical_insights
|
||||
|
||||
def _analyze_competitive_position(self, search_data: Dict[str, pd.DataFrame]) -> Dict[str, Any]:
|
||||
"""Analyze competitive positioning based on search data."""
|
||||
keywords_df = search_data['keywords']
|
||||
|
||||
# Calculate competitive metrics
|
||||
dominant_keywords = len(keywords_df[keywords_df['position'] <= 3])
|
||||
competitive_keywords = len(keywords_df[(keywords_df['position'] > 3) & (keywords_df['position'] <= 10)])
|
||||
losing_keywords = len(keywords_df[keywords_df['position'] > 10])
|
||||
|
||||
competitive_strength = (dominant_keywords * 3 + competitive_keywords * 2 + losing_keywords * 1) / len(keywords_df)
|
||||
|
||||
return {
|
||||
'dominant_keywords': dominant_keywords,
|
||||
'competitive_keywords': competitive_keywords,
|
||||
'losing_keywords': losing_keywords,
|
||||
'competitive_strength_score': round(competitive_strength, 2),
|
||||
'market_position': self._determine_market_position(competitive_strength)
|
||||
}
|
||||
|
||||
def _generate_ai_recommendations(self, search_data: Dict[str, pd.DataFrame]) -> Dict[str, Any]:
|
||||
"""Generate AI-powered recommendations based on search data."""
|
||||
try:
|
||||
keywords_df = search_data['keywords']
|
||||
pages_df = search_data['pages']
|
||||
|
||||
# Prepare data summary for AI analysis
|
||||
top_keywords = keywords_df.nlargest(5, 'impressions')['keyword'].tolist()
|
||||
avg_position = keywords_df['position'].mean()
|
||||
total_impressions = keywords_df['impressions'].sum()
|
||||
total_clicks = keywords_df['clicks'].sum()
|
||||
avg_ctr = (total_clicks / total_impressions * 100) if total_impressions > 0 else 0
|
||||
|
||||
# Create comprehensive prompt for AI analysis
|
||||
ai_prompt = f"""
|
||||
Analyze this Google Search Console data and provide strategic SEO recommendations:
|
||||
|
||||
SEARCH PERFORMANCE SUMMARY:
|
||||
- Total Keywords Tracked: {len(keywords_df)}
|
||||
- Total Impressions: {total_impressions:,}
|
||||
- Total Clicks: {total_clicks:,}
|
||||
- Average CTR: {avg_ctr:.2f}%
|
||||
- Average Position: {avg_position:.1f}
|
||||
|
||||
TOP PERFORMING KEYWORDS:
|
||||
{', '.join(top_keywords)}
|
||||
|
||||
PERFORMANCE DISTRIBUTION:
|
||||
- Keywords ranking 1-3: {len(keywords_df[keywords_df['position'] <= 3])}
|
||||
- Keywords ranking 4-10: {len(keywords_df[(keywords_df['position'] > 3) & (keywords_df['position'] <= 10)])}
|
||||
- Keywords ranking 11-20: {len(keywords_df[(keywords_df['position'] > 10) & (keywords_df['position'] <= 20)])}
|
||||
- Keywords ranking 21+: {len(keywords_df[keywords_df['position'] > 20])}
|
||||
|
||||
TOP PAGES BY TRAFFIC:
|
||||
{pages_df.nlargest(3, 'clicks')['page'].tolist()}
|
||||
|
||||
Based on this data, provide:
|
||||
|
||||
1. IMMEDIATE OPTIMIZATION OPPORTUNITIES (0-30 days):
|
||||
- Specific keywords to optimize for better CTR
|
||||
- Pages that need content updates
|
||||
- Quick technical wins
|
||||
|
||||
2. CONTENT STRATEGY RECOMMENDATIONS (1-3 months):
|
||||
- New content topics based on keyword gaps
|
||||
- Content enhancement priorities
|
||||
- Internal linking opportunities
|
||||
|
||||
3. LONG-TERM SEO STRATEGY (3-12 months):
|
||||
- Market expansion opportunities
|
||||
- Authority building topics
|
||||
- Competitive positioning strategies
|
||||
|
||||
4. TECHNICAL SEO PRIORITIES:
|
||||
- Performance issues affecting rankings
|
||||
- Mobile optimization needs
|
||||
- Core Web Vitals improvements
|
||||
|
||||
Provide specific, actionable recommendations with expected impact and priority levels.
|
||||
"""
|
||||
|
||||
ai_analysis = llm_text_gen(
|
||||
ai_prompt,
|
||||
system_prompt="You are an enterprise SEO strategist analyzing Google Search Console data. Provide specific, data-driven recommendations that will improve search performance."
|
||||
)
|
||||
|
||||
return {
|
||||
'full_analysis': ai_analysis,
|
||||
'immediate_opportunities': self._extract_immediate_opportunities(ai_analysis),
|
||||
'content_strategy': self._extract_content_strategy(ai_analysis),
|
||||
'long_term_strategy': self._extract_long_term_strategy(ai_analysis),
|
||||
'technical_priorities': self._extract_technical_priorities(ai_analysis)
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"AI recommendations error: {str(e)}")
|
||||
return {'error': str(e)}
|
||||
|
||||
# Utility methods
|
||||
def _identify_missing_keywords(self, keywords_df: pd.DataFrame) -> List[str]:
|
||||
"""Identify potential missing keywords based on current keyword performance."""
|
||||
# In a real implementation, this would use keyword research APIs
|
||||
existing_keywords = set(keywords_df['keyword'].str.lower())
|
||||
|
||||
potential_keywords = [
|
||||
"AI writing tools", "content automation", "SEO content generator",
|
||||
"blog post optimizer", "meta tag generator", "keyword analyzer"
|
||||
]
|
||||
|
||||
missing = [kw for kw in potential_keywords if kw.lower() not in existing_keywords]
|
||||
return missing[:5]
|
||||
|
||||
def _analyze_seasonal_trends(self, keywords_df: pd.DataFrame) -> Dict[str, Any]:
|
||||
"""Analyze seasonal trends in keyword performance."""
|
||||
# Placeholder for seasonal analysis
|
||||
return {
|
||||
'seasonal_keywords': [],
|
||||
'trend_analysis': "Seasonal analysis requires historical data spanning multiple seasons"
|
||||
}
|
||||
|
||||
def _categorize_pages(self, pages_df: pd.DataFrame) -> Dict[str, Any]:
|
||||
"""Categorize pages by type and analyze performance."""
|
||||
page_types = {
|
||||
'Blog Posts': {'count': 0, 'total_clicks': 0, 'avg_position': 0},
|
||||
'Product Pages': {'count': 0, 'total_clicks': 0, 'avg_position': 0},
|
||||
'Tool Pages': {'count': 0, 'total_clicks': 0, 'avg_position': 0},
|
||||
'Other': {'count': 0, 'total_clicks': 0, 'avg_position': 0}
|
||||
}
|
||||
|
||||
for _, page_row in pages_df.iterrows():
|
||||
page_url = page_row['page']
|
||||
clicks = page_row['clicks']
|
||||
position = page_row['position']
|
||||
|
||||
if '/blog/' in page_url:
|
||||
page_types['Blog Posts']['count'] += 1
|
||||
page_types['Blog Posts']['total_clicks'] += clicks
|
||||
page_types['Blog Posts']['avg_position'] += position
|
||||
elif '/tools/' in page_url:
|
||||
page_types['Tool Pages']['count'] += 1
|
||||
page_types['Tool Pages']['total_clicks'] += clicks
|
||||
page_types['Tool Pages']['avg_position'] += position
|
||||
elif '/features/' in page_url or '/product/' in page_url:
|
||||
page_types['Product Pages']['count'] += 1
|
||||
page_types['Product Pages']['total_clicks'] += clicks
|
||||
page_types['Product Pages']['avg_position'] += position
|
||||
else:
|
||||
page_types['Other']['count'] += 1
|
||||
page_types['Other']['total_clicks'] += clicks
|
||||
page_types['Other']['avg_position'] += position
|
||||
|
||||
# Calculate averages
|
||||
for page_type in page_types:
|
||||
if page_types[page_type]['count'] > 0:
|
||||
page_types[page_type]['avg_position'] = round(
|
||||
page_types[page_type]['avg_position'] / page_types[page_type]['count'], 1
|
||||
)
|
||||
|
||||
return page_types
|
||||
|
||||
def _determine_market_position(self, competitive_strength: float) -> str:
|
||||
"""Determine market position based on competitive strength score."""
|
||||
if competitive_strength >= 2.5:
|
||||
return "Market Leader"
|
||||
elif competitive_strength >= 2.0:
|
||||
return "Strong Competitor"
|
||||
elif competitive_strength >= 1.5:
|
||||
return "Emerging Player"
|
||||
else:
|
||||
return "Challenger"
|
||||
|
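As a worked example of the scoring above: 3 keywords in positions 1-3, 4 in positions 4-10 and 5 beyond position 10 give a competitive strength of (3×3 + 4×2 + 5×1) / 12 ≈ 1.83, which `_determine_market_position` maps to "Emerging Player".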
||||
def _extract_immediate_opportunities(self, analysis: str) -> List[str]:
|
||||
"""Extract immediate opportunities from AI analysis."""
|
||||
lines = analysis.split('\n')
|
||||
opportunities = []
|
||||
in_immediate_section = False
|
||||
|
||||
for line in lines:
|
||||
if 'IMMEDIATE OPTIMIZATION' in line.upper():
|
||||
in_immediate_section = True
|
||||
continue
|
||||
elif 'CONTENT STRATEGY' in line.upper():
|
||||
in_immediate_section = False
|
||||
continue
|
||||
|
||||
if in_immediate_section and line.strip().startswith('-'):
|
||||
opportunities.append(line.strip().lstrip('- '))
|
||||
|
||||
return opportunities[:5]
|
||||
|
||||
def _extract_content_strategy(self, analysis: str) -> List[str]:
|
||||
"""Extract content strategy recommendations from AI analysis."""
|
||||
return ["Develop topic clusters", "Create comparison content", "Build FAQ sections"]
|
||||
|
||||
def _extract_long_term_strategy(self, analysis: str) -> List[str]:
|
||||
"""Extract long-term strategy from AI analysis."""
|
||||
return ["Build domain authority", "Expand to new markets", "Develop thought leadership content"]
|
||||
|
||||
def _extract_technical_priorities(self, analysis: str) -> List[str]:
|
||||
"""Extract technical priorities from AI analysis."""
|
||||
return ["Improve page speed", "Optimize mobile experience", "Fix crawl errors"]
|
||||
|
||||
|
||||
def render_gsc_integration():
|
||||
"""Render the Google Search Console integration interface."""
|
||||
|
||||
st.title("📊 Google Search Console Intelligence")
|
||||
st.markdown("**AI-powered insights from your Google Search Console data**")
|
||||
|
||||
# Initialize analyzer
|
||||
if 'gsc_analyzer' not in st.session_state:
|
||||
st.session_state.gsc_analyzer = GoogleSearchConsoleAnalyzer()
|
||||
|
||||
analyzer = st.session_state.gsc_analyzer
|
||||
|
||||
# Configuration section
|
||||
st.header("🔧 Configuration")
|
||||
|
||||
with st.expander("📋 Setup Instructions", expanded=False):
|
||||
st.markdown("""
|
||||
### Setting up Google Search Console Integration
|
||||
|
||||
1. **Verify your website** in Google Search Console
|
||||
2. **Enable the Search Console API** in Google Cloud Console
|
||||
3. **Create service account credentials** and download the JSON file
|
||||
4. **Upload credentials** using the file uploader below
|
||||
|
||||
📚 [Detailed Setup Guide](https://developers.google.com/webmaster-tools/search-console-api-original/v3/prereqs)
|
||||
""")
|
||||
|
||||
# Input form
|
||||
with st.form("gsc_analysis_form"):
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
site_url = st.text_input(
|
||||
"Site URL",
|
||||
value="https://example.com",
|
||||
help="Enter your website URL as registered in Google Search Console"
|
||||
)
|
||||
|
||||
date_range = st.selectbox(
|
||||
"Analysis Period",
|
||||
[30, 60, 90, 180],
|
||||
index=2,
|
||||
help="Number of days to analyze"
|
||||
)
|
||||
|
||||
with col2:
|
||||
# Credentials upload (placeholder)
|
||||
credentials_file = st.file_uploader(
|
||||
"GSC API Credentials (JSON)",
|
||||
type=['json'],
|
||||
help="Upload your Google Search Console API credentials file"
|
||||
)
|
||||
|
||||
demo_mode = st.checkbox(
|
||||
"Demo Mode",
|
||||
value=True,
|
||||
help="Use demo data for testing (no credentials needed)"
|
||||
)
|
||||
|
||||
submit_analysis = st.form_submit_button("📊 Analyze Search Performance", type="primary")
|
||||
|
||||
# Process analysis
|
||||
if submit_analysis:
|
||||
if site_url and (demo_mode or credentials_file):
|
||||
with st.spinner("📊 Analyzing Google Search Console data..."):
|
||||
analysis_results = analyzer.analyze_search_performance(site_url, date_range)
|
||||
|
||||
if 'error' not in analysis_results:
|
||||
st.success("✅ Search Console analysis completed!")
|
||||
|
||||
# Store results in session state
|
||||
st.session_state.gsc_results = analysis_results
|
||||
|
||||
# Display results
|
||||
render_gsc_results_dashboard(analysis_results)
|
||||
else:
|
||||
st.error(f"❌ Analysis failed: {analysis_results['error']}")
|
||||
else:
|
||||
st.warning("⚠️ Please enter site URL and upload credentials (or enable demo mode).")
|
||||
|
||||
# Show previous results if available
|
||||
elif 'gsc_results' in st.session_state:
|
||||
st.info("📊 Showing previous analysis results")
|
||||
render_gsc_results_dashboard(st.session_state.gsc_results)
|
||||
|
||||
|
||||
def render_gsc_results_dashboard(results: Dict[str, Any]):
|
||||
"""Render comprehensive GSC analysis results."""
|
||||
|
||||
# Performance overview
|
||||
st.header("📊 Search Performance Overview")
|
||||
|
||||
overview = results['performance_overview']
|
||||
|
||||
col1, col2, col3, col4 = st.columns(4)
|
||||
|
||||
with col1:
|
||||
st.metric(
|
||||
"Total Clicks",
|
||||
f"{overview['total_clicks']:,}",
|
||||
delta=f"{overview['clicks_trend']:+.1f}%" if overview['clicks_trend'] != 0 else None
|
||||
)
|
||||
|
||||
with col2:
|
||||
st.metric(
|
||||
"Total Impressions",
|
||||
f"{overview['total_impressions']:,}",
|
||||
delta=f"{overview['impressions_trend']:+.1f}%" if overview['impressions_trend'] != 0 else None
|
||||
)
|
||||
|
||||
with col3:
|
||||
st.metric(
|
||||
"Average CTR",
|
||||
f"{overview['avg_ctr']:.2f}%"
|
||||
)
|
||||
|
||||
with col4:
|
||||
st.metric(
|
||||
"Average Position",
|
||||
f"{overview['avg_position']:.1f}"
|
||||
)
|
||||
|
||||
# Content opportunities (Most important section)
|
||||
st.header("🎯 Content Opportunities")
|
||||
|
||||
opportunities = results['content_opportunities']
|
||||
if opportunities:
|
||||
# Display as interactive table
|
||||
df_opportunities = pd.DataFrame(opportunities)
|
||||
|
||||
st.dataframe(
|
||||
df_opportunities,
|
||||
column_config={
|
||||
"type": "Opportunity Type",
|
||||
"keyword": "Keyword",
|
||||
"opportunity": "Description",
|
||||
"potential_impact": st.column_config.SelectboxColumn(
|
||||
"Impact",
|
||||
options=["High", "Medium", "Low"]
|
||||
),
|
||||
"current_position": st.column_config.NumberColumn(
|
||||
"Current Position",
|
||||
format="%.1f"
|
||||
),
|
||||
"impressions": st.column_config.NumberColumn(
|
||||
"Impressions",
|
||||
format="%d"
|
||||
),
|
||||
"priority": st.column_config.SelectboxColumn(
|
||||
"Priority",
|
||||
options=["High", "Medium", "Low"]
|
||||
)
|
||||
},
|
||||
hide_index=True,
|
||||
use_container_width=True
|
||||
)
|
||||
|
||||
# Detailed analysis tabs
|
||||
tab1, tab2, tab3, tab4, tab5 = st.tabs([
|
||||
"🤖 AI Insights",
|
||||
"🎯 Keyword Analysis",
|
||||
"📄 Page Performance",
|
||||
"🏆 Competitive Position",
|
||||
"🔧 Technical Signals"
|
||||
])
|
||||
|
||||
with tab1:
|
||||
ai_recs = results.get('ai_recommendations', {})
|
||||
if ai_recs and 'error' not in ai_recs:
|
||||
st.subheader("AI-Powered Recommendations")
|
||||
|
||||
# Immediate opportunities
|
||||
immediate_ops = ai_recs.get('immediate_opportunities', [])
|
||||
if immediate_ops:
|
||||
st.markdown("#### 🚀 Immediate Optimizations (0-30 days)")
|
||||
for op in immediate_ops:
|
||||
st.success(f"✅ {op}")
|
||||
|
||||
# Content strategy
|
||||
content_strategy = ai_recs.get('content_strategy', [])
|
||||
if content_strategy:
|
||||
st.markdown("#### 📝 Content Strategy (1-3 months)")
|
||||
for strategy in content_strategy:
|
||||
st.info(f"📋 {strategy}")
|
||||
|
||||
# Full analysis
|
||||
full_analysis = ai_recs.get('full_analysis', '')
|
||||
if full_analysis:
|
||||
with st.expander("🧠 Complete AI Analysis"):
|
||||
st.write(full_analysis)
|
||||
|
||||
with tab2:
|
||||
keyword_analysis = results.get('keyword_analysis', {})
|
||||
if keyword_analysis:
|
||||
st.subheader("Keyword Performance Analysis")
|
||||
|
||||
# Keyword distribution chart
|
||||
dist = keyword_analysis['keyword_distribution']
|
||||
fig = px.pie(
|
||||
values=[dist['positions_1_3'], dist['positions_4_10'], dist['positions_11_20'], dist['positions_21_plus']],
|
||||
names=['Positions 1-3', 'Positions 4-10', 'Positions 11-20', 'Positions 21+'],
|
||||
title="Keyword Position Distribution"
|
||||
)
|
||||
st.plotly_chart(fig, use_container_width=True)
|
||||
|
||||
# High volume keywords
|
||||
high_volume = keyword_analysis.get('high_volume_keywords', [])
|
||||
if high_volume:
|
||||
st.markdown("#### 📈 High Volume Keywords")
|
||||
st.dataframe(pd.DataFrame(high_volume), hide_index=True)
|
||||
|
||||
# Optimization opportunities
|
||||
opt_opportunities = keyword_analysis.get('optimization_opportunities', [])
|
||||
if opt_opportunities:
|
||||
st.markdown("#### 🎯 Optimization Opportunities (Positions 11-20)")
|
||||
st.dataframe(pd.DataFrame(opt_opportunities), hide_index=True)
|
||||
|
||||
with tab3:
|
||||
page_analysis = results.get('page_analysis', {})
|
||||
if page_analysis:
|
||||
st.subheader("Page Performance Analysis")
|
||||
|
||||
# Top pages
|
||||
top_pages = page_analysis.get('top_pages', [])
|
||||
if top_pages:
|
||||
st.markdown("#### 🏆 Top Performing Pages")
|
||||
st.dataframe(pd.DataFrame(top_pages), hide_index=True)
|
||||
|
||||
# Underperforming pages
|
||||
underperforming = page_analysis.get('underperforming_pages', [])
|
||||
if underperforming:
|
||||
st.markdown("#### ⚠️ Underperforming Pages (High Impressions, Low CTR)")
|
||||
st.dataframe(pd.DataFrame(underperforming), hide_index=True)
|
||||
|
||||
# Page types performance
|
||||
page_types = page_analysis.get('page_types_performance', {})
|
||||
if page_types:
|
||||
st.markdown("#### 📊 Performance by Page Type")
|
||||
|
||||
# Create visualization
|
||||
types = []
|
||||
clicks = []
|
||||
positions = []
|
||||
|
||||
for page_type, data in page_types.items():
|
||||
if data['count'] > 0:
|
||||
types.append(page_type)
|
||||
clicks.append(data['total_clicks'])
|
||||
positions.append(data['avg_position'])
|
||||
|
||||
if types:
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
fig_clicks = px.bar(x=types, y=clicks, title="Total Clicks by Page Type")
|
||||
st.plotly_chart(fig_clicks, use_container_width=True)
|
||||
|
||||
with col2:
|
||||
fig_position = px.bar(x=types, y=positions, title="Average Position by Page Type")
|
||||
st.plotly_chart(fig_position, use_container_width=True)
|
||||
|
||||
with tab4:
|
||||
competitive_analysis = results.get('competitive_analysis', {})
|
||||
if competitive_analysis:
|
||||
st.subheader("Competitive Position Analysis")
|
||||
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
st.metric("Market Position", competitive_analysis['market_position'])
|
||||
st.metric("Competitive Strength", f"{competitive_analysis['competitive_strength_score']}/3.0")
|
||||
|
||||
with col2:
|
||||
# Competitive distribution
|
||||
comp_data = {
|
||||
'Dominant (1-3)': competitive_analysis['dominant_keywords'],
|
||||
'Competitive (4-10)': competitive_analysis['competitive_keywords'],
|
||||
'Losing (11+)': competitive_analysis['losing_keywords']
|
||||
}
|
||||
|
||||
fig = px.bar(
|
||||
x=list(comp_data.keys()),
|
||||
y=list(comp_data.values()),
|
||||
title="Keyword Competitive Position"
|
||||
)
|
||||
st.plotly_chart(fig, use_container_width=True)
|
||||
|
||||
with tab5:
|
||||
technical_insights = results.get('technical_insights', {})
|
||||
if technical_insights:
|
||||
st.subheader("Technical SEO Signals")
|
||||
|
||||
# Crawl issues indicators
|
||||
crawl_issues = technical_insights.get('crawl_issues_indicators', [])
|
||||
if crawl_issues:
|
||||
st.markdown("#### ⚠️ Potential Issues")
|
||||
for issue in crawl_issues:
|
||||
st.warning(f"🚨 {issue}")
|
||||
|
||||
# Mobile performance
|
||||
mobile_perf = technical_insights.get('mobile_performance', {})
|
||||
if mobile_perf:
|
||||
st.markdown("#### 📱 Mobile Performance")
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
st.metric("Avg Mobile Position", f"{mobile_perf.get('avg_mobile_position', 0):.1f}")
|
||||
|
||||
with col2:
|
||||
if mobile_perf.get('mobile_optimization_needed', False):
|
||||
st.warning("📱 Mobile optimization needed")
|
||||
else:
|
||||
st.success("📱 Mobile performance good")
|
||||
|
||||
# Export functionality
|
||||
st.markdown("---")
|
||||
col1, col2, col3 = st.columns(3)
|
||||
|
||||
with col1:
|
||||
if st.button("📥 Export Full Report", use_container_width=True):
|
||||
report_json = json.dumps(results, indent=2, default=str)
|
||||
st.download_button(
|
||||
label="Download JSON Report",
|
||||
data=report_json,
|
||||
file_name=f"gsc_analysis_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json",
|
||||
mime="application/json"
|
||||
)
|
||||
|
||||
with col2:
|
||||
if st.button("📊 Export Opportunities", use_container_width=True):
|
||||
if opportunities:
|
||||
df_opportunities = pd.DataFrame(opportunities)
|
||||
csv = df_opportunities.to_csv(index=False)
|
||||
st.download_button(
|
||||
label="Download CSV Opportunities",
|
||||
data=csv,
|
||||
file_name=f"content_opportunities_{datetime.now().strftime('%Y%m%d_%H%M%S')}.csv",
|
||||
mime="text/csv"
|
||||
)
|
||||
|
||||
with col3:
|
||||
if st.button("🔄 Refresh Analysis", use_container_width=True):
|
||||
# Clear cached results to force refresh
|
||||
if 'gsc_results' in st.session_state:
|
||||
del st.session_state.gsc_results
|
||||
st.rerun()
|
||||
|
||||
|
||||
# Main execution
|
||||
if __name__ == "__main__":
|
||||
render_gsc_integration()
|
||||
112
ToBeMigrated/ai_seo_tools/image_alt_text_generator.py
Normal file
@@ -0,0 +1,112 @@
|
||||
import streamlit as st
|
||||
import base64
|
||||
import requests
|
||||
from PIL import Image
|
||||
import os
|
||||
|
||||
|
||||
def encode_image(image_path):
|
||||
"""
|
||||
Encodes an image to base64 format.
|
||||
|
||||
Args:
|
||||
image_path (str): Path to the image file.
|
||||
|
||||
Returns:
|
||||
str: Base64 encoded string of the image.
|
||||
|
||||
Raises:
|
||||
ValueError: If the image path is invalid.
|
||||
"""
|
||||
safe_root = os.getenv('SAFE_ROOT_DIRECTORY', '/safe/root/directory') # Use an environment variable for the safe root directory
|
||||
normalized_path = os.path.normpath(image_path)
|
||||
if not normalized_path.startswith(safe_root):
|
||||
raise ValueError("Invalid image path")
|
||||
with open(normalized_path, "rb") as image_file:
|
||||
return base64.b64encode(image_file.read()).decode('utf-8')
|
||||
|
||||
|
||||
def get_image_description(image_path):
|
||||
"""
|
||||
Generates a description for the given image using an external API.
|
||||
|
||||
Args:
|
||||
image_path (str): Path to the image file.
|
||||
|
||||
Returns:
|
||||
str: Description of the image.
|
||||
|
||||
Raises:
|
||||
ValueError: If the image path is invalid.
|
||||
"""
|
||||
safe_root = os.getenv('SAFE_ROOT_DIRECTORY', '/safe/root/directory') # Use an environment variable for the safe root directory
|
||||
normalized_path = os.path.normpath(image_path)
|
||||
if not normalized_path.startswith(safe_root):
|
||||
raise ValueError("Invalid image path")
|
||||
base64_image = encode_image(normalized_path)
|
||||
|
||||
headers = {
|
||||
"Content-Type": "application/json",
|
||||
"Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}"
|
||||
}
|
||||
|
||||
payload = {
|
||||
"model": "gpt-4o-mini",
|
||||
"messages": [
|
||||
{
|
||||
"role": "user",
|
||||
"content": [
|
||||
{
|
||||
"type": "text",
|
||||
"text": """You are an SEO expert specializing in writing optimized Alt text for images.
|
||||
Your goal is to create clear, descriptive, and concise Alt text that accurately represents
|
||||
the content and context of the given image. Make sure your response is optimized for search engines and accessibility."""
|
||||
},
|
||||
{
|
||||
"type": "image_url",
|
||||
"image_url": {
|
||||
"url": f"data:image/jpeg;base64,{base64_image}"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"max_tokens": 300
|
||||
}
|
||||
|
||||
response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
|
||||
response_data = response.json()
|
||||
|
||||
# Extract the content field from the response
|
||||
content = response_data['choices'][0]['message']['content']
|
||||
return content
|
||||
|
||||
|
||||
def alt_text_gen():
|
||||
"""
|
||||
Streamlit app function to generate Alt text for an uploaded image.
|
||||
"""
|
||||
st.title("Image Description Generator")
|
||||
|
||||
image_path = st.text_input("Enter the full path of the image file", help="Provide the full path to a .jpg, .jpeg, or .png image file")
|
||||
|
||||
if image_path:
|
||||
if os.path.exists(image_path) and image_path.lower().endswith(('.jpg', '.jpeg', '.png')):
|
||||
try:
|
||||
image = Image.open(image_path)
|
||||
st.image(image, caption='Uploaded Image', use_column_width=True)
|
||||
|
||||
if st.button("Get Image Alt Text"):
|
||||
with st.spinner("Generating Alt Text..."):
|
||||
try:
|
||||
description = get_image_description(image_path)
|
||||
st.success("Alt Text generated successfully!")
|
||||
st.write("Alt Text:", description)
|
||||
except Exception as e:
|
||||
st.error(f"Error generating description: {e}")
|
||||
except Exception as e:
|
||||
st.error(f"Error processing image: {e}")
|
||||
else:
|
||||
st.error("Please enter a valid image file path ending with .jpg, .jpeg, or .png")
|
||||
else:
|
||||
st.info("Please enter the full path of an image file.")
|
||||
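Outside the Streamlit flow, the helpers above can be called directly. A minimal sketch, assuming the module is importable from its `ToBeMigrated` location, `OPENAI_API_KEY` is set, and the image sits under `SAFE_ROOT_DIRECTORY`:

```python
import os

# Hypothetical import path; adjust to wherever the module lives after migration.
from ToBeMigrated.ai_seo_tools.image_alt_text_generator import get_image_description

os.environ.setdefault("SAFE_ROOT_DIRECTORY", "/safe/root/directory")

# The path must live under SAFE_ROOT_DIRECTORY, otherwise a ValueError is raised.
alt_text = get_image_description("/safe/root/directory/hero-banner.jpg")
print(alt_text)
```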
110
ToBeMigrated/ai_seo_tools/meta_desc_generator.py
Normal file
@@ -0,0 +1,110 @@
|
||||
import os
|
||||
import json
|
||||
import streamlit as st
|
||||
from tenacity import retry, stop_after_attempt, wait_random_exponential
|
||||
from loguru import logger
|
||||
import sys
|
||||
|
||||
from ..gpt_providers.text_generation.main_text_generation import llm_text_gen
|
||||
|
||||
|
||||
def metadesc_generator_main():
|
||||
"""
|
||||
Streamlit app for generating SEO-optimized blog meta descriptions.
|
||||
"""
|
||||
st.title("✍️ Alwrity - AI Blog Meta Description Generator")
|
||||
st.markdown(
|
||||
"Create compelling, SEO-optimized meta descriptions in just a few clicks. Perfect for enhancing your blog's click-through rates!"
|
||||
)
|
||||
|
||||
# Input section
|
||||
with st.expander("**PRO-TIP** - Read the instructions below. 🚀", expanded=True):
|
||||
col1, col2, _ = st.columns([5, 5, 0.5])
|
||||
|
||||
# Column 1: Keywords and Tone
|
||||
with col1:
|
||||
keywords = st.text_input(
|
||||
"🔑 Target Keywords (comma-separated):",
|
||||
placeholder="e.g., content marketing, SEO, social media, online business",
|
||||
help="Enter your target keywords, separated by commas. 📝",
|
||||
)
|
||||
|
||||
tone_options = ["General", "Informative", "Engaging", "Humorous", "Intriguing", "Playful"]
|
||||
tone = st.selectbox(
|
||||
"🎨 Desired Tone (optional):",
|
||||
options=tone_options,
|
||||
help="Choose the overall tone you want for your meta description. 🎭",
|
||||
)
|
||||
|
||||
# Column 2: Search Intent and Language
|
||||
with col2:
|
||||
search_type = st.selectbox(
|
||||
"🔍 Search Intent:",
|
||||
("Informational Intent", "Commercial Intent", "Transactional Intent", "Navigational Intent"),
|
||||
index=0,
|
||||
)
|
||||
|
||||
language_options = ["English", "Spanish", "French", "German", "Other"]
|
||||
language_choice = st.selectbox(
|
||||
"🌐 Preferred Language:",
|
||||
options=language_options,
|
||||
help="Select the language for your meta description. 🗣️",
|
||||
)
|
||||
|
||||
language = (
|
||||
st.text_input(
|
||||
"Specify Other Language:",
|
||||
placeholder="e.g., Italian, Chinese",
|
||||
help="Enter your preferred language. 🌍",
|
||||
)
|
||||
if language_choice == "Other"
|
||||
else language_choice
|
||||
)
|
||||
|
||||
# Generate Meta Description button
|
||||
if st.button("**✨ Generate Meta Description ✨**"):
|
||||
if not keywords.strip():
|
||||
st.error("**🫣 Target Keywords are required! Please provide at least one keyword.**")
|
||||
return
|
||||
|
||||
with st.spinner("Crafting your Meta descriptions... ⏳"):
|
||||
blog_metadesc = generate_blog_metadesc(keywords, tone, search_type, language)
|
||||
if blog_metadesc:
|
||||
st.success("**🎉 Meta Descriptions Generated Successfully! 🚀**")
|
||||
with st.expander("**Your SEO-Boosting Blog Meta Descriptions 🎆🎇**", expanded=True):
|
||||
st.markdown(blog_metadesc)
|
||||
else:
|
||||
st.error("💥 **Failed to generate blog meta description. Please try again!**")
|
||||
|
||||
|
||||
def generate_blog_metadesc(keywords, tone, search_type, language):
|
||||
"""
|
||||
Generate blog meta descriptions using LLM.
|
||||
|
||||
Args:
|
||||
keywords (str): Comma-separated target keywords.
|
||||
tone (str): Desired tone for the meta description.
|
||||
search_type (str): Search intent type.
|
||||
language (str): Preferred language for the description.
|
||||
|
||||
Returns:
|
||||
str: Generated meta descriptions or error message.
|
||||
"""
|
||||
prompt = f"""
|
||||
Craft 3 engaging and SEO-friendly meta descriptions for a blog post based on the following details:
|
||||
|
||||
Blog Post Keywords: {keywords}
|
||||
Search Intent Type: {search_type}
|
||||
Desired Tone: {tone}
|
||||
Preferred Language: {language}
|
||||
|
||||
Output Format:
|
||||
|
||||
Respond with 3 compelling and concise meta descriptions, approximately 155-160 characters long, that incorporate the target keywords, reflect the blog post content, resonate with the target audience, and entice users to click through to read the full article.
|
||||
"""
|
||||
try:
|
||||
return llm_text_gen(prompt)
|
||||
except Exception as err:
|
||||
logger.error(f"Error generating meta description: {err}")
|
||||
st.error(f"💥 Error: Failed to generate response from LLM: {err}")
|
||||
return None
|
||||
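`generate_blog_metadesc` can also be driven programmatically. A minimal sketch, assuming the package layout resolves the relative `llm_text_gen` import and a text-generation provider is configured:

```python
# Hypothetical import path; adjust to the post-migration package layout.
from ToBeMigrated.ai_seo_tools.meta_desc_generator import generate_blog_metadesc

descriptions = generate_blog_metadesc(
    keywords="content marketing, SEO",
    tone="Engaging",
    search_type="Informational Intent",
    language="English",
)
print(descriptions)  # three meta descriptions of roughly 155-160 characters
```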
1070
ToBeMigrated/ai_seo_tools/on_page_seo_analyzer.py
Normal file
File diff suppressed because it is too large
129
ToBeMigrated/ai_seo_tools/opengraph_generator.py
Normal file
@@ -0,0 +1,129 @@
|
||||
import streamlit as st
|
||||
import requests
|
||||
from bs4 import BeautifulSoup
|
||||
from ..gpt_providers.text_generation.main_text_generation import llm_text_gen
|
||||
|
||||
|
||||
def generate_og_tags(url, title_hint, description_hint, platform="General"):
|
||||
"""
|
||||
Generate Open Graph tags based on the provided URL, title hint, description hint, and platform.
|
||||
|
||||
Args:
|
||||
url (str): The URL of the webpage.
|
||||
title_hint (str): A hint for the title.
|
||||
description_hint (str): A hint for the description.
|
||||
platform (str): The platform for which to generate the tags (General, Facebook, or Twitter).
|
||||
|
||||
Returns:
|
||||
str: The generated Open Graph tags or an error message.
|
||||
"""
|
||||
# Create a prompt for the text generation model
|
||||
prompt = (
|
||||
f"Generate Open Graph tags for the following page:\nURL: {url}\n"
|
||||
f"Title hint: {title_hint}\nDescription hint: {description_hint}"
|
||||
)
|
||||
if platform == "Facebook":
|
||||
prompt += "\nSpecifically for Facebook"
|
||||
elif platform == "Twitter":
|
||||
prompt += "\nSpecifically for Twitter"
|
||||
|
||||
try:
|
||||
# Generate Open Graph tags using the text generation model
|
||||
response = llm_text_gen(prompt)
|
||||
return response
|
||||
except Exception as err:
|
||||
st.error(f"Failed to generate Open Graph tags: {err}")
|
||||
return None
|
||||
|
||||
|
||||
def extract_default_og_tags(url):
|
||||
"""
|
||||
Extract default Open Graph tags from the provided URL.
|
||||
|
||||
Args:
|
||||
url (str): The URL of the webpage.
|
||||
|
||||
Returns:
|
||||
tuple: A tuple containing the title, description, and image URL, or None in case of an error.
|
||||
"""
|
||||
try:
|
||||
# Fetch the HTML content of the URL
|
||||
response = requests.get(url, timeout=15)
|
||||
response.raise_for_status()
|
||||
|
||||
# Parse the HTML content using BeautifulSoup
|
||||
soup = BeautifulSoup(response.content, 'html.parser')
|
||||
|
||||
# Extract the title, description, and image URL
|
||||
title = soup.find('title').text if soup.find('title') else None
|
||||
description = soup.find('meta', attrs={'name': 'description'})['content'] if soup.find('meta', attrs={'name': 'description'}) else None
|
||||
image_url = soup.find('meta', attrs={'property': 'og:image'})['content'] if soup.find('meta', attrs={'property': 'og:image'}) else None
|
||||
|
||||
return title, description, image_url
|
||||
|
||||
except requests.exceptions.RequestException as req_err:
|
||||
st.error(f"Error fetching the URL: {req_err}")
|
||||
return None, None, None
|
||||
|
||||
except Exception as err:
|
||||
st.error(f"Error parsing the HTML content: {err}")
|
||||
return None, None, None
|
||||
|
||||
|
||||
def og_tag_generator():
|
||||
"""Main function to run the Streamlit app."""
|
||||
st.title("AI Open Graph Tag Generator")
|
||||
|
||||
# Platform selection
|
||||
platform = st.selectbox(
|
||||
"**Select the platform**",
|
||||
["General", "Facebook", "Twitter"],
|
||||
help="Choose the platform for which you want to generate Open Graph tags."
|
||||
)
|
||||
|
||||
# URL input
|
||||
url = st.text_input(
|
||||
"**Enter the URL of the page to generate Open Graph tags for:**",
|
||||
placeholder="e.g., https://example.com",
|
||||
help="Provide the URL of the page you want to generate Open Graph tags for."
|
||||
)
|
||||
|
||||
if url:
|
||||
# Extract default Open Graph tags
|
||||
title, description, image_url = extract_default_og_tags(url)
|
||||
|
||||
# Title hint input
|
||||
title_hint = st.text_input(
|
||||
"**Modify existing title or suggest a new one (optional):**",
|
||||
value=title if title else "",
|
||||
placeholder="e.g., Amazing Blog Post Title"
|
||||
)
|
||||
|
||||
# Description hint input
|
||||
description_hint = st.text_area(
|
||||
"**Modify existing description or suggest a new one (optional):**",
|
||||
value=description if description else "",
|
||||
placeholder="e.g., This is a detailed description of the content."
|
||||
)
|
||||
|
||||
# Image URL hint input
|
||||
image_hint = st.text_input(
|
||||
"**Use this image or suggest a new URL (optional):**",
|
||||
value=image_url if image_url else "",
|
||||
placeholder="e.g., https://example.com/image.jpg"
|
||||
)
|
||||
|
||||
# Generate Open Graph tags
|
||||
if st.button("Generate Open Graph Tags"):
|
||||
with st.spinner("Generating Open Graph tags..."):
|
||||
try:
|
||||
og_tags = generate_og_tags(url, title_hint, description_hint, platform)
|
||||
if og_tags:
|
||||
st.success("Open Graph tags generated successfully!")
|
||||
st.markdown(og_tags)
|
||||
else:
|
||||
st.error("Failed to generate Open Graph tags.")
|
||||
except Exception as e:
|
||||
st.error(f"Failed to generate Open Graph tags: {e}")
|
||||
else:
|
||||
st.info("Please enter a URL to generate Open Graph tags.")
|
||||
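The same helpers can be combined without the Streamlit UI. A minimal sketch, assuming the relative `llm_text_gen` import resolves and using a placeholder URL:

```python
# Hypothetical import path; adjust to the post-migration package layout.
from ToBeMigrated.ai_seo_tools.opengraph_generator import (
    extract_default_og_tags,
    generate_og_tags,
)

url = "https://example.com"  # placeholder
title, description, image_url = extract_default_og_tags(url)
tags = generate_og_tags(
    url,
    title_hint=title or "Example page title",
    description_hint=description or "Example page description",
    platform="Twitter",
)
print(tags)
```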
2
ToBeMigrated/ai_seo_tools/opengraph_image_generate.py
Normal file
@@ -0,0 +1,2 @@
ogImage TBD
187
ToBeMigrated/ai_seo_tools/optimize_images_for_upload.py
Normal file
@@ -0,0 +1,187 @@
|
||||
import os
|
||||
import sys
|
||||
import tinify
|
||||
from PIL import Image
|
||||
from loguru import logger
|
||||
from dotenv import load_dotenv
|
||||
import streamlit as st
|
||||
from tempfile import NamedTemporaryFile
|
||||
|
||||
# Load environment variables
|
||||
load_dotenv()
|
||||
|
||||
# Set Tinify API key from environment variable
|
||||
TINIFY_API_KEY = os.getenv('TINIFY_API_KEY')
|
||||
if TINIFY_API_KEY:
|
||||
tinify.key = TINIFY_API_KEY
|
||||
|
||||
def setup_logger() -> None:
|
||||
"""Configure the logger."""
|
||||
logger.remove()
|
||||
logger.add(
|
||||
sys.stdout,
|
||||
colorize=True,
|
||||
format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}"
|
||||
)
|
||||
|
||||
setup_logger()
|
||||
|
||||
def compress_image(image: Image.Image, quality: int = 45, resize: tuple = None, preserve_exif: bool = False) -> Image.Image:
|
||||
"""
|
||||
Compress and optionally resize an image.
|
||||
|
||||
Args:
|
||||
image (PIL.Image): Image object to compress.
|
||||
quality (int): Quality of the output image (1-100).
|
||||
resize (tuple): Tuple (width, height) to resize the image.
|
||||
preserve_exif (bool): Preserve EXIF data if True.
|
||||
|
||||
Returns:
|
||||
PIL.Image: The compressed and resized image object.
|
||||
"""
|
||||
try:
|
||||
if image.mode == 'RGBA':
|
||||
logger.info("Converting RGBA image to RGB.")
|
||||
image = image.convert('RGB')
|
||||
|
||||
exif = image.info.get('exif') if preserve_exif and 'exif' in image.info else None
|
||||
|
||||
if resize:
|
||||
image = image.resize(resize, Image.LANCZOS)
|
||||
logger.info(f"Resized image to {resize}")
|
||||
|
||||
with NamedTemporaryFile(delete=False, suffix=".jpg") as temp_file:
|
||||
temp_path = temp_file.name
|
||||
try:
|
||||
image.save(temp_path, optimize=True, quality=quality, exif=exif)
|
||||
except Exception as exif_error:
|
||||
logger.warning(f"Error saving image with EXIF: {exif_error}. Saving without EXIF.")
|
||||
image.save(temp_path, optimize=True, quality=quality)
|
||||
|
||||
logger.info("Image compression successful.")
|
||||
return Image.open(temp_path)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error compressing image: {e}")
|
||||
st.error("Failed to compress the image. Please try again.")
|
||||
return None
|
||||
|
||||
def convert_to_webp(image: Image.Image, image_path: str) -> str:
|
||||
"""
|
||||
Convert an image to WebP format.
|
||||
|
||||
Args:
|
||||
image (PIL.Image): Image object to convert.
|
||||
image_path (str): Path to save the WebP image.
|
||||
|
||||
Returns:
|
||||
str: Path to the WebP image.
|
||||
"""
|
||||
try:
|
||||
webp_path = os.path.splitext(image_path)[0] + '.webp'
|
||||
image.save(webp_path, 'WEBP', quality=80, method=6)
|
||||
return webp_path
|
||||
except Exception as e:
|
||||
logger.error(f"Error converting image to WebP: {e}")
|
||||
st.error("Failed to convert the image to WebP format. Please try again.")
|
||||
return None
|
||||
|
||||
def compress_image_tinyfy(image_path: str) -> None:
|
||||
"""
|
||||
Compress an image using the Tinify API.
|
||||
|
||||
Args:
|
||||
image_path (str): Path to the image to be compressed.
|
||||
|
||||
Returns:
|
||||
None
|
||||
"""
|
||||
try:
|
||||
if not tinify.key:
|
||||
logger.warning("Tinyfy API key is not set. Skipping Tinyfy compression.")
|
||||
return
|
||||
|
||||
source = tinify.from_file(image_path)
|
||||
source.to_file(image_path)
|
||||
logger.info("Tinyfy compression successful.")
|
||||
except tinify.errors.AccountError:
|
||||
logger.error("Verify your Tinyfy API key and account limit.")
|
||||
st.warning("Tinyfy compression failed. Check your API key and account limit.")
|
||||
except Exception as e:
|
||||
logger.error(f"Error during Tinyfy compression: {e}")
|
||||
st.warning("Tinyfy compression failed. Ensure the API key is set.")
|
||||
|
||||
def optimize_image(image: Image.Image, image_path: str, quality: int, resize: tuple, preserve_exif: bool) -> str:
|
||||
"""
|
||||
Optimize the image by compressing and converting it to WebP, with optional Tinify compression.
|
||||
|
||||
Args:
|
||||
image (PIL.Image): The original image.
|
||||
image_path (str): The path to the image file.
|
||||
quality (int): Quality level for compression.
|
||||
resize (tuple): Dimensions to resize the image.
|
||||
preserve_exif (bool): Whether to preserve EXIF data.
|
||||
|
||||
Returns:
|
||||
str: Path to the optimized WebP image, or None if failed.
|
||||
"""
|
||||
logger.info("Starting image optimization process...")
|
||||
|
||||
compressed_image = compress_image(image, quality, resize, preserve_exif)
|
||||
if compressed_image is None:
|
||||
return None
|
||||
|
||||
webp_path = convert_to_webp(compressed_image, image_path)
|
||||
if webp_path is None:
|
||||
return None
|
||||
|
||||
if tinify.key:
|
||||
compress_image_tinyfy(webp_path)
|
||||
else:
|
||||
logger.info("Tinyfy key not provided, skipping Tinyfy compression.")
|
||||
|
||||
return webp_path
|
||||
|
||||
def main_img_optimizer() -> None:
|
||||
st.title("ALwrity Image Optimizer")
|
||||
st.markdown("## Upload an image to optimize its size and format.")
|
||||
|
||||
input_tinify_key = st.text_input("Optional: Enter your Tinify API Key")
|
||||
if input_tinify_key:
|
||||
tinify.key = input_tinify_key
|
||||
|
||||
uploaded_file = st.file_uploader("Upload an image", type=['jpg', 'jpeg', 'png', 'gif', 'bmp', 'webp'])
|
||||
|
||||
if uploaded_file:
|
||||
image = Image.open(uploaded_file)
|
||||
st.image(image, caption="Original Image", use_column_width=True)
|
||||
|
||||
quality = st.slider("Compression Quality", 1, 100, 45)
|
||||
preserve_exif = st.checkbox("Preserve EXIF Data", value=False)
|
||||
resize = st.checkbox("Resize Image")
|
||||
|
||||
if resize:
|
||||
width = st.number_input("Width", value=image.width)
|
||||
height = st.number_input("Height", value=image.height)
|
||||
resize_dims = (width, height)
|
||||
else:
|
||||
resize_dims = None
|
||||
|
||||
if st.button("Optimize Image"):
|
||||
with st.spinner("Optimizing..."):
|
||||
if tinify.key:
|
||||
st.info("Tinyfy compression will be applied.")
|
||||
|
||||
webp_path = optimize_image(image, uploaded_file.name, quality, resize_dims, preserve_exif)
|
||||
|
||||
if webp_path:
|
||||
st.image(webp_path, caption="Optimized Image (WebP)", use_column_width=True)
|
||||
st.success("Image optimization completed!")
|
||||
|
||||
with open(webp_path, "rb") as file:
|
||||
st.download_button(
|
||||
label="Download Optimized Image",
|
||||
data=file,
|
||||
file_name=os.path.basename(webp_path),
|
||||
mime="image/webp"
|
||||
)
|
||||
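The compress-to-WebP pipeline can be reused outside Streamlit as well. A minimal sketch with a placeholder input path; the Tinify pass only runs when an API key is configured:

```python
from PIL import Image

# Hypothetical import path; adjust to the post-migration package layout.
from ToBeMigrated.ai_seo_tools.optimize_images_for_upload import optimize_image

image = Image.open("input.jpg")  # placeholder path
webp_path = optimize_image(
    image,
    image_path="input.jpg",
    quality=45,
    resize=(1200, 630),
    preserve_exif=False,
)
print(f"Optimized image written to: {webp_path}")
```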
340
ToBeMigrated/ai_seo_tools/seo_analyzer_api.py
Normal file
@@ -0,0 +1,340 @@
|
||||
"""
|
||||
FastAPI endpoint for the Comprehensive SEO Analyzer
|
||||
Provides data for the React SEO Dashboard
|
||||
"""
|
||||
|
||||
from fastapi import FastAPI, HTTPException
|
||||
from pydantic import BaseModel, HttpUrl
|
||||
from typing import List, Optional, Dict, Any
|
||||
from datetime import datetime
|
||||
import json
|
||||
|
||||
from .comprehensive_seo_analyzer import ComprehensiveSEOAnalyzer, SEOAnalysisResult
|
||||
|
||||
app = FastAPI(
|
||||
title="Comprehensive SEO Analyzer API",
|
||||
description="API for analyzing website SEO performance with actionable insights",
|
||||
version="1.0.0"
|
||||
)
|
||||
|
||||
# Initialize the analyzer
|
||||
seo_analyzer = ComprehensiveSEOAnalyzer()
|
||||
|
||||
class SEOAnalysisRequest(BaseModel):
|
||||
url: HttpUrl
|
||||
target_keywords: Optional[List[str]] = None
|
||||
|
||||
class SEOAnalysisResponse(BaseModel):
|
||||
url: str
|
||||
timestamp: datetime
|
||||
overall_score: int
|
||||
health_status: str
|
||||
critical_issues: List[str]
|
||||
warnings: List[str]
|
||||
recommendations: List[str]
|
||||
data: Dict[str, Any]
|
||||
success: bool
|
||||
message: str
|
||||
|
||||
@app.post("/analyze-seo", response_model=SEOAnalysisResponse)
|
||||
async def analyze_seo(request: SEOAnalysisRequest):
|
||||
"""
|
||||
Analyze a URL for comprehensive SEO performance
|
||||
|
||||
Args:
|
||||
request: SEOAnalysisRequest containing URL and optional target keywords
|
||||
|
||||
Returns:
|
||||
SEOAnalysisResponse with detailed analysis results
|
||||
"""
|
||||
try:
|
||||
# Convert URL to string
|
||||
url_str = str(request.url)
|
||||
|
||||
# Perform analysis
|
||||
result = seo_analyzer.analyze_url(url_str, request.target_keywords)
|
||||
|
||||
# Convert to response format
|
||||
response_data = {
|
||||
'url': result.url,
|
||||
'timestamp': result.timestamp,
|
||||
'overall_score': result.overall_score,
|
||||
'health_status': result.health_status,
|
||||
'critical_issues': result.critical_issues,
|
||||
'warnings': result.warnings,
|
||||
'recommendations': result.recommendations,
|
||||
'data': result.data,
|
||||
'success': True,
|
||||
'message': f"SEO analysis completed successfully for {result.url}"
|
||||
}
|
||||
|
||||
return SEOAnalysisResponse(**response_data)
|
||||
|
||||
except Exception as e:
|
||||
raise HTTPException(
|
||||
status_code=500,
|
||||
detail=f"Error analyzing SEO: {str(e)}"
|
||||
)
|
||||
|
||||
@app.get("/health")
|
||||
async def health_check():
|
||||
"""Health check endpoint"""
|
||||
return {
|
||||
"status": "healthy",
|
||||
"timestamp": datetime.now(),
|
||||
"service": "Comprehensive SEO Analyzer API"
|
||||
}
|
||||
|
||||
@app.get("/analysis-summary/{url:path}")
|
||||
async def get_analysis_summary(url: str):
|
||||
"""
|
||||
Get a quick summary of SEO analysis for a URL
|
||||
|
||||
Args:
|
||||
url: The URL to analyze
|
||||
|
||||
Returns:
|
||||
Summary of SEO analysis
|
||||
"""
|
||||
try:
|
||||
# Ensure URL has protocol
|
||||
if not url.startswith(('http://', 'https://')):
|
||||
url = f"https://{url}"
|
||||
|
||||
# Perform analysis
|
||||
result = seo_analyzer.analyze_url(url)
|
||||
|
||||
# Create summary
|
||||
summary = {
|
||||
"url": result.url,
|
||||
"overall_score": result.overall_score,
|
||||
"health_status": result.health_status,
|
||||
"critical_issues_count": len(result.critical_issues),
|
||||
"warnings_count": len(result.warnings),
|
||||
"recommendations_count": len(result.recommendations),
|
||||
"top_issues": result.critical_issues[:3],
|
||||
"top_recommendations": result.recommendations[:3],
|
||||
"analysis_timestamp": result.timestamp.isoformat()
|
||||
}
|
||||
|
||||
return summary
|
||||
|
||||
except Exception as e:
|
||||
raise HTTPException(
|
||||
status_code=500,
|
||||
detail=f"Error getting analysis summary: {str(e)}"
|
||||
)
|
||||
|
||||
@app.get("/seo-metrics/{url:path}")
|
||||
async def get_seo_metrics(url: str):
|
||||
"""
|
||||
Get detailed SEO metrics for dashboard display
|
||||
|
||||
Args:
|
||||
url: The URL to analyze
|
||||
|
||||
Returns:
|
||||
Detailed SEO metrics for React dashboard
|
||||
"""
|
||||
try:
|
||||
# Ensure URL has protocol
|
||||
if not url.startswith(('http://', 'https://')):
|
||||
url = f"https://{url}"
|
||||
|
||||
# Perform analysis
|
||||
result = seo_analyzer.analyze_url(url)
|
||||
|
||||
# Extract metrics for dashboard
|
||||
metrics = {
|
||||
"overall_score": result.overall_score,
|
||||
"health_status": result.health_status,
|
||||
"url_structure_score": result.data.get('url_structure', {}).get('score', 0),
|
||||
"meta_data_score": result.data.get('meta_data', {}).get('score', 0),
|
||||
"content_score": result.data.get('content_analysis', {}).get('score', 0),
|
||||
"technical_score": result.data.get('technical_seo', {}).get('score', 0),
|
||||
"performance_score": result.data.get('performance', {}).get('score', 0),
|
||||
"accessibility_score": result.data.get('accessibility', {}).get('score', 0),
|
||||
"user_experience_score": result.data.get('user_experience', {}).get('score', 0),
|
||||
"security_score": result.data.get('security_headers', {}).get('score', 0)
|
||||
}
|
||||
|
||||
# Add detailed data for each category
|
||||
dashboard_data = {
|
||||
"metrics": metrics,
|
||||
"critical_issues": result.critical_issues,
|
||||
"warnings": result.warnings,
|
||||
"recommendations": result.recommendations,
|
||||
"detailed_analysis": {
|
||||
"url_structure": result.data.get('url_structure', {}),
|
||||
"meta_data": result.data.get('meta_data', {}),
|
||||
"content_analysis": result.data.get('content_analysis', {}),
|
||||
"technical_seo": result.data.get('technical_seo', {}),
|
||||
"performance": result.data.get('performance', {}),
|
||||
"accessibility": result.data.get('accessibility', {}),
|
||||
"user_experience": result.data.get('user_experience', {}),
|
||||
"security_headers": result.data.get('security_headers', {}),
|
||||
"keyword_analysis": result.data.get('keyword_analysis', {})
|
||||
},
|
||||
"timestamp": result.timestamp.isoformat(),
|
||||
"url": result.url
|
||||
}
|
||||
|
||||
return dashboard_data
|
||||
|
||||
except Exception as e:
|
||||
raise HTTPException(
|
||||
status_code=500,
|
||||
detail=f"Error getting SEO metrics: {str(e)}"
|
||||
)
|
||||
|
||||
@app.post("/batch-analyze")
|
||||
async def batch_analyze(urls: List[str]):
|
||||
"""
|
||||
Analyze multiple URLs in batch
|
||||
|
||||
Args:
|
||||
urls: List of URLs to analyze
|
||||
|
||||
Returns:
|
||||
Batch analysis results
|
||||
"""
|
||||
try:
|
||||
results = []
|
||||
|
||||
for url in urls:
|
||||
try:
|
||||
# Ensure URL has protocol
|
||||
if not url.startswith(('http://', 'https://')):
|
||||
url = f"https://{url}"
|
||||
|
||||
# Perform analysis
|
||||
result = seo_analyzer.analyze_url(url)
|
||||
|
||||
# Add to results
|
||||
results.append({
|
||||
"url": result.url,
|
||||
"overall_score": result.overall_score,
|
||||
"health_status": result.health_status,
|
||||
"critical_issues_count": len(result.critical_issues),
|
||||
"warnings_count": len(result.warnings),
|
||||
"success": True
|
||||
})
|
||||
|
||||
except Exception as e:
|
||||
# Add error result
|
||||
results.append({
|
||||
"url": url,
|
||||
"overall_score": 0,
|
||||
"health_status": "error",
|
||||
"critical_issues_count": 0,
|
||||
"warnings_count": 0,
|
||||
"success": False,
|
||||
"error": str(e)
|
||||
})
|
||||
|
||||
return {
|
||||
"total_urls": len(urls),
|
||||
"successful_analyses": len([r for r in results if r['success']]),
|
||||
"failed_analyses": len([r for r in results if not r['success']]),
|
||||
"results": results
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
raise HTTPException(
|
||||
status_code=500,
|
||||
detail=f"Error in batch analysis: {str(e)}"
|
||||
)
|
||||
|
||||
# Enhanced prompts for better results
|
||||
ENHANCED_PROMPTS = {
|
||||
"critical_issue": "🚨 CRITICAL: This issue is severely impacting your SEO performance and must be fixed immediately.",
|
||||
"warning": "⚠️ WARNING: This could be improved to boost your search rankings.",
|
||||
"recommendation": "💡 RECOMMENDATION: Implement this to improve your SEO score.",
|
||||
"excellent": "🎉 EXCELLENT: Your SEO is performing very well in this area!",
|
||||
"good": "✅ GOOD: Your SEO is performing well, with room for minor improvements.",
|
||||
"needs_improvement": "🔧 NEEDS IMPROVEMENT: Several areas need attention to boost your SEO.",
|
||||
"poor": "❌ POOR: Significant improvements needed across multiple areas."
|
||||
}
|
||||
|
||||
def enhance_analysis_result(result: SEOAnalysisResult) -> SEOAnalysisResult:
|
||||
"""
|
||||
Enhance analysis results with better prompts and user-friendly language
|
||||
"""
|
||||
# Enhance critical issues
|
||||
enhanced_critical_issues = []
|
||||
for issue in result.critical_issues:
|
||||
enhanced_issue = f"{ENHANCED_PROMPTS['critical_issue']} {issue}"
|
||||
enhanced_critical_issues.append(enhanced_issue)
|
||||
|
||||
# Enhance warnings
|
||||
enhanced_warnings = []
|
||||
for warning in result.warnings:
|
||||
enhanced_warning = f"{ENHANCED_PROMPTS['warning']} {warning}"
|
||||
enhanced_warnings.append(enhanced_warning)
|
||||
|
||||
# Enhance recommendations
|
||||
enhanced_recommendations = []
|
||||
for rec in result.recommendations:
|
||||
enhanced_rec = f"{ENHANCED_PROMPTS['recommendation']} {rec}"
|
||||
enhanced_recommendations.append(enhanced_rec)
|
||||
|
||||
# Create enhanced result
|
||||
enhanced_result = SEOAnalysisResult(
|
||||
url=result.url,
|
||||
timestamp=result.timestamp,
|
||||
overall_score=result.overall_score,
|
||||
health_status=result.health_status,
|
||||
critical_issues=enhanced_critical_issues,
|
||||
warnings=enhanced_warnings,
|
||||
recommendations=enhanced_recommendations,
|
||||
data=result.data
|
||||
)
|
||||
|
||||
return enhanced_result
|
||||
|
||||
@app.post("/analyze-seo-enhanced", response_model=SEOAnalysisResponse)
|
||||
async def analyze_seo_enhanced(request: SEOAnalysisRequest):
|
||||
"""
|
||||
Analyze a URL with enhanced, user-friendly prompts
|
||||
|
||||
Args:
|
||||
request: SEOAnalysisRequest containing URL and optional target keywords
|
||||
|
||||
Returns:
|
||||
SEOAnalysisResponse with enhanced, user-friendly analysis results
|
||||
"""
|
||||
try:
|
||||
# Convert URL to string
|
||||
url_str = str(request.url)
|
||||
|
||||
# Perform analysis
|
||||
result = seo_analyzer.analyze_url(url_str, request.target_keywords)
|
||||
|
||||
# Enhance results
|
||||
enhanced_result = enhance_analysis_result(result)
|
||||
|
||||
# Convert to response format
|
||||
response_data = {
|
||||
'url': enhanced_result.url,
|
||||
'timestamp': enhanced_result.timestamp,
|
||||
'overall_score': enhanced_result.overall_score,
|
||||
'health_status': enhanced_result.health_status,
|
||||
'critical_issues': enhanced_result.critical_issues,
|
||||
'warnings': enhanced_result.warnings,
|
||||
'recommendations': enhanced_result.recommendations,
|
||||
'data': enhanced_result.data,
|
||||
'success': True,
|
||||
'message': f"Enhanced SEO analysis completed successfully for {enhanced_result.url}"
|
||||
}
|
||||
|
||||
return SEOAnalysisResponse(**response_data)
|
||||
|
||||
except Exception as e:
|
||||
raise HTTPException(
|
||||
status_code=500,
|
||||
detail=f"Error analyzing SEO: {str(e)}"
|
||||
)
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
uvicorn.run(app, host="0.0.0.0", port=8000)
|
||||
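With the app running locally (for example `uvicorn seo_analyzer_api:app --port 8000`), the endpoints can be exercised with any HTTP client. A minimal sketch using `requests` and a placeholder target URL:

```python
import requests

payload = {
    "url": "https://example.com",  # placeholder site to analyze
    "target_keywords": ["ai writing", "seo tools"],
}
resp = requests.post("http://localhost:8000/analyze-seo", json=payload, timeout=120)
resp.raise_for_status()
report = resp.json()
print(report["overall_score"], report["health_status"])

# Quick summary variant for a single URL
summary = requests.get(
    "http://localhost:8000/analysis-summary/example.com", timeout=120
).json()
print(summary["top_recommendations"])
```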
130
ToBeMigrated/ai_seo_tools/seo_structured_data.py
Normal file
@@ -0,0 +1,130 @@
|
||||
import streamlit as st
|
||||
import json
|
||||
from datetime import date
|
||||
from dotenv import load_dotenv
|
||||
|
||||
from ..ai_web_researcher.firecrawl_web_crawler import scrape_url
|
||||
from ..gpt_providers.text_generation.main_text_generation import llm_text_gen
|
||||
|
||||
# Load environment variables
|
||||
load_dotenv()
|
||||
|
||||
# Define a dictionary for schema types
|
||||
schema_types = {
|
||||
"Article": {
|
||||
"fields": ["Headline", "Author", "Date Published", "Keywords"],
|
||||
"schema_type": "Article",
|
||||
},
|
||||
"Product": {
|
||||
"fields": ["Name", "Description", "Price", "Brand", "Image URL"],
|
||||
"schema_type": "Product",
|
||||
},
|
||||
"Recipe": {
|
||||
"fields": ["Name", "Ingredients", "Cooking Time", "Serving Size", "Image URL"],
|
||||
"schema_type": "Recipe",
|
||||
},
|
||||
"Event": {
|
||||
"fields": ["Name", "Start Date", "End Date", "Location", "Description"],
|
||||
"schema_type": "Event",
|
||||
},
|
||||
"LocalBusiness": {
|
||||
"fields": ["Name", "Address", "Phone Number", "Opening Hours", "Image URL"],
|
||||
"schema_type": "LocalBusiness",
|
||||
},
|
||||
# ... (add more schema types as needed)
|
||||
}
|
||||
|
||||
def generate_json_data(content_type, details, url):
|
||||
"""Generates structured data (JSON-LD) based on user input."""
|
||||
try:
|
||||
scraped_text = scrape_url(url)
|
||||
except Exception as err:
|
||||
st.error(f"Failed to scrape web page from URL: {url} - Error: {err}")
|
||||
return
|
||||
|
||||
schema = schema_types.get(content_type)
|
||||
if not schema:
|
||||
st.error(f"Invalid content type: {content_type}")
|
||||
return
|
||||
|
||||
data = {
|
||||
"@context": "https://schema.org",
|
||||
"@type": schema["schema_type"],
|
||||
}
|
||||
for field in schema["fields"]:
|
||||
value = details.get(field)
|
||||
if isinstance(value, date):
|
||||
value = value.isoformat()
|
||||
data[field] = value if value else "N/A" # Use placeholder values if input is missing
|
||||
|
||||
if url:
|
||||
data['url'] = url
|
||||
|
||||
llm_structured_data = get_llm_structured_data(content_type, data, scraped_text)
|
||||
return llm_structured_data
|
||||
|
||||
def get_llm_structured_data(content_type, data, scraped_text):
|
||||
"""Function to get structured data from LLM."""
|
||||
prompt = f"""Given the following information:
|
||||
|
||||
HTML Content: <<<HTML>>> {scraped_text} <<<END_HTML>>>
|
||||
Content Type: <<<CONTENT_TYPE>>> {content_type} <<<END_CONTENT_TYPE>>>
|
||||
Additional Relevant Data: <<<ADDITIONAL_DATA>>> {data} <<<END_ADDITIONAL_DATA>>>
|
||||
|
||||
Create a detailed structured data (JSON-LD) script for SEO purposes.
|
||||
The structured data should help search engines understand the content and features of the webpage, enhancing its visibility and potential for rich snippets in search results.
|
||||
|
||||
Detailed Steps:
|
||||
Parse the HTML content to extract relevant information like the title, main heading, and body content.
|
||||
Use the contentType to determine the structured data type (e.g., Article, Product, Recipe).
|
||||
Integrate the additional relevant data (e.g., author, datePublished, keywords) into the structured data.
|
||||
Ensure all URLs, images, and other attributes are correctly formatted and included.
|
||||
Validate the generated JSON-LD to ensure it meets schema.org standards and is free of errors.
|
||||
|
||||
Expected Output:
|
||||
Generate a JSON-LD structured data snippet based on the provided inputs."""
|
||||
|
||||
try:
|
||||
response = llm_text_gen(prompt)
|
||||
return response
|
||||
except Exception as err:
|
||||
st.error(f"Failed to get response from LLM: {err}")
|
||||
return
|
||||
|
||||
def ai_structured_data():
|
||||
st.title("📝 Generate Structured Data for SEO 🚀")
|
||||
st.markdown("**Make your content more discoverable with rich snippets.**")
|
||||
|
||||
content_type = st.selectbox("**Select Content Type**", list(schema_types.keys()))
|
||||
|
||||
details = {}
|
||||
schema_fields = schema_types[content_type]["fields"]
|
||||
num_fields = len(schema_fields)
|
||||
|
||||
url = st.text_input("**URL :**", placeholder="Enter the URL of your webpage")
|
||||
for i in range(0, num_fields, 2):
|
||||
cols = st.columns(2)
|
||||
for j in range(2):
|
||||
if i + j < num_fields:
|
||||
field = schema_fields[i + j]
|
||||
if "Date" in field:
|
||||
details[field] = cols[j].date_input(field)
|
||||
else:
|
||||
details[field] = cols[j].text_input(field, placeholder=f"Enter {field.lower()}")
|
||||
|
||||
if st.button("Generate Structured Data"):
|
||||
if not url:
|
||||
st.error("URL is required to generate structured data.")
|
||||
return
|
||||
|
||||
structured_data = generate_json_data(content_type, details, url)
|
||||
if structured_data:
|
||||
st.subheader("Generated Structured Data (JSON-LD):")
|
||||
st.markdown(structured_data)
|
||||
|
||||
st.download_button(
|
||||
label="Download JSON-LD",
|
||||
data=structured_data,
|
||||
file_name=f"{content_type}_structured_data.json",
|
||||
mime="application/json",
|
||||
)
|
||||
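For reference, `generate_json_data` first assembles a flat skeleton from the selected `schema_types` entry (missing fields fall back to "N/A") before handing it to the LLM for expansion into full JSON-LD. An illustrative skeleton for the Article type, with made-up field values:

```python
article_skeleton = {
    "@context": "https://schema.org",
    "@type": "Article",
    "Headline": "How AI Helps Content Writers",  # illustrative values only
    "Author": "Jane Doe",
    "Date Published": "2024-01-01",
    "Keywords": "AI writing, SEO",
    "url": "https://example.com/ai-content-writing",
}
```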
340
ToBeMigrated/ai_seo_tools/sitemap_analysis.py
Normal file
@@ -0,0 +1,340 @@
|
||||
import streamlit as st
|
||||
import advertools as adv
|
||||
import pandas as pd
|
||||
import plotly.graph_objects as go
|
||||
from urllib.error import URLError
|
||||
import xml.etree.ElementTree as ET
|
||||
import requests
|
||||
|
||||
|
||||
def main():
|
||||
"""
|
||||
Main function to run the Sitemap Analyzer Streamlit app.
|
||||
"""
|
||||
st.title("📊 Sitemap Analyzer")
|
||||
st.write("""
|
||||
This tool analyzes a website's sitemap to understand its content structure and publishing trends.
|
||||
Enter a sitemap URL to start your analysis.
|
||||
""")
|
||||
|
||||
sitemap_url = st.text_input(
|
||||
"Please enter the sitemap URL:",
|
||||
"https://www.example.com/sitemap.xml"
|
||||
)
|
||||
|
||||
if st.button("Analyze Sitemap"):
|
||||
try:
|
||||
sitemap_df = fetch_all_sitemaps(sitemap_url)
|
||||
if sitemap_df is not None and not sitemap_df.empty:
|
||||
sitemap_df = process_lastmod_column(sitemap_df)
|
||||
ppmonth = analyze_content_trends(sitemap_df)
|
||||
sitemap_df = categorize_and_shorten_sitemaps(sitemap_df)
|
||||
|
||||
display_key_metrics(sitemap_df, ppmonth)
|
||||
plot_sitemap_content_distribution(sitemap_df)
|
||||
plot_content_trends(ppmonth)
|
||||
plot_content_type_breakdown(sitemap_df)
|
||||
plot_publishing_frequency(sitemap_df)
|
||||
|
||||
st.success("🎉 Analysis complete!")
|
||||
else:
|
||||
st.error("No valid URLs found in the sitemap.")
|
||||
except URLError as e:
|
||||
st.error(f"Error fetching the sitemap: {e}")
|
||||
except Exception as e:
|
||||
st.error(f"An unexpected error occurred: {e}")
|
||||
|
||||
|
||||
def fetch_all_sitemaps(sitemap_url):
|
||||
"""
|
||||
Fetches all sitemaps from the provided sitemap URL and concatenates their URLs into a DataFrame.
|
||||
|
||||
Parameters:
|
||||
sitemap_url (str): The URL of the sitemap.
|
||||
|
||||
Returns:
|
||||
DataFrame: A DataFrame containing all URLs from the sitemaps.
|
||||
"""
|
||||
st.write(f"🚀 Fetching and analyzing the sitemap: {sitemap_url}...")
|
||||
|
||||
try:
|
||||
sitemap_df = fetch_sitemap(sitemap_url)
|
||||
|
||||
if sitemap_df is not None:
|
||||
all_sitemaps = sitemap_df.loc[
|
||||
sitemap_df['loc'].str.contains('sitemap'),
|
||||
'loc'
|
||||
].tolist()
|
||||
|
||||
if all_sitemaps:
|
||||
st.write(
|
||||
f"🔄 Found {len(all_sitemaps)} additional sitemaps. Fetching data from them..."
|
||||
)
|
||||
all_urls_df = pd.DataFrame()
|
||||
|
||||
for sitemap in all_sitemaps:
|
||||
try:
|
||||
st.write(f"Fetching URLs from {sitemap}...")
|
||||
temp_df = fetch_sitemap(sitemap)
|
||||
if temp_df is not None:
|
||||
all_urls_df = pd.concat(
|
||||
[all_urls_df, temp_df], ignore_index=True
|
||||
)
|
||||
except Exception as e:
|
||||
st.error(f"Error fetching {sitemap}: {e}")
|
||||
|
||||
st.write(
|
||||
f"✅ Successfully fetched {len(all_urls_df)} URLs from all sitemaps."
|
||||
)
|
||||
return all_urls_df
|
||||
|
||||
else:
|
||||
st.write(f"✅ Successfully fetched {len(sitemap_df)} URLs from the main sitemap.")
|
||||
return sitemap_df
|
||||
else:
|
||||
return None
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"⚠️ Error fetching the sitemap: {e}")
|
||||
return None
|
||||
|
||||
|
||||
def fetch_sitemap(url):
|
||||
"""
|
||||
Fetches and parses the sitemap from the provided URL.
|
||||
|
||||
Parameters:
|
||||
url (str): The URL of the sitemap.
|
||||
|
||||
Returns:
|
||||
DataFrame: A DataFrame containing the URLs from the sitemap.
|
||||
"""
|
||||
try:
|
||||
response = requests.get(url, timeout=30)
|
||||
response.raise_for_status()
|
||||
|
||||
ET.fromstring(response.content)
|
||||
|
||||
sitemap_df = adv.sitemap_to_df(url)
|
||||
return sitemap_df
|
||||
|
||||
except requests.RequestException as e:
|
||||
st.error(f"⚠️ Request error: {e}")
|
||||
return None
|
||||
except ET.ParseError as e:
|
||||
st.error(f"⚠️ XML parsing error: {e}")
|
||||
return None
|
||||
|
||||
|
||||
def process_lastmod_column(sitemap_df):
|
||||
"""
|
||||
Processes the 'lastmod' column in the sitemap DataFrame by converting it to DateTime format and setting it as the index.
|
||||
|
||||
Parameters:
|
||||
sitemap_df (DataFrame): The sitemap DataFrame.
|
||||
|
||||
Returns:
|
||||
DataFrame: The processed sitemap DataFrame with 'lastmod' as the index.
|
||||
"""
|
||||
st.write("📅 Converting 'lastmod' column to DateTime format and setting it as the index...")
|
||||
|
||||
try:
|
||||
sitemap_df = sitemap_df.dropna(subset=['lastmod'])
|
||||
sitemap_df['lastmod'] = pd.to_datetime(sitemap_df['lastmod'])
|
||||
sitemap_df.set_index('lastmod', inplace=True)
|
||||
|
||||
st.write("✅ 'lastmod' column successfully converted to DateTime format and set as the index.")
|
||||
return sitemap_df
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"⚠️ Error processing the 'lastmod' column: {e}")
|
||||
return None
|
||||
|
||||
|
||||
def categorize_and_shorten_sitemaps(sitemap_df):
|
||||
"""
|
||||
Categorizes and shortens the sitemap names in the sitemap DataFrame.
|
||||
|
||||
Parameters:
|
||||
sitemap_df (DataFrame): The sitemap DataFrame.
|
||||
|
||||
Returns:
|
||||
DataFrame: The sitemap DataFrame with categorized and shortened sitemap names.
|
||||
"""
|
||||
st.write("🔍 Categorizing and shortening sitemap names...")
|
||||
|
||||
try:
|
||||
sitemap_df['sitemap_name'] = sitemap_df['sitemap'].str.split('/').str[4]
|
||||
sitemap_df['sitemap_name'] = sitemap_df['sitemap_name'].replace({
|
||||
'sitemap-site-kasko-fiyatlari.xml': 'Kasko',
|
||||
'sitemap-site-bireysel.xml': 'Personal',
|
||||
'sitemap-site-kurumsal.xml': 'Cooperate',
|
||||
'sitemap-site-arac-sigortasi.xml': 'Car',
|
||||
'sitemap-site.xml': 'Others'
|
||||
})
|
||||
|
||||
st.write("✅ Sitemap names categorized and shortened.")
|
||||
return sitemap_df
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"⚠️ Error categorizing sitemap names: {e}")
|
||||
return sitemap_df
|
||||
|
||||
|
||||
def analyze_content_trends(sitemap_df):
|
||||
"""
|
||||
Analyzes content publishing trends in the sitemap DataFrame.
|
||||
|
||||
Parameters:
|
||||
sitemap_df (DataFrame): The sitemap DataFrame.
|
||||
|
||||
Returns:
|
||||
Series: A Series representing the number of contents published each month.
|
||||
"""
|
||||
st.write("📅 Analyzing content publishing trends...")
|
||||
|
||||
try:
|
||||
ppmonth = sitemap_df.resample('M').size()
|
||||
monthly_counts = sitemap_df.index.to_period('M').value_counts().sort_index()
sitemap_df['monthly_count'] = sitemap_df.index.to_period('M').map(monthly_counts)
|
||||
|
||||
st.write("✅ Content trends analysis completed.")
|
||||
return ppmonth
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"⚠️ Error during content trends analysis: {e}")
|
||||
return pd.Series()
|
||||
|
||||
|
||||
def display_key_metrics(sitemap_df, ppmonth):
|
||||
"""
|
||||
Displays key metrics of the sitemap analysis.
|
||||
|
||||
Parameters:
|
||||
sitemap_df (DataFrame): The sitemap DataFrame.
|
||||
ppmonth (Series): The Series representing the number of contents published each month.
|
||||
"""
|
||||
st.write("### Key Metrics")
|
||||
|
||||
total_urls = len(sitemap_df)
|
||||
total_articles = ppmonth.sum()
|
||||
average_frequency = ppmonth.mean()
|
||||
|
||||
st.write(f"**Total URLs Found:** {total_urls:,}")
|
||||
st.write(f"**Total Articles Published:** {total_articles:,}")
|
||||
st.write(f"**Average Monthly Publishing Frequency:** {average_frequency:.2f} articles/month")
|
||||
|
||||
|
||||
def plot_sitemap_content_distribution(sitemap_df):
|
||||
"""
|
||||
Plots the content distribution by sitemap categories.
|
||||
|
||||
Parameters:
|
||||
sitemap_df (DataFrame): The sitemap DataFrame.
|
||||
"""
|
||||
st.write("📊 Visualizing content amount by sitemap categories...")
|
||||
|
||||
try:
|
||||
if 'sitemap_name' in sitemap_df.columns:
|
||||
stmc = sitemap_df.groupby('sitemap_name').size()
|
||||
fig = go.Figure()
|
||||
fig.add_bar(x=stmc.index, y=stmc.values, name='Sitemap Categories')
|
||||
fig.update_layout(
|
||||
title='Content Amount by Sitemap Categories',
|
||||
xaxis_title='Sitemap Categories',
|
||||
yaxis_title='Number of Articles',
|
||||
paper_bgcolor='#E5ECF6'
|
||||
)
|
||||
st.plotly_chart(fig)
|
||||
else:
|
||||
st.warning("⚠️ The 'sitemap_name' column is missing in the data.")
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"⚠️ Error during sitemap content distribution plotting: {e}")
|
||||
|
||||
|
||||
def plot_content_trends(ppmonth):
|
||||
"""
|
||||
Plots the content publishing trends over time.
|
||||
|
||||
Parameters:
|
||||
ppmonth (Series): The Series representing the number of contents published each month.
|
||||
"""
|
||||
st.write("📈 Plotting content publishing trends over time...")
|
||||
|
||||
try:
|
||||
fig = go.Figure()
|
||||
fig.add_scatter(x=ppmonth.index, y=ppmonth.values, mode='lines+markers', name='Publishing Trends')
|
||||
fig.update_layout(
|
||||
title='Content Publishing Trends Over Time',
|
||||
xaxis_title='Month',
|
||||
yaxis_title='Number of Articles',
|
||||
paper_bgcolor='#E5ECF6'
|
||||
)
|
||||
st.plotly_chart(fig)
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"⚠️ Error during content trends plotting: {e}")
|
||||
|
||||
|
||||
def plot_content_type_breakdown(sitemap_df):
|
||||
"""
|
||||
Plots the content type breakdown.
|
||||
|
||||
Parameters:
|
||||
sitemap_df (DataFrame): The sitemap DataFrame.
|
||||
"""
|
||||
st.write("🔍 Plotting content type breakdown...")
|
||||
|
||||
try:
|
||||
if 'sitemap_name' in sitemap_df.columns and not sitemap_df['sitemap_name'].empty:
|
||||
content_type_counts = sitemap_df['sitemap_name'].value_counts()
|
||||
st.write("Content Type Counts:", content_type_counts)
|
||||
|
||||
if not content_type_counts.empty:
|
||||
fig = go.Figure(data=[go.Pie(labels=content_type_counts.index, values=content_type_counts.values)])
|
||||
fig.update_layout(
|
||||
title='Content Type Breakdown',
|
||||
paper_bgcolor='#E5ECF6'
|
||||
)
|
||||
st.plotly_chart(fig)
|
||||
else:
|
||||
st.warning("⚠️ No content types to display.")
|
||||
else:
|
||||
st.warning("⚠️ The 'sitemap_name' column is missing or empty.")
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"⚠️ Error during content type breakdown plotting: {e}")
|
||||
|
||||
|
||||
def plot_publishing_frequency(sitemap_df):
|
||||
"""
|
||||
Plots the publishing frequency by month.
|
||||
|
||||
Parameters:
|
||||
sitemap_df (DataFrame): The sitemap DataFrame.
|
||||
"""
|
||||
st.write("📆 Plotting publishing frequency by month...")
|
||||
|
||||
try:
|
||||
if not sitemap_df.empty:
|
||||
frequency_by_month = sitemap_df.index.to_period('M').value_counts().sort_index()
|
||||
frequency_by_month.index = frequency_by_month.index.astype(str)
|
||||
|
||||
fig = go.Figure()
|
||||
fig.add_bar(x=frequency_by_month.index, y=frequency_by_month.values, name='Publishing Frequency')
|
||||
fig.update_layout(
|
||||
title='Publishing Frequency by Month',
|
||||
xaxis_title='Month',
|
||||
yaxis_title='Number of Articles',
|
||||
paper_bgcolor='#E5ECF6'
|
||||
)
|
||||
st.plotly_chart(fig)
|
||||
else:
|
||||
st.warning("⚠️ No data available to plot publishing frequency.")
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"⚠️ Error during publishing frequency plotting: {e}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
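The same trend analysis can be reproduced without Streamlit, using advertools and pandas directly. A minimal sketch with a placeholder sitemap URL, assuming the sitemap exposes `lastmod` values:

```python
import advertools as adv
import pandas as pd

sitemap_df = adv.sitemap_to_df("https://www.example.com/sitemap.xml")  # placeholder
sitemap_df = sitemap_df.dropna(subset=["lastmod"])
sitemap_df["lastmod"] = pd.to_datetime(sitemap_df["lastmod"])
sitemap_df = sitemap_df.set_index("lastmod")

# Articles published per month, mirroring analyze_content_trends()
per_month = sitemap_df.resample("M").size()
print(per_month.tail())
```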
22
ToBeMigrated/ai_seo_tools/technical_seo_crawler/__init__.py
Normal file
@@ -0,0 +1,22 @@
"""
Technical SEO Crawler Package.

This package provides comprehensive technical SEO analysis capabilities
with advertools integration and AI-powered recommendations.

Components:
- TechnicalSEOCrawler: Core crawler with technical analysis
- TechnicalSEOCrawlerUI: Streamlit interface for the crawler
"""

from .crawler import TechnicalSEOCrawler
from .ui import TechnicalSEOCrawlerUI, render_technical_seo_crawler

__version__ = "1.0.0"
__author__ = "ALwrity"

__all__ = [
    'TechnicalSEOCrawler',
    'TechnicalSEOCrawlerUI',
    'render_technical_seo_crawler'
]
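Typical usage of the package exports, as a sketch; it assumes the package and its `lib.*` dependencies are on the import path, and the crawler reports progress through Streamlit widgets, so it is normally driven from a Streamlit app:

```python
from ToBeMigrated.ai_seo_tools.technical_seo_crawler import TechnicalSEOCrawler

crawler = TechnicalSEOCrawler()
results = crawler.analyze_website_technical_seo(
    "https://example.com",  # placeholder URL
    crawl_depth=2,
    max_pages=100,
)
print(results.get("crawl_overview", {}).get("pages_crawled"))
```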
709
ToBeMigrated/ai_seo_tools/technical_seo_crawler/crawler.py
Normal file
@@ -0,0 +1,709 @@
|
||||
"""
|
||||
Comprehensive Technical SEO Crawler using Advertools Integration.
|
||||
|
||||
This module provides advanced site-wide technical SEO analysis using:
|
||||
- adv.crawl: Complete website crawling and analysis
|
||||
- adv.crawl_headers: HTTP headers and server analysis
|
||||
- adv.crawl_images: Image optimization analysis
|
||||
- adv.url_to_df: URL structure optimization
|
||||
- AI-powered technical recommendations
|
||||
"""
|
||||
|
||||
import streamlit as st
|
||||
import pandas as pd
|
||||
import advertools as adv
|
||||
from typing import Dict, Any, List, Optional, Tuple
|
||||
from urllib.parse import urlparse, urljoin
|
||||
import tempfile
|
||||
import os
|
||||
from datetime import datetime
|
||||
import json
|
||||
from collections import Counter, defaultdict
|
||||
from loguru import logger
|
||||
import numpy as np
|
||||
|
||||
# Import existing modules
|
||||
from lib.gpt_providers.text_generation.main_text_generation import llm_text_gen
|
||||
from lib.utils.website_analyzer.analyzer import WebsiteAnalyzer
|
||||
|
||||
class TechnicalSEOCrawler:
|
||||
"""Comprehensive technical SEO crawler with advertools integration."""
|
||||
|
||||
def __init__(self):
|
||||
"""Initialize the technical SEO crawler."""
|
||||
self.temp_dir = tempfile.mkdtemp()
|
||||
logger.info("TechnicalSEOCrawler initialized")
|
||||
|
||||
def analyze_website_technical_seo(self, website_url: str, crawl_depth: int = 3,
|
||||
max_pages: int = 500) -> Dict[str, Any]:
|
||||
"""
|
||||
Perform comprehensive technical SEO analysis.
|
||||
|
||||
Args:
|
||||
website_url: Website URL to analyze
|
||||
crawl_depth: How deep to crawl (1-5)
|
||||
max_pages: Maximum pages to crawl (50-1000)
|
||||
|
||||
Returns:
|
||||
Comprehensive technical SEO analysis results
|
||||
"""
|
||||
try:
|
||||
st.info("🚀 Starting Comprehensive Technical SEO Crawl...")
|
||||
|
||||
# Initialize results structure
|
||||
results = {
|
||||
'analysis_timestamp': datetime.utcnow().isoformat(),
|
||||
'website_url': website_url,
|
||||
'crawl_settings': {
|
||||
'depth': crawl_depth,
|
||||
'max_pages': max_pages
|
||||
},
|
||||
'crawl_overview': {},
|
||||
'technical_issues': {},
|
||||
'performance_analysis': {},
|
||||
'content_analysis': {},
|
||||
'url_structure': {},
|
||||
'image_optimization': {},
|
||||
'security_headers': {},
|
||||
'mobile_seo': {},
|
||||
'structured_data': {},
|
||||
'ai_recommendations': {}
|
||||
}
|
||||
|
||||
# Phase 1: Core Website Crawl
|
||||
with st.expander("🕷️ Website Crawling Progress", expanded=True):
|
||||
crawl_data = self._perform_comprehensive_crawl(website_url, crawl_depth, max_pages)
|
||||
results['crawl_overview'] = crawl_data
|
||||
st.success(f"✅ Crawled {crawl_data.get('pages_crawled', 0)} pages")
|
||||
|
||||
# Phase 2: Technical Issues Detection
|
||||
with st.expander("🔍 Technical Issues Analysis", expanded=True):
|
||||
technical_issues = self._analyze_technical_issues(crawl_data)
|
||||
results['technical_issues'] = technical_issues
|
||||
st.success("✅ Identified technical SEO issues")
|
||||
|
||||
# Phase 3: Performance Analysis
|
||||
with st.expander("⚡ Performance Analysis", expanded=True):
|
||||
performance = self._analyze_performance_metrics(crawl_data)
|
||||
results['performance_analysis'] = performance
|
||||
st.success("✅ Analyzed website performance metrics")
|
||||
|
||||
# Phase 4: Content & Structure Analysis
|
||||
with st.expander("📊 Content Structure Analysis", expanded=True):
|
||||
content_analysis = self._analyze_content_structure(crawl_data)
|
||||
results['content_analysis'] = content_analysis
|
||||
st.success("✅ Analyzed content structure and optimization")
|
||||
|
||||
# Phase 5: URL Structure Optimization
|
||||
with st.expander("🔗 URL Structure Analysis", expanded=True):
|
||||
url_analysis = self._analyze_url_structure(crawl_data)
|
||||
results['url_structure'] = url_analysis
|
||||
st.success("✅ Analyzed URL structure and patterns")
|
||||
|
||||
# Phase 6: Image SEO Analysis
|
||||
with st.expander("🖼️ Image SEO Analysis", expanded=True):
|
||||
image_analysis = self._analyze_image_seo(website_url)
|
||||
results['image_optimization'] = image_analysis
|
||||
st.success("✅ Analyzed image optimization")
|
||||
|
||||
# Phase 7: Security & Headers Analysis
|
||||
with st.expander("🛡️ Security Headers Analysis", expanded=True):
|
||||
security_analysis = self._analyze_security_headers(website_url)
|
||||
results['security_headers'] = security_analysis
|
||||
st.success("✅ Analyzed security headers")
|
||||
|
||||
# Phase 8: Mobile SEO Analysis
|
||||
with st.expander("📱 Mobile SEO Analysis", expanded=True):
|
||||
mobile_analysis = self._analyze_mobile_seo(crawl_data)
|
||||
results['mobile_seo'] = mobile_analysis
|
||||
st.success("✅ Analyzed mobile SEO factors")
|
||||
|
||||
# Phase 9: AI-Powered Recommendations
|
||||
with st.expander("🤖 AI Technical Recommendations", expanded=True):
|
||||
ai_recommendations = self._generate_technical_recommendations(results)
|
||||
results['ai_recommendations'] = ai_recommendations
|
||||
st.success("✅ Generated AI-powered technical recommendations")
|
||||
|
||||
return results
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Error in technical SEO analysis: {str(e)}"
|
||||
logger.error(error_msg, exc_info=True)
|
||||
st.error(error_msg)
|
||||
return {'error': error_msg}
|
||||
|
||||
def _perform_comprehensive_crawl(self, website_url: str, depth: int, max_pages: int) -> Dict[str, Any]:
|
||||
"""Perform comprehensive website crawl using adv.crawl."""
|
||||
try:
|
||||
st.info("🕷️ Crawling website for comprehensive analysis...")
|
||||
|
||||
# Create crawl output file
|
||||
crawl_file = os.path.join(self.temp_dir, "technical_crawl.jl")
|
||||
|
||||
# Configure crawl settings for technical SEO
|
||||
custom_settings = {
|
||||
'DEPTH_LIMIT': depth,
|
||||
'CLOSESPIDER_PAGECOUNT': max_pages,
|
||||
'DOWNLOAD_DELAY': 0.5, # Be respectful
|
||||
'CONCURRENT_REQUESTS': 8,
|
||||
'ROBOTSTXT_OBEY': True,
|
||||
'USER_AGENT': 'ALwrity-TechnicalSEO-Crawler/1.0',
|
||||
'COOKIES_ENABLED': False,
|
||||
'TELNETCONSOLE_ENABLED': False,
|
||||
'LOG_LEVEL': 'WARNING'
|
||||
}
|
||||
|
||||
# Start crawl
|
||||
adv.crawl(
|
||||
url_list=[website_url],
|
||||
output_file=crawl_file,
|
||||
follow_links=True,
|
||||
custom_settings=custom_settings
|
||||
)
|
||||
|
||||
# Read and process crawl results
|
||||
if os.path.exists(crawl_file):
|
||||
crawl_df = pd.read_json(crawl_file, lines=True)
|
||||
|
||||
# Basic crawl statistics
|
||||
crawl_overview = {
|
||||
'pages_crawled': len(crawl_df),
|
||||
'status_codes': crawl_df['status'].value_counts().to_dict(),
|
||||
'crawl_file_path': crawl_file,
|
||||
'crawl_dataframe': crawl_df,
|
||||
'domains_found': crawl_df['url'].apply(lambda x: urlparse(x).netloc).nunique(),
|
||||
'avg_response_time': crawl_df.get('download_latency', pd.Series()).mean(),
|
||||
'total_content_size': crawl_df.get('size', pd.Series()).sum()
|
||||
}
|
||||
|
||||
return crawl_overview
|
||||
else:
|
||||
st.error("Crawl file not created")
|
||||
return {}
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"Error in website crawl: {str(e)}")
|
||||
return {}
|
||||
|
||||
def _analyze_technical_issues(self, crawl_data: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Analyze technical SEO issues from crawl data."""
|
||||
try:
|
||||
st.info("🔍 Detecting technical SEO issues...")
|
||||
|
||||
if 'crawl_dataframe' not in crawl_data:
|
||||
return {}
|
||||
|
||||
df = crawl_data['crawl_dataframe']
|
||||
|
||||
technical_issues = {
|
||||
'http_errors': {},
|
||||
'redirect_issues': {},
|
||||
'duplicate_content': {},
|
||||
'missing_elements': {},
|
||||
'page_speed_issues': {},
|
||||
'crawlability_issues': {}
|
||||
}
|
||||
|
||||
# HTTP Status Code Issues
|
||||
error_codes = df[df['status'] >= 400]['status'].value_counts().to_dict()
|
||||
technical_issues['http_errors'] = {
|
||||
'total_errors': len(df[df['status'] >= 400]),
|
||||
'error_breakdown': error_codes,
|
||||
'error_pages': df[df['status'] >= 400][['url', 'status']].to_dict('records')[:50]
|
||||
}
|
||||
|
||||
# Redirect Analysis
|
||||
redirects = df[df['status'].isin([301, 302, 303, 307, 308])]
|
||||
technical_issues['redirect_issues'] = {
|
||||
'total_redirects': len(redirects),
|
||||
'redirect_chains': self._find_redirect_chains(redirects),
|
||||
'redirect_types': redirects['status'].value_counts().to_dict()
|
||||
}
|
||||
|
||||
# Duplicate Content Detection
|
||||
if 'title' in df.columns:
|
||||
duplicate_titles = df['title'].value_counts()
|
||||
duplicate_titles = duplicate_titles[duplicate_titles > 1]
|
||||
|
||||
technical_issues['duplicate_content'] = {
|
||||
'duplicate_titles': len(duplicate_titles),
|
||||
'duplicate_title_groups': duplicate_titles.to_dict(),
|
||||
'pages_with_duplicate_titles': df[df['title'].isin(duplicate_titles.index)][['url', 'title']].to_dict('records')[:20]
|
||||
}
|
||||
|
||||
# Missing Elements Analysis
|
||||
missing_elements = {
|
||||
'missing_titles': len(df[(df['title'].isna()) | (df['title'] == '')]) if 'title' in df.columns else 0,
|
||||
'missing_meta_desc': len(df[(df['meta_desc'].isna()) | (df['meta_desc'] == '')]) if 'meta_desc' in df.columns else 0,
|
||||
'missing_h1': len(df[(df['h1'].isna()) | (df['h1'] == '')]) if 'h1' in df.columns else 0
|
||||
}
|
||||
technical_issues['missing_elements'] = missing_elements
|
||||
|
||||
# Page Speed Issues
|
||||
if 'download_latency' in df.columns:
|
||||
slow_pages = df[df['download_latency'] > 3.0] # Pages taking >3s
|
||||
technical_issues['page_speed_issues'] = {
|
||||
'slow_pages_count': len(slow_pages),
|
||||
'avg_load_time': df['download_latency'].mean(),
|
||||
'slowest_pages': slow_pages.nlargest(10, 'download_latency')[['url', 'download_latency']].to_dict('records')
|
||||
}
|
||||
|
||||
return technical_issues
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"Error analyzing technical issues: {str(e)}")
|
||||
return {}
|
||||
|
||||
def _analyze_performance_metrics(self, crawl_data: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Analyze website performance metrics."""
|
||||
try:
|
||||
st.info("⚡ Analyzing performance metrics...")
|
||||
|
||||
if 'crawl_dataframe' not in crawl_data:
|
||||
return {}
|
||||
|
||||
df = crawl_data['crawl_dataframe']
|
||||
|
||||
performance = {
|
||||
'load_time_analysis': {},
|
||||
'content_size_analysis': {},
|
||||
'server_performance': {},
|
||||
'optimization_opportunities': []
|
||||
}
|
||||
|
||||
# Load Time Analysis
|
||||
if 'download_latency' in df.columns:
|
||||
load_times = df['download_latency'].dropna()
|
||||
performance['load_time_analysis'] = {
|
||||
'avg_load_time': load_times.mean(),
|
||||
'median_load_time': load_times.median(),
|
||||
'p95_load_time': load_times.quantile(0.95),
|
||||
'fastest_page': load_times.min(),
|
||||
'slowest_page': load_times.max(),
|
||||
'pages_over_3s': len(load_times[load_times > 3]),
|
||||
'performance_distribution': {
|
||||
'fast_pages': len(load_times[load_times <= 1]),
|
||||
'moderate_pages': len(load_times[(load_times > 1) & (load_times <= 3)]),
|
||||
'slow_pages': len(load_times[load_times > 3])
|
||||
}
|
||||
}
|
||||
|
||||
# Content Size Analysis
|
||||
if 'size' in df.columns:
|
||||
sizes = df['size'].dropna()
|
||||
performance['content_size_analysis'] = {
|
||||
'avg_page_size': sizes.mean(),
|
||||
'median_page_size': sizes.median(),
|
||||
'largest_page': sizes.max(),
|
||||
'smallest_page': sizes.min(),
|
||||
'pages_over_1mb': len(sizes[sizes > 1048576]), # 1MB
|
||||
'total_content_size': sizes.sum()
|
||||
}
|
||||
|
||||
# Server Performance
|
||||
status_codes = df['status'].value_counts()
|
||||
total_pages = len(df)
|
||||
performance['server_performance'] = {
|
||||
'success_rate': status_codes.get(200, 0) / total_pages * 100,
|
||||
'error_rate': sum(status_codes.get(code, 0) for code in range(400, 600)) / total_pages * 100,
|
||||
'redirect_rate': sum(status_codes.get(code, 0) for code in [301, 302, 303, 307, 308]) / total_pages * 100
|
||||
}
|
||||
|
||||
return performance
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"Error analyzing performance: {str(e)}")
|
||||
return {}
|
||||
|
||||
def _analyze_content_structure(self, crawl_data: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Analyze content structure and SEO elements."""
|
||||
try:
|
||||
st.info("📊 Analyzing content structure...")
|
||||
|
||||
if 'crawl_dataframe' not in crawl_data:
|
||||
return {}
|
||||
|
||||
df = crawl_data['crawl_dataframe']
|
||||
|
||||
content_analysis = {
|
||||
'title_analysis': {},
|
||||
'meta_description_analysis': {},
|
||||
'heading_structure': {},
|
||||
'internal_linking': {},
|
||||
'content_optimization': {}
|
||||
}
|
||||
|
||||
# Title Analysis
|
||||
if 'title' in df.columns:
|
||||
titles = df['title'].dropna()
|
||||
title_lengths = titles.str.len()
|
||||
|
||||
content_analysis['title_analysis'] = {
|
||||
'avg_title_length': title_lengths.mean(),
|
||||
'title_length_distribution': {
|
||||
'too_short': len(title_lengths[title_lengths < 30]),
|
||||
'optimal': len(title_lengths[(title_lengths >= 30) & (title_lengths <= 60)]),
|
||||
'too_long': len(title_lengths[title_lengths > 60])
|
||||
},
|
||||
'duplicate_titles': len(titles.value_counts()[titles.value_counts() > 1]),
|
||||
'missing_titles': len(df) - len(titles)
|
||||
}
|
||||
|
||||
# Meta Description Analysis
|
||||
if 'meta_desc' in df.columns:
|
||||
meta_descs = df['meta_desc'].dropna()
|
||||
meta_lengths = meta_descs.str.len()
|
||||
|
||||
content_analysis['meta_description_analysis'] = {
|
||||
'avg_meta_length': meta_lengths.mean(),
|
||||
'meta_length_distribution': {
|
||||
'too_short': len(meta_lengths[meta_lengths < 120]),
|
||||
'optimal': len(meta_lengths[(meta_lengths >= 120) & (meta_lengths <= 160)]),
|
||||
'too_long': len(meta_lengths[meta_lengths > 160])
|
||||
},
|
||||
'missing_meta_descriptions': len(df) - len(meta_descs)
|
||||
}
|
||||
|
||||
# Heading Structure Analysis
|
||||
heading_cols = [col for col in df.columns if col.startswith('h') and col[1:].isdigit()]
|
||||
if heading_cols:
|
||||
heading_analysis = {}
|
||||
for col in heading_cols:
|
||||
headings = df[col].dropna()
|
||||
heading_analysis[f'{col}_usage'] = {
|
||||
'pages_with_heading': len(headings),
|
||||
'usage_rate': len(headings) / len(df) * 100,
|
||||
'avg_length': headings.str.len().mean() if len(headings) > 0 else 0
|
||||
}
|
||||
content_analysis['heading_structure'] = heading_analysis
|
||||
|
||||
# Internal Linking Analysis
|
||||
if 'links_internal' in df.columns:
|
||||
internal_links = df['links_internal'].apply(lambda x: len(x) if isinstance(x, list) else 0)
|
||||
content_analysis['internal_linking'] = {
|
||||
'avg_internal_links': internal_links.mean(),
|
||||
'pages_with_no_internal_links': len(internal_links[internal_links == 0]),
|
||||
'max_internal_links': internal_links.max(),
|
||||
'internal_link_distribution': internal_links.describe().to_dict()
|
||||
}
|
||||
|
||||
return content_analysis
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"Error analyzing content structure: {str(e)}")
|
||||
return {}
|
||||
|
||||
def _analyze_url_structure(self, crawl_data: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Analyze URL structure and optimization using adv.url_to_df."""
|
||||
try:
|
||||
st.info("🔗 Analyzing URL structure...")
|
||||
|
||||
if 'crawl_dataframe' not in crawl_data:
|
||||
return {}
|
||||
|
||||
df = crawl_data['crawl_dataframe']
|
||||
urls = df['url'].tolist()
|
||||
|
||||
# Use advertools to analyze URL structure
|
||||
url_df = adv.url_to_df(urls)
|
||||
|
||||
url_analysis = {
|
||||
'url_length_analysis': {},
|
||||
'url_structure_patterns': {},
|
||||
'url_optimization': {},
|
||||
'path_analysis': {}
|
||||
}
|
||||
|
||||
# URL Length Analysis
|
||||
url_lengths = url_df['url'].str.len()
|
||||
url_analysis['url_length_analysis'] = {
|
||||
'avg_url_length': url_lengths.mean(),
|
||||
'max_url_length': url_lengths.max(),
|
||||
'long_urls_count': len(url_lengths[url_lengths > 100]),
|
||||
'url_length_distribution': url_lengths.describe().to_dict()
|
||||
}
|
||||
|
||||
# Path Depth Analysis
|
||||
if 'dir_1' in url_df.columns:
|
||||
path_depths = url_df.apply(lambda row: sum(1 for i in range(1, 10) if f'dir_{i}' in row and pd.notna(row[f'dir_{i}'])), axis=1)
|
||||
url_analysis['path_analysis'] = {
|
||||
'avg_path_depth': path_depths.mean(),
|
||||
'max_path_depth': path_depths.max(),
|
||||
'deep_paths_count': len(path_depths[path_depths > 4]),
|
||||
'path_depth_distribution': path_depths.value_counts().to_dict()
|
||||
}
|
||||
|
||||
# URL Structure Patterns
|
||||
domains = url_df['netloc'].value_counts()
|
||||
schemes = url_df['scheme'].value_counts()
|
||||
|
||||
url_analysis['url_structure_patterns'] = {
|
||||
'domains_found': domains.to_dict(),
|
||||
'schemes_used': schemes.to_dict(),
|
||||
'subdomain_usage': len(url_df[url_df['netloc'].str.count(r'\.') > 1]),  # approximate: netlocs with more than one dot
|
||||
'https_usage': schemes.get('https', 0) / len(url_df) * 100
|
||||
}
|
||||
|
||||
# URL Optimization Issues
|
||||
optimization_issues = []
|
||||
|
||||
# Check for non-HTTPS URLs
|
||||
if schemes.get('http', 0) > 0:
|
||||
optimization_issues.append(f"{schemes.get('http', 0)} pages not using HTTPS")
|
||||
|
||||
# Check for long URLs
|
||||
long_urls = len(url_lengths[url_lengths > 100])
|
||||
if long_urls > 0:
|
||||
optimization_issues.append(f"{long_urls} URLs are too long (>100 characters)")
|
||||
|
||||
# Check for deep paths
|
||||
if 'path_analysis' in url_analysis:
|
||||
deep_paths = url_analysis['path_analysis']['deep_paths_count']
|
||||
if deep_paths > 0:
|
||||
optimization_issues.append(f"{deep_paths} URLs have deep path structures (>4 levels)")
|
||||
|
||||
url_analysis['url_optimization'] = {
|
||||
'issues_found': len(optimization_issues),
|
||||
'optimization_recommendations': optimization_issues
|
||||
}
|
||||
|
||||
return url_analysis
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"Error analyzing URL structure: {str(e)}")
|
||||
return {}
|
||||
|
||||
def _analyze_image_seo(self, website_url: str) -> Dict[str, Any]:
|
||||
"""Analyze image SEO using adv.crawl_images."""
|
||||
try:
|
||||
st.info("🖼️ Analyzing image SEO...")
|
||||
|
||||
# Create image crawl output file
|
||||
image_file = os.path.join(self.temp_dir, "image_crawl.jl")
|
||||
|
||||
# Crawl images
|
||||
adv.crawl_images(
|
||||
url_list=[website_url],
|
||||
output_file=image_file,
|
||||
custom_settings={
|
||||
'DEPTH_LIMIT': 2,
|
||||
'CLOSESPIDER_PAGECOUNT': 100,
|
||||
'DOWNLOAD_DELAY': 1
|
||||
}
|
||||
)
|
||||
|
||||
image_analysis = {
|
||||
'image_count': 0,
|
||||
'alt_text_analysis': {},
|
||||
'image_format_analysis': {},
|
||||
'image_size_analysis': {},
|
||||
'optimization_opportunities': []
|
||||
}
|
||||
|
||||
if os.path.exists(image_file):
|
||||
image_df = pd.read_json(image_file, lines=True)
|
||||
|
||||
image_analysis['image_count'] = len(image_df)
|
||||
|
||||
# Alt text analysis
|
||||
if 'img_alt' in image_df.columns:
|
||||
alt_texts = image_df['img_alt'].dropna()
|
||||
missing_alt = len(image_df) - len(alt_texts)
|
||||
|
||||
image_analysis['alt_text_analysis'] = {
|
||||
'images_with_alt': len(alt_texts),
|
||||
'images_missing_alt': missing_alt,
|
||||
'alt_text_coverage': len(alt_texts) / len(image_df) * 100,
|
||||
'avg_alt_length': alt_texts.str.len().mean() if len(alt_texts) > 0 else 0
|
||||
}
|
||||
|
||||
# Image format analysis
|
||||
if 'img_src' in image_df.columns:
|
||||
# Extract file extensions
|
||||
extensions = image_df['img_src'].str.extract(r'\.([a-zA-Z]{2,4})(?:\?|$)')
|
||||
format_counts = extensions[0].value_counts()
|
||||
|
||||
image_analysis['image_format_analysis'] = {
|
||||
'format_distribution': format_counts.to_dict(),
|
||||
'modern_format_usage': format_counts.get('webp', 0) + format_counts.get('avif', 0)
|
||||
}
|
||||
|
||||
return image_analysis
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"Error analyzing images: {str(e)}")
|
||||
return {}
|
||||
|
||||
def _analyze_security_headers(self, website_url: str) -> Dict[str, Any]:
|
||||
"""Analyze security headers using adv.crawl_headers."""
|
||||
try:
|
||||
st.info("🛡️ Analyzing security headers...")
|
||||
|
||||
# Create headers output file
|
||||
headers_file = os.path.join(self.temp_dir, "security_headers.jl")
|
||||
|
||||
# Crawl headers
|
||||
adv.crawl_headers([website_url], output_file=headers_file)
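# Assumption: the crawl output exposes response headers as 'resp_headers_<Header-Name>'
# columns (advertools' usual naming); exact header casing depends on the server, so a
# case-insensitive column lookup would be more robust than the literal names used below.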
|
||||
|
||||
security_analysis = {
|
||||
'security_headers_present': {},
|
||||
'security_score': 0,
|
||||
'security_recommendations': []
|
||||
}
|
||||
|
||||
if os.path.exists(headers_file):
|
||||
headers_df = pd.read_json(headers_file, lines=True)
|
||||
|
||||
# Check for important security headers
|
||||
security_headers = {
|
||||
'X-Frame-Options': 'resp_headers_X-Frame-Options',
|
||||
'X-Content-Type-Options': 'resp_headers_X-Content-Type-Options',
|
||||
'X-XSS-Protection': 'resp_headers_X-XSS-Protection',
|
||||
'Strict-Transport-Security': 'resp_headers_Strict-Transport-Security',
|
||||
'Content-Security-Policy': 'resp_headers_Content-Security-Policy',
|
||||
'Referrer-Policy': 'resp_headers_Referrer-Policy'
|
||||
}
|
||||
|
||||
headers_present = {}
|
||||
for header_name, column_name in security_headers.items():
|
||||
is_present = column_name in headers_df.columns and headers_df[column_name].notna().any()
|
||||
headers_present[header_name] = is_present
|
||||
|
||||
security_analysis['security_headers_present'] = headers_present
|
||||
|
||||
# Calculate security score
|
||||
present_count = sum(headers_present.values())
|
||||
security_analysis['security_score'] = (present_count / len(security_headers)) * 100
|
||||
|
||||
# Generate recommendations
|
||||
recommendations = []
|
||||
for header_name, is_present in headers_present.items():
|
||||
if not is_present:
|
||||
recommendations.append(f"Add {header_name} header for improved security")
|
||||
|
||||
security_analysis['security_recommendations'] = recommendations
|
||||
|
||||
return security_analysis
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"Error analyzing security headers: {str(e)}")
|
||||
return {}
|
||||
|
||||
def _analyze_mobile_seo(self, crawl_data: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Analyze mobile SEO factors."""
|
||||
try:
|
||||
st.info("📱 Analyzing mobile SEO factors...")
|
||||
|
||||
if 'crawl_dataframe' not in crawl_data:
|
||||
return {}
|
||||
|
||||
df = crawl_data['crawl_dataframe']
|
||||
|
||||
mobile_analysis = {
|
||||
'viewport_analysis': {},
|
||||
'mobile_optimization': {},
|
||||
'responsive_design_indicators': {}
|
||||
}
|
||||
|
||||
# Viewport meta tag analysis
|
||||
if 'viewport' in df.columns:
|
||||
viewport_present = df['viewport'].notna().sum()
|
||||
mobile_analysis['viewport_analysis'] = {
|
||||
'pages_with_viewport': viewport_present,
|
||||
'viewport_coverage': viewport_present / len(df) * 100,
|
||||
'pages_missing_viewport': len(df) - viewport_present
|
||||
}
|
||||
|
||||
# Check for mobile-specific meta tags and indicators
|
||||
mobile_indicators = []
|
||||
|
||||
# Check for touch icons
|
||||
if any('touch-icon' in col for col in df.columns):
|
||||
mobile_indicators.append("Touch icons configured")
|
||||
|
||||
# Check for responsive design indicators in content
|
||||
# This is a simplified check - in practice, you'd analyze CSS and page structure
|
||||
mobile_analysis['mobile_optimization'] = {
|
||||
'mobile_indicators_found': len(mobile_indicators),
|
||||
'mobile_indicators': mobile_indicators
|
||||
}
|
||||
|
||||
return mobile_analysis
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"Error analyzing mobile SEO: {str(e)}")
|
||||
return {}
|
||||
|
||||
def _generate_technical_recommendations(self, results: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Generate AI-powered technical SEO recommendations."""
|
||||
try:
|
||||
st.info("🤖 Generating technical recommendations...")
|
||||
|
||||
# Prepare technical analysis summary for AI
|
||||
technical_summary = {
|
||||
'website_url': results.get('website_url', ''),
|
||||
'pages_crawled': results.get('crawl_overview', {}).get('pages_crawled', 0),
|
||||
'error_count': results.get('technical_issues', {}).get('http_errors', {}).get('total_errors', 0),
|
||||
'avg_load_time': results.get('performance_analysis', {}).get('load_time_analysis', {}).get('avg_load_time', 0),
|
||||
'security_score': results.get('security_headers', {}).get('security_score', 0),
|
||||
'missing_titles': results.get('content_analysis', {}).get('title_analysis', {}).get('missing_titles', 0),
|
||||
'missing_meta_desc': results.get('content_analysis', {}).get('meta_description_analysis', {}).get('missing_meta_descriptions', 0)
|
||||
}
|
||||
|
||||
# Generate AI recommendations
|
||||
prompt = f"""
|
||||
As a technical SEO expert, analyze this comprehensive website audit and provide prioritized recommendations:
|
||||
|
||||
WEBSITE: {technical_summary['website_url']}
|
||||
PAGES ANALYZED: {technical_summary['pages_crawled']}
|
||||
|
||||
TECHNICAL ISSUES:
|
||||
- HTTP Errors: {technical_summary['error_count']}
|
||||
- Average Load Time: {technical_summary['avg_load_time']:.2f}s
|
||||
- Security Score: {technical_summary['security_score']:.1f}%
|
||||
- Missing Titles: {technical_summary['missing_titles']}
|
||||
- Missing Meta Descriptions: {technical_summary['missing_meta_desc']}
|
||||
|
||||
PROVIDE:
|
||||
1. Critical Issues (Fix Immediately)
|
||||
2. High Priority Optimizations
|
||||
3. Medium Priority Improvements
|
||||
4. Long-term Technical Strategy
|
||||
5. Specific Implementation Steps
|
||||
6. Expected Impact Assessment
|
||||
|
||||
Format as JSON with clear priorities and actionable recommendations.
|
||||
"""
|
||||
|
||||
ai_response = llm_text_gen(
|
||||
prompt=prompt,
|
||||
system_prompt="You are a senior technical SEO specialist with expertise in website optimization, Core Web Vitals, and search engine best practices.",
|
||||
response_format="json_object"
|
||||
)
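# Assumption: llm_text_gen returns a parsed dict when response_format="json_object".
# If it actually returns a raw JSON string, it should be passed through json.loads()
# here so the UI tabs (critical_issues, high_priority, ...) get the structure they expect.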
|
||||
|
||||
if ai_response:
|
||||
return ai_response
|
||||
else:
|
||||
return {'recommendations': ['AI recommendations temporarily unavailable']}
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"Error generating recommendations: {str(e)}")
|
||||
return {}
|
||||
|
||||
def _find_redirect_chains(self, redirects_df: pd.DataFrame) -> List[Dict[str, Any]]:
|
||||
"""Find redirect chains in the crawled data."""
|
||||
# Simplified redirect chain detection
|
||||
# In a full implementation, you'd trace the redirect paths
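# If the crawl output carries 'redirect_urls' / 'redirect_times' columns (advertools
# usually emits them when redirects occur), those could be used to reconstruct real
# chains instead of grouping by status code as done below.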
|
||||
redirect_chains = []
|
||||
|
||||
if len(redirects_df) > 0:
|
||||
# Group redirects by status code
|
||||
for status_code in redirects_df['status'].unique():
|
||||
status_redirects = redirects_df[redirects_df['status'] == status_code]
|
||||
redirect_chains.append({
|
||||
'status_code': int(status_code),
|
||||
'count': len(status_redirects),
|
||||
'examples': status_redirects['url'].head(5).tolist()
|
||||
})
|
||||
|
||||
return redirect_chains
|
||||
968
ToBeMigrated/ai_seo_tools/technical_seo_crawler/ui.py
Normal file
@@ -0,0 +1,968 @@
|
||||
"""
|
||||
Technical SEO Crawler UI with Comprehensive Analysis Dashboard.
|
||||
|
||||
This module provides a professional Streamlit interface for the Technical SEO Crawler
|
||||
with detailed analysis results, visualization, and export capabilities.
|
||||
"""
|
||||
|
||||
import streamlit as st
|
||||
import pandas as pd
|
||||
from typing import Dict, Any, List
|
||||
import json
|
||||
from datetime import datetime
|
||||
import io
|
||||
import base64
|
||||
import plotly.express as px
|
||||
import plotly.graph_objects as go
|
||||
from plotly.subplots import make_subplots
|
||||
|
||||
from .crawler import TechnicalSEOCrawler
|
||||
from lib.alwrity_ui.dashboard_styles import apply_dashboard_style, render_dashboard_header
|
||||
|
||||
class TechnicalSEOCrawlerUI:
|
||||
"""Professional UI for Technical SEO Crawler."""
|
||||
|
||||
def __init__(self):
|
||||
"""Initialize the Technical SEO Crawler UI."""
|
||||
self.crawler = TechnicalSEOCrawler()
|
||||
|
||||
# Apply dashboard styling
|
||||
apply_dashboard_style()
|
||||
|
||||
def render(self):
|
||||
"""Render the Technical SEO Crawler interface."""
|
||||
|
||||
# Enhanced dashboard header
|
||||
render_dashboard_header(
|
||||
"🔧 Technical SEO Crawler",
|
||||
"Comprehensive site-wide technical SEO analysis with AI-powered recommendations. Identify and fix technical issues that impact your search rankings."
|
||||
)
|
||||
|
||||
# Main content area
|
||||
with st.container():
|
||||
# Analysis input form
|
||||
self._render_crawler_form()
|
||||
|
||||
# Session state for results
|
||||
if 'technical_seo_results' in st.session_state and st.session_state.technical_seo_results:
|
||||
st.markdown("---")
|
||||
self._render_results_dashboard(st.session_state.technical_seo_results)
|
||||
|
||||
def _render_crawler_form(self):
|
||||
"""Render the crawler configuration form."""
|
||||
st.markdown("## 🚀 Configure Technical SEO Audit")
|
||||
|
||||
with st.form("technical_seo_crawler_form"):
|
||||
# Website URL input
|
||||
col1, col2 = st.columns([3, 1])
|
||||
|
||||
with col1:
|
||||
website_url = st.text_input(
|
||||
"🌐 Website URL to Audit",
|
||||
placeholder="https://yourwebsite.com",
|
||||
help="Enter the website URL for comprehensive technical SEO analysis"
|
||||
)
|
||||
|
||||
with col2:
|
||||
audit_type = st.selectbox(
|
||||
"🎯 Audit Type",
|
||||
options=["Standard", "Deep", "Quick"],
|
||||
help="Choose the depth of analysis"
|
||||
)
|
||||
|
||||
# Crawl configuration
|
||||
st.markdown("### ⚙️ Crawl Configuration")
|
||||
|
||||
col1, col2, col3 = st.columns(3)
|
||||
|
||||
with col1:
|
||||
if audit_type == "Quick":
|
||||
crawl_depth = st.slider("Crawl Depth", 1, 2, 1)
|
||||
max_pages = st.slider("Max Pages", 10, 100, 50)
|
||||
elif audit_type == "Deep":
|
||||
crawl_depth = st.slider("Crawl Depth", 1, 5, 4)
|
||||
max_pages = st.slider("Max Pages", 100, 1000, 500)
|
||||
else: # Standard
|
||||
crawl_depth = st.slider("Crawl Depth", 1, 4, 3)
|
||||
max_pages = st.slider("Max Pages", 50, 500, 200)
|
||||
|
||||
with col2:
|
||||
analyze_images = st.checkbox(
|
||||
"🖼️ Analyze Images",
|
||||
value=True,
|
||||
help="Include image SEO analysis"
|
||||
)
|
||||
|
||||
analyze_security = st.checkbox(
|
||||
"🛡️ Security Headers",
|
||||
value=True,
|
||||
help="Analyze security headers"
|
||||
)
|
||||
|
||||
with col3:
|
||||
analyze_mobile = st.checkbox(
|
||||
"📱 Mobile SEO",
|
||||
value=True,
|
||||
help="Include mobile SEO analysis"
|
||||
)
|
||||
|
||||
ai_recommendations = st.checkbox(
|
||||
"🤖 AI Recommendations",
|
||||
value=True,
|
||||
help="Generate AI-powered recommendations"
|
||||
)
|
||||
|
||||
# Analysis scope
|
||||
st.markdown("### 🎯 Analysis Scope")
|
||||
|
||||
analysis_options = st.multiselect(
|
||||
"Select Analysis Components",
|
||||
options=[
|
||||
"Technical Issues Detection",
|
||||
"Performance Analysis",
|
||||
"Content Structure Analysis",
|
||||
"URL Structure Optimization",
|
||||
"Internal Linking Analysis",
|
||||
"Duplicate Content Detection"
|
||||
],
|
||||
default=[
|
||||
"Technical Issues Detection",
|
||||
"Performance Analysis",
|
||||
"Content Structure Analysis"
|
||||
],
|
||||
help="Choose which analysis components to include"
|
||||
)
|
||||
|
||||
# Submit button
|
||||
submitted = st.form_submit_button(
|
||||
"🚀 Start Technical SEO Audit",
|
||||
use_container_width=True,
|
||||
type="primary"
|
||||
)
|
||||
|
||||
if submitted:
|
||||
# Validate inputs
|
||||
if not website_url or not website_url.startswith(('http://', 'https://')):
|
||||
st.error("❌ Please enter a valid website URL starting with http:// or https://")
|
||||
return
|
||||
|
||||
# Run technical SEO analysis
|
||||
self._run_technical_analysis(
|
||||
website_url=website_url,
|
||||
crawl_depth=crawl_depth,
|
||||
max_pages=max_pages,
|
||||
options={
|
||||
'analyze_images': analyze_images,
|
||||
'analyze_security': analyze_security,
|
||||
'analyze_mobile': analyze_mobile,
|
||||
'ai_recommendations': ai_recommendations,
|
||||
'analysis_scope': analysis_options
|
||||
}
|
||||
)
|
||||
|
||||
def _run_technical_analysis(self, website_url: str, crawl_depth: int,
|
||||
max_pages: int, options: Dict[str, Any]):
|
||||
"""Run the technical SEO analysis."""
|
||||
|
||||
try:
|
||||
with st.spinner("🔄 Running Comprehensive Technical SEO Audit..."):
|
||||
|
||||
# Initialize progress tracking
|
||||
progress_bar = st.progress(0)
|
||||
status_text = st.empty()
|
||||
|
||||
# Update progress
|
||||
progress_bar.progress(10)
|
||||
status_text.text("🚀 Initializing technical SEO crawler...")
|
||||
|
||||
# Run comprehensive analysis
|
||||
results = self.crawler.analyze_website_technical_seo(
|
||||
website_url=website_url,
|
||||
crawl_depth=crawl_depth,
|
||||
max_pages=max_pages
|
||||
)
|
||||
|
||||
progress_bar.progress(100)
|
||||
status_text.text("✅ Technical SEO audit complete!")
|
||||
|
||||
# Store results in session state
|
||||
st.session_state.technical_seo_results = results
|
||||
|
||||
# Clear progress indicators
|
||||
progress_bar.empty()
|
||||
status_text.empty()
|
||||
|
||||
if 'error' in results:
|
||||
st.error(f"❌ Analysis failed: {results['error']}")
|
||||
else:
|
||||
st.success("🎉 Technical SEO Audit completed successfully!")
|
||||
st.balloons()
|
||||
|
||||
# Rerun to show results
|
||||
st.rerun()
|
||||
|
||||
except Exception as e:
|
||||
st.error(f"❌ Error running technical analysis: {str(e)}")
|
||||
|
||||
def _render_results_dashboard(self, results: Dict[str, Any]):
|
||||
"""Render the comprehensive results dashboard."""
|
||||
|
||||
if 'error' in results:
|
||||
st.error(f"❌ Analysis Error: {results['error']}")
|
||||
return
|
||||
|
||||
# Results header
|
||||
st.markdown("## 📊 Technical SEO Audit Results")
|
||||
|
||||
# Key metrics overview
|
||||
self._render_metrics_overview(results)
|
||||
|
||||
# Detailed analysis tabs
|
||||
self._render_detailed_analysis(results)
|
||||
|
||||
# Export functionality
|
||||
self._render_export_options(results)
|
||||
|
||||
def _render_metrics_overview(self, results: Dict[str, Any]):
|
||||
"""Render key metrics overview."""
|
||||
|
||||
st.markdown("### 📈 Audit Overview")
|
||||
|
||||
# Create metrics columns
|
||||
col1, col2, col3, col4, col5, col6 = st.columns(6)
|
||||
|
||||
with col1:
|
||||
pages_crawled = results.get('crawl_overview', {}).get('pages_crawled', 0)
|
||||
st.metric(
|
||||
"🕷️ Pages Crawled",
|
||||
pages_crawled,
|
||||
help="Total pages analyzed"
|
||||
)
|
||||
|
||||
with col2:
|
||||
error_count = results.get('technical_issues', {}).get('http_errors', {}).get('total_errors', 0)
|
||||
st.metric(
|
||||
"❌ HTTP Errors",
|
||||
error_count,
|
||||
delta=f"-{error_count}" if error_count > 0 else None,
|
||||
help="Pages with HTTP errors (4xx, 5xx)"
|
||||
)
|
||||
|
||||
with col3:
|
||||
avg_load_time = results.get('performance_analysis', {}).get('load_time_analysis', {}).get('avg_load_time', 0)
|
||||
st.metric(
|
||||
"⚡ Avg Load Time",
|
||||
f"{avg_load_time:.2f}s",
|
||||
delta=f"+{avg_load_time:.2f}s" if avg_load_time > 3 else None,
|
||||
help="Average page load time"
|
||||
)
|
||||
|
||||
with col4:
|
||||
security_score = results.get('security_headers', {}).get('security_score', 0)
|
||||
st.metric(
|
||||
"🛡️ Security Score",
|
||||
f"{security_score:.0f}%",
|
||||
delta=f"{security_score:.0f}%" if security_score < 100 else None,
|
||||
help="Security headers implementation score"
|
||||
)
|
||||
|
||||
with col5:
|
||||
missing_titles = results.get('content_analysis', {}).get('title_analysis', {}).get('missing_titles', 0)
|
||||
st.metric(
|
||||
"📝 Missing Titles",
|
||||
missing_titles,
|
||||
delta=f"-{missing_titles}" if missing_titles > 0 else None,
|
||||
help="Pages without title tags"
|
||||
)
|
||||
|
||||
with col6:
|
||||
image_count = results.get('image_optimization', {}).get('image_count', 0)
|
||||
st.metric(
|
||||
"🖼️ Images Analyzed",
|
||||
image_count,
|
||||
help="Total images found and analyzed"
|
||||
)
|
||||
|
||||
# Analysis timestamp
|
||||
if results.get('analysis_timestamp'):
|
||||
timestamp = datetime.fromisoformat(results['analysis_timestamp'].replace('Z', '+00:00'))
|
||||
st.caption(f"📅 Audit completed: {timestamp.strftime('%Y-%m-%d %H:%M:%S UTC')}")
|
||||
|
||||
def _render_detailed_analysis(self, results: Dict[str, Any]):
|
||||
"""Render detailed analysis in tabs."""
|
||||
|
||||
# Create main analysis tabs
|
||||
tab1, tab2, tab3, tab4, tab5, tab6, tab7 = st.tabs([
|
||||
"🔍 Technical Issues",
|
||||
"⚡ Performance",
|
||||
"📊 Content Analysis",
|
||||
"🔗 URL Structure",
|
||||
"🖼️ Image SEO",
|
||||
"🛡️ Security",
|
||||
"🤖 AI Recommendations"
|
||||
])
|
||||
|
||||
with tab1:
|
||||
self._render_technical_issues(results.get('technical_issues', {}))
|
||||
|
||||
with tab2:
|
||||
self._render_performance_analysis(results.get('performance_analysis', {}))
|
||||
|
||||
with tab3:
|
||||
self._render_content_analysis(results.get('content_analysis', {}))
|
||||
|
||||
with tab4:
|
||||
self._render_url_structure(results.get('url_structure', {}))
|
||||
|
||||
with tab5:
|
||||
self._render_image_analysis(results.get('image_optimization', {}))
|
||||
|
||||
with tab6:
|
||||
self._render_security_analysis(results.get('security_headers', {}))
|
||||
|
||||
with tab7:
|
||||
self._render_ai_recommendations(results.get('ai_recommendations', {}))
|
||||
|
||||
def _render_technical_issues(self, technical_data: Dict[str, Any]):
|
||||
"""Render technical issues analysis."""
|
||||
|
||||
st.markdown("### 🔍 Technical SEO Issues")
|
||||
|
||||
if not technical_data:
|
||||
st.info("No technical issues data available")
|
||||
return
|
||||
|
||||
# HTTP Errors
|
||||
if technical_data.get('http_errors'):
|
||||
http_errors = technical_data['http_errors']
|
||||
|
||||
st.markdown("#### ❌ HTTP Status Code Errors")
|
||||
|
||||
if http_errors.get('total_errors', 0) > 0:
|
||||
st.error(f"Found {http_errors['total_errors']} pages with HTTP errors!")
|
||||
|
||||
# Error breakdown chart
|
||||
if http_errors.get('error_breakdown'):
|
||||
error_df = pd.DataFrame(
|
||||
list(http_errors['error_breakdown'].items()),
|
||||
columns=['Status Code', 'Count']
|
||||
)
|
||||
|
||||
fig = px.bar(error_df, x='Status Code', y='Count',
|
||||
title="HTTP Error Distribution")
|
||||
st.plotly_chart(fig, use_container_width=True)
|
||||
|
||||
# Error pages table
|
||||
if http_errors.get('error_pages'):
|
||||
st.markdown("**Pages with Errors:**")
|
||||
error_pages_df = pd.DataFrame(http_errors['error_pages'])
|
||||
st.dataframe(error_pages_df, use_container_width=True)
|
||||
else:
|
||||
st.success("✅ No HTTP errors found!")
|
||||
|
||||
# Redirect Issues
|
||||
if technical_data.get('redirect_issues'):
|
||||
redirect_data = technical_data['redirect_issues']
|
||||
|
||||
st.markdown("#### 🔄 Redirect Analysis")
|
||||
|
||||
total_redirects = redirect_data.get('total_redirects', 0)
|
||||
|
||||
if total_redirects > 0:
|
||||
st.warning(f"Found {total_redirects} redirect(s)")
|
||||
|
||||
# Redirect types
|
||||
if redirect_data.get('redirect_types'):
|
||||
redirect_df = pd.DataFrame(
|
||||
list(redirect_data['redirect_types'].items()),
|
||||
columns=['Redirect Type', 'Count']
|
||||
)
|
||||
st.bar_chart(redirect_df.set_index('Redirect Type'))
|
||||
else:
|
||||
st.success("✅ No redirects found")
|
||||
|
||||
# Duplicate Content
|
||||
if technical_data.get('duplicate_content'):
|
||||
duplicate_data = technical_data['duplicate_content']
|
||||
|
||||
st.markdown("#### 📋 Duplicate Content Issues")
|
||||
|
||||
duplicate_titles = duplicate_data.get('duplicate_titles', 0)
|
||||
|
||||
if duplicate_titles > 0:
|
||||
st.warning(f"Found {duplicate_titles} duplicate title(s)")
|
||||
|
||||
# Show duplicate title groups
|
||||
if duplicate_data.get('pages_with_duplicate_titles'):
|
||||
duplicate_df = pd.DataFrame(duplicate_data['pages_with_duplicate_titles'])
|
||||
st.dataframe(duplicate_df, use_container_width=True)
|
||||
else:
|
||||
st.success("✅ No duplicate titles found")
|
||||
|
||||
# Missing Elements
|
||||
if technical_data.get('missing_elements'):
|
||||
missing_data = technical_data['missing_elements']
|
||||
|
||||
st.markdown("#### 📝 Missing SEO Elements")
|
||||
|
||||
col1, col2, col3 = st.columns(3)
|
||||
|
||||
with col1:
|
||||
missing_titles = missing_data.get('missing_titles', 0)
|
||||
if missing_titles > 0:
|
||||
st.error(f"Missing Titles: {missing_titles}")
|
||||
else:
|
||||
st.success("All pages have titles ✅")
|
||||
|
||||
with col2:
|
||||
missing_meta = missing_data.get('missing_meta_desc', 0)
|
||||
if missing_meta > 0:
|
||||
st.error(f"Missing Meta Descriptions: {missing_meta}")
|
||||
else:
|
||||
st.success("All pages have meta descriptions ✅")
|
||||
|
||||
with col3:
|
||||
missing_h1 = missing_data.get('missing_h1', 0)
|
||||
if missing_h1 > 0:
|
||||
st.error(f"Missing H1 tags: {missing_h1}")
|
||||
else:
|
||||
st.success("All pages have H1 tags ✅")
|
||||
|
||||
def _render_performance_analysis(self, performance_data: Dict[str, Any]):
|
||||
"""Render performance analysis."""
|
||||
|
||||
st.markdown("### ⚡ Website Performance Analysis")
|
||||
|
||||
if not performance_data:
|
||||
st.info("No performance data available")
|
||||
return
|
||||
|
||||
# Load Time Analysis
|
||||
if performance_data.get('load_time_analysis'):
|
||||
load_time_data = performance_data['load_time_analysis']
|
||||
|
||||
st.markdown("#### 🚀 Page Load Time Analysis")
|
||||
|
||||
col1, col2, col3 = st.columns(3)
|
||||
|
||||
with col1:
|
||||
avg_load = load_time_data.get('avg_load_time', 0)
|
||||
st.metric("Average Load Time", f"{avg_load:.2f}s")
|
||||
|
||||
with col2:
|
||||
median_load = load_time_data.get('median_load_time', 0)
|
||||
st.metric("Median Load Time", f"{median_load:.2f}s")
|
||||
|
||||
with col3:
|
||||
p95_load = load_time_data.get('p95_load_time', 0)
|
||||
st.metric("95th Percentile", f"{p95_load:.2f}s")
|
||||
|
||||
# Performance distribution
|
||||
if load_time_data.get('performance_distribution'):
|
||||
perf_dist = load_time_data['performance_distribution']
|
||||
|
||||
# Create pie chart for performance distribution
|
||||
labels = ['Fast (≤1s)', 'Moderate (1-3s)', 'Slow (>3s)']
|
||||
values = [
|
||||
perf_dist.get('fast_pages', 0),
|
||||
perf_dist.get('moderate_pages', 0),
|
||||
perf_dist.get('slow_pages', 0)
|
||||
]
|
||||
|
||||
fig = px.pie(values=values, names=labels,
|
||||
title="Page Load Time Distribution")
|
||||
st.plotly_chart(fig, use_container_width=True)
|
||||
|
||||
# Content Size Analysis
|
||||
if performance_data.get('content_size_analysis'):
|
||||
size_data = performance_data['content_size_analysis']
|
||||
|
||||
st.markdown("#### 📦 Content Size Analysis")
|
||||
|
||||
col1, col2, col3 = st.columns(3)
|
||||
|
||||
with col1:
|
||||
avg_size = size_data.get('avg_page_size', 0)
|
||||
st.metric("Average Page Size", f"{avg_size/1024:.1f} KB")
|
||||
|
||||
with col2:
|
||||
largest_size = size_data.get('largest_page', 0)
|
||||
st.metric("Largest Page", f"{largest_size/1024:.1f} KB")
|
||||
|
||||
with col3:
|
||||
large_pages = size_data.get('pages_over_1mb', 0)
|
||||
st.metric("Pages >1MB", large_pages)
|
||||
|
||||
# Server Performance
|
||||
if performance_data.get('server_performance'):
|
||||
server_data = performance_data['server_performance']
|
||||
|
||||
st.markdown("#### 🖥️ Server Performance")
|
||||
|
||||
col1, col2, col3 = st.columns(3)
|
||||
|
||||
with col1:
|
||||
success_rate = server_data.get('success_rate', 0)
|
||||
st.metric("Success Rate", f"{success_rate:.1f}%")
|
||||
|
||||
with col2:
|
||||
error_rate = server_data.get('error_rate', 0)
|
||||
st.metric("Error Rate", f"{error_rate:.1f}%")
|
||||
|
||||
with col3:
|
||||
redirect_rate = server_data.get('redirect_rate', 0)
|
||||
st.metric("Redirect Rate", f"{redirect_rate:.1f}%")
|
||||
|
||||
def _render_content_analysis(self, content_data: Dict[str, Any]):
|
||||
"""Render content structure analysis."""
|
||||
|
||||
st.markdown("### 📊 Content Structure Analysis")
|
||||
|
||||
if not content_data:
|
||||
st.info("No content analysis data available")
|
||||
return
|
||||
|
||||
# Title Analysis
|
||||
if content_data.get('title_analysis'):
|
||||
title_data = content_data['title_analysis']
|
||||
|
||||
st.markdown("#### 📝 Title Tag Analysis")
|
||||
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
avg_title_length = title_data.get('avg_title_length', 0)
|
||||
st.metric("Average Title Length", f"{avg_title_length:.0f} chars")
|
||||
|
||||
duplicate_titles = title_data.get('duplicate_titles', 0)
|
||||
st.metric("Duplicate Titles", duplicate_titles)
|
||||
|
||||
with col2:
|
||||
# Title length distribution
|
||||
if title_data.get('title_length_distribution'):
|
||||
length_dist = title_data['title_length_distribution']
|
||||
|
||||
labels = ['Too Short (<30)', 'Optimal (30-60)', 'Too Long (>60)']
|
||||
values = [
|
||||
length_dist.get('too_short', 0),
|
||||
length_dist.get('optimal', 0),
|
||||
length_dist.get('too_long', 0)
|
||||
]
|
||||
|
||||
fig = px.pie(values=values, names=labels,
|
||||
title="Title Length Distribution")
|
||||
st.plotly_chart(fig, use_container_width=True)
|
||||
|
||||
# Meta Description Analysis
|
||||
if content_data.get('meta_description_analysis'):
|
||||
meta_data = content_data['meta_description_analysis']
|
||||
|
||||
st.markdown("#### 🏷️ Meta Description Analysis")
|
||||
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
avg_meta_length = meta_data.get('avg_meta_length', 0)
|
||||
st.metric("Average Meta Length", f"{avg_meta_length:.0f} chars")
|
||||
|
||||
missing_meta = meta_data.get('missing_meta_descriptions', 0)
|
||||
st.metric("Missing Meta Descriptions", missing_meta)
|
||||
|
||||
with col2:
|
||||
# Meta length distribution
|
||||
if meta_data.get('meta_length_distribution'):
|
||||
meta_dist = meta_data['meta_length_distribution']
|
||||
|
||||
labels = ['Too Short (<120)', 'Optimal (120-160)', 'Too Long (>160)']
|
||||
values = [
|
||||
meta_dist.get('too_short', 0),
|
||||
meta_dist.get('optimal', 0),
|
||||
meta_dist.get('too_long', 0)
|
||||
]
|
||||
|
||||
fig = px.pie(values=values, names=labels,
|
||||
title="Meta Description Length Distribution")
|
||||
st.plotly_chart(fig, use_container_width=True)
|
||||
|
||||
# Heading Structure
|
||||
if content_data.get('heading_structure'):
|
||||
heading_data = content_data['heading_structure']
|
||||
|
||||
st.markdown("#### 📋 Heading Structure Analysis")
|
||||
|
||||
# Create heading usage chart
|
||||
heading_usage = []
|
||||
for heading_type, data in heading_data.items():
|
||||
heading_usage.append({
|
||||
'Heading': heading_type.replace('_usage', '').upper(),
|
||||
'Usage Rate': data.get('usage_rate', 0),
|
||||
'Pages': data.get('pages_with_heading', 0)
|
||||
})
|
||||
|
||||
if heading_usage:
|
||||
heading_df = pd.DataFrame(heading_usage)
|
||||
|
||||
fig = px.bar(heading_df, x='Heading', y='Usage Rate',
|
||||
title="Heading Tag Usage Rates")
|
||||
st.plotly_chart(fig, use_container_width=True)
|
||||
|
||||
st.dataframe(heading_df, use_container_width=True)
|
||||
|
||||
def _render_url_structure(self, url_data: Dict[str, Any]):
|
||||
"""Render URL structure analysis."""
|
||||
|
||||
st.markdown("### 🔗 URL Structure Analysis")
|
||||
|
||||
if not url_data:
|
||||
st.info("No URL structure data available")
|
||||
return
|
||||
|
||||
# URL Length Analysis
|
||||
if url_data.get('url_length_analysis'):
|
||||
length_data = url_data['url_length_analysis']
|
||||
|
||||
st.markdown("#### 📏 URL Length Analysis")
|
||||
|
||||
col1, col2, col3 = st.columns(3)
|
||||
|
||||
with col1:
|
||||
avg_length = length_data.get('avg_url_length', 0)
|
||||
st.metric("Average URL Length", f"{avg_length:.0f} chars")
|
||||
|
||||
with col2:
|
||||
max_length = length_data.get('max_url_length', 0)
|
||||
st.metric("Longest URL", f"{max_length:.0f} chars")
|
||||
|
||||
with col3:
|
||||
long_urls = length_data.get('long_urls_count', 0)
|
||||
st.metric("URLs >100 chars", long_urls)
|
||||
|
||||
# URL Structure Patterns
|
||||
if url_data.get('url_structure_patterns'):
|
||||
pattern_data = url_data['url_structure_patterns']
|
||||
|
||||
st.markdown("#### 🏗️ URL Structure Patterns")
|
||||
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
https_usage = pattern_data.get('https_usage', 0)
|
||||
st.metric("HTTPS Usage", f"{https_usage:.1f}%")
|
||||
|
||||
with col2:
|
||||
subdomain_usage = pattern_data.get('subdomain_usage', 0)
|
||||
st.metric("Subdomains Found", subdomain_usage)
|
||||
|
||||
# Path Analysis
|
||||
if url_data.get('path_analysis'):
|
||||
path_data = url_data['path_analysis']
|
||||
|
||||
st.markdown("#### 📂 Path Depth Analysis")
|
||||
|
||||
col1, col2, col3 = st.columns(3)
|
||||
|
||||
with col1:
|
||||
avg_depth = path_data.get('avg_path_depth', 0)
|
||||
st.metric("Average Path Depth", f"{avg_depth:.1f}")
|
||||
|
||||
with col2:
|
||||
max_depth = path_data.get('max_path_depth', 0)
|
||||
st.metric("Maximum Depth", max_depth)
|
||||
|
||||
with col3:
|
||||
deep_paths = path_data.get('deep_paths_count', 0)
|
||||
st.metric("Deep Paths (>4)", deep_paths)
|
||||
|
||||
# Optimization Issues
|
||||
if url_data.get('url_optimization'):
|
||||
opt_data = url_data['url_optimization']
|
||||
|
||||
st.markdown("#### ⚠️ URL Optimization Issues")
|
||||
|
||||
issues_found = opt_data.get('issues_found', 0)
|
||||
recommendations = opt_data.get('optimization_recommendations', [])
|
||||
|
||||
if issues_found > 0:
|
||||
st.warning(f"Found {issues_found} URL optimization issue(s)")
|
||||
|
||||
for rec in recommendations:
|
||||
st.write(f"• {rec}")
|
||||
else:
|
||||
st.success("✅ No URL optimization issues found")
|
||||
|
||||
def _render_image_analysis(self, image_data: Dict[str, Any]):
|
||||
"""Render image SEO analysis."""
|
||||
|
||||
st.markdown("### 🖼️ Image SEO Analysis")
|
||||
|
||||
if not image_data:
|
||||
st.info("No image analysis data available")
|
||||
return
|
||||
|
||||
# Image overview
|
||||
image_count = image_data.get('image_count', 0)
|
||||
st.metric("Total Images Found", image_count)
|
||||
|
||||
if image_count > 0:
|
||||
# Alt text analysis
|
||||
if image_data.get('alt_text_analysis'):
|
||||
alt_data = image_data['alt_text_analysis']
|
||||
|
||||
st.markdown("#### 📝 Alt Text Analysis")
|
||||
|
||||
col1, col2, col3 = st.columns(3)
|
||||
|
||||
with col1:
|
||||
images_with_alt = alt_data.get('images_with_alt', 0)
|
||||
st.metric("Images with Alt Text", images_with_alt)
|
||||
|
||||
with col2:
|
||||
images_missing_alt = alt_data.get('images_missing_alt', 0)
|
||||
st.metric("Missing Alt Text", images_missing_alt)
|
||||
|
||||
with col3:
|
||||
alt_coverage = alt_data.get('alt_text_coverage', 0)
|
||||
st.metric("Alt Text Coverage", f"{alt_coverage:.1f}%")
|
||||
|
||||
# Image format analysis
|
||||
if image_data.get('image_format_analysis'):
|
||||
format_data = image_data['image_format_analysis']
|
||||
|
||||
st.markdown("#### 🎨 Image Format Analysis")
|
||||
|
||||
if format_data.get('format_distribution'):
|
||||
format_dist = format_data['format_distribution']
|
||||
|
||||
format_df = pd.DataFrame(
|
||||
list(format_dist.items()),
|
||||
columns=['Format', 'Count']
|
||||
)
|
||||
|
||||
fig = px.pie(format_df, values='Count', names='Format',
|
||||
title="Image Format Distribution")
|
||||
st.plotly_chart(fig, use_container_width=True)
|
||||
|
||||
modern_formats = format_data.get('modern_format_usage', 0)
|
||||
st.metric("Modern Formats (WebP/AVIF)", modern_formats)
|
||||
else:
|
||||
st.info("No images found to analyze")
|
||||
|
||||
def _render_security_analysis(self, security_data: Dict[str, Any]):
|
||||
"""Render security analysis."""
|
||||
|
||||
st.markdown("### 🛡️ Security Headers Analysis")
|
||||
|
||||
if not security_data:
|
||||
st.info("No security analysis data available")
|
||||
return
|
||||
|
||||
# Security score
|
||||
security_score = security_data.get('security_score', 0)
|
||||
|
||||
col1, col2 = st.columns([1, 2])
|
||||
|
||||
with col1:
|
||||
st.metric("Security Score", f"{security_score:.0f}%")
|
||||
|
||||
if security_score >= 80:
|
||||
st.success("🔒 Good security posture")
|
||||
elif security_score >= 50:
|
||||
st.warning("⚠️ Moderate security")
|
||||
else:
|
||||
st.error("🚨 Poor security posture")
|
||||
|
||||
with col2:
|
||||
# Security headers status
|
||||
if security_data.get('security_headers_present'):
|
||||
headers_status = security_data['security_headers_present']
|
||||
|
||||
st.markdown("**Security Headers Status:**")
|
||||
|
||||
for header, present in headers_status.items():
|
||||
status = "✅" if present else "❌"
|
||||
st.write(f"{status} {header}")
|
||||
|
||||
# Security recommendations
|
||||
if security_data.get('security_recommendations'):
|
||||
recommendations = security_data['security_recommendations']
|
||||
|
||||
if recommendations:
|
||||
st.markdown("#### 🔧 Security Recommendations")
|
||||
|
||||
for rec in recommendations:
|
||||
st.write(f"• {rec}")
|
||||
else:
|
||||
st.success("✅ All security headers properly configured")
|
||||
|
||||
def _render_ai_recommendations(self, ai_data: Dict[str, Any]):
|
||||
"""Render AI-generated recommendations."""
|
||||
|
||||
st.markdown("### 🤖 AI-Powered Technical Recommendations")
|
||||
|
||||
if not ai_data:
|
||||
st.info("No AI recommendations available")
|
||||
return
|
||||
|
||||
# Critical Issues
|
||||
if ai_data.get('critical_issues'):
|
||||
st.markdown("#### 🚨 Critical Issues (Fix Immediately)")
|
||||
|
||||
critical_issues = ai_data['critical_issues']
|
||||
for issue in critical_issues:
|
||||
st.error(f"🚨 {issue}")
|
||||
|
||||
# High Priority
|
||||
if ai_data.get('high_priority'):
|
||||
st.markdown("#### 🔥 High Priority Optimizations")
|
||||
|
||||
high_priority = ai_data['high_priority']
|
||||
for item in high_priority:
|
||||
st.warning(f"⚡ {item}")
|
||||
|
||||
# Medium Priority
|
||||
if ai_data.get('medium_priority'):
|
||||
st.markdown("#### 📈 Medium Priority Improvements")
|
||||
|
||||
medium_priority = ai_data['medium_priority']
|
||||
for item in medium_priority:
|
||||
st.info(f"📊 {item}")
|
||||
|
||||
# Implementation Steps
|
||||
if ai_data.get('implementation_steps'):
|
||||
st.markdown("#### 🛠️ Implementation Steps")
|
||||
|
||||
steps = ai_data['implementation_steps']
|
||||
for i, step in enumerate(steps, 1):
|
||||
st.write(f"{i}. {step}")
|
||||
|
||||
# Expected Impact
|
||||
if ai_data.get('expected_impact'):
|
||||
st.markdown("#### 📈 Expected Impact Assessment")
|
||||
|
||||
impact = ai_data['expected_impact']
|
||||
st.markdown(impact)
|
||||
|
||||
def _render_export_options(self, results: Dict[str, Any]):
|
||||
"""Render export options for analysis results."""
|
||||
|
||||
st.markdown("---")
|
||||
st.markdown("### 📥 Export Technical SEO Audit")
|
||||
|
||||
col1, col2, col3 = st.columns(3)
|
||||
|
||||
with col1:
|
||||
# JSON export
|
||||
if st.button("📄 Export Full Report (JSON)", use_container_width=True):
|
||||
json_data = json.dumps(results, indent=2, default=str)
|
||||
|
||||
st.download_button(
|
||||
label="⬇️ Download JSON Report",
|
||||
data=json_data,
|
||||
file_name=f"technical_seo_audit_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json",
|
||||
mime="application/json",
|
||||
use_container_width=True
|
||||
)
|
||||
|
||||
with col2:
|
||||
# CSV export for issues
|
||||
if st.button("📊 Export Issues CSV", use_container_width=True):
|
||||
issues_data = self._prepare_issues_csv(results)
|
||||
|
||||
if issues_data:
|
||||
st.download_button(
|
||||
label="⬇️ Download Issues CSV",
|
||||
data=issues_data,
|
||||
file_name=f"technical_issues_{datetime.now().strftime('%Y%m%d_%H%M%S')}.csv",
|
||||
mime="text/csv",
|
||||
use_container_width=True
|
||||
)
|
||||
else:
|
||||
st.info("No issues found to export")
|
||||
|
||||
with col3:
|
||||
# Executive summary
|
||||
if st.button("📋 Executive Summary", use_container_width=True):
|
||||
summary = self._generate_executive_summary(results)
|
||||
|
||||
st.download_button(
|
||||
label="⬇️ Download Summary",
|
||||
data=summary,
|
||||
file_name=f"technical_seo_summary_{datetime.now().strftime('%Y%m%d_%H%M%S')}.txt",
|
||||
mime="text/plain",
|
||||
use_container_width=True
|
||||
)
|
||||
|
||||
def _prepare_issues_csv(self, results: Dict[str, Any]) -> str:
|
||||
"""Prepare CSV data for technical issues."""
|
||||
|
||||
issues_list = []
|
||||
|
||||
# HTTP errors
|
||||
http_errors = results.get('technical_issues', {}).get('http_errors', {})
|
||||
if http_errors.get('error_pages'):
|
||||
for error in http_errors['error_pages']:
|
||||
issues_list.append({
|
||||
'Issue Type': 'HTTP Error',
|
||||
'Severity': 'High',
|
||||
'URL': error.get('url', ''),
|
||||
'Status Code': error.get('status', ''),
|
||||
'Description': f"HTTP {error.get('status', '')} error"
|
||||
})
|
||||
|
||||
# Missing elements
|
||||
missing_elements = results.get('technical_issues', {}).get('missing_elements', {})
|
||||
|
||||
# Add more issue types as needed...
|
||||
|
||||
if issues_list:
|
||||
issues_df = pd.DataFrame(issues_list)
|
||||
return issues_df.to_csv(index=False)
|
||||
|
||||
return ""
|
||||
|
||||
def _generate_executive_summary(self, results: Dict[str, Any]) -> str:
|
||||
"""Generate executive summary report."""
|
||||
|
||||
website_url = results.get('website_url', 'Unknown')
|
||||
timestamp = results.get('analysis_timestamp', datetime.now().isoformat())
|
||||
|
||||
summary = f"""
|
||||
TECHNICAL SEO AUDIT - EXECUTIVE SUMMARY
|
||||
======================================
|
||||
|
||||
Website: {website_url}
|
||||
Audit Date: {timestamp}
|
||||
|
||||
AUDIT OVERVIEW
|
||||
--------------
|
||||
Pages Crawled: {results.get('crawl_overview', {}).get('pages_crawled', 0)}
|
||||
HTTP Errors: {results.get('technical_issues', {}).get('http_errors', {}).get('total_errors', 0)}
|
||||
Average Load Time: {results.get('performance_analysis', {}).get('load_time_analysis', {}).get('avg_load_time', 0):.2f}s
|
||||
Security Score: {results.get('security_headers', {}).get('security_score', 0):.0f}%
|
||||
|
||||
CRITICAL FINDINGS
|
||||
-----------------
|
||||
"""
|
||||
|
||||
# Add critical findings
|
||||
error_count = results.get('technical_issues', {}).get('http_errors', {}).get('total_errors', 0)
|
||||
if error_count > 0:
|
||||
summary += f"• {error_count} pages have HTTP errors requiring immediate attention\n"
|
||||
|
||||
avg_load_time = results.get('performance_analysis', {}).get('load_time_analysis', {}).get('avg_load_time', 0)
|
||||
if avg_load_time > 3:
|
||||
summary += f"• Page load times are slow (avg: {avg_load_time:.2f}s), impacting user experience\n"
|
||||
|
||||
security_score = results.get('security_headers', {}).get('security_score', 0)
|
||||
if security_score < 80:
|
||||
summary += f"• Security headers need improvement (current score: {security_score:.0f}%)\n"
|
||||
|
||||
summary += f"\n\nDetailed technical audit completed by ALwrity Technical SEO Crawler\nGenerated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}"
|
||||
|
||||
return summary
|
||||
|
||||
# Render function for integration with main dashboard
|
||||
def render_technical_seo_crawler():
|
||||
"""Render the Technical SEO Crawler UI."""
|
||||
ui = TechnicalSEOCrawlerUI()
|
||||
ui.render()
|
||||
58
ToBeMigrated/ai_seo_tools/textstaty.py
Normal file
@@ -0,0 +1,58 @@
|
||||
"""Text analysis tools using textstat."""
|
||||
|
||||
import streamlit as st
|
||||
from textstat import textstat
|
||||
|
||||
def analyze_text(text):
|
||||
"""Analyze text using textstat metrics."""
|
||||
if not text:
|
||||
st.warning("Please enter some text to analyze.")
|
||||
return
|
||||
|
||||
# Calculate various metrics
|
||||
metrics = {
|
||||
"Flesch Reading Ease": textstat.flesch_reading_ease(text),
|
||||
"Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade(text),
|
||||
"Gunning Fog Index": textstat.gunning_fog(text),
|
||||
"SMOG Index": textstat.smog_index(text),
|
||||
"Automated Readability Index": textstat.automated_readability_index(text),
|
||||
"Coleman-Liau Index": textstat.coleman_liau_index(text),
|
||||
"Linsear Write Formula": textstat.linsear_write_formula(text),
|
||||
"Dale-Chall Readability Score": textstat.dale_chall_readability_score(text),
|
||||
"Readability Consensus": textstat.readability_consensus(text)
|
||||
}
|
||||
|
||||
# Display metrics in a clean format
|
||||
st.subheader("Text Analysis Results")
|
||||
for metric, value in metrics.items():
|
||||
st.metric(metric, f"{value:.2f}")
|
||||
|
||||
# Add visualizations
|
||||
st.subheader("Visualization")
|
||||
st.bar_chart(pd.DataFrame.from_dict(metrics, orient="index", columns=["Score"]))
|
||||
|
||||
st.title("📖 Text Readability Analyzer: Making Your Content Easy to Read")
|
||||
|
||||
st.write("""
|
||||
This tool is your guide to writing content that's easy for your audience to understand.
|
||||
Just paste in a sample of your text, and we'll break down the readability scores and offer actionable tips!
|
||||
""")
|
||||
|
||||
text_input = st.text_area("Paste your text here:", height=200)
|
||||
|
||||
if st.button("Analyze!"):
|
||||
with st.spinner("Analyzing your text..."):
|
||||
test_data = text_input
|
||||
if not test_data.strip():
|
||||
st.error("Please enter text to analyze.")
|
||||
else:
|
||||
analyze_text(test_data)
|
||||
|
||||
st.subheader("Key Takeaways:")
|
||||
st.write("---")
|
||||
st.markdown("""
|
||||
* **Don't Be Afraid to Simplify!** Often, simpler language makes content more impactful and easier to digest.
|
||||
* **Aim for a Reading Level Appropriate for Your Audience:** Consider the education level, background, and familiarity of your readers.
|
||||
* **Use Short Sentences:** This makes your content more scannable and easier to read.
|
||||
* **Write for Everyone:** Accessibility should always be a priority. When in doubt, aim for clear, concise language!
|
||||
""")
|
||||
2
ToBeMigrated/ai_web_researcher/TBD
Normal file
@@ -0,0 +1,2 @@
|
||||
1). Replace Firecrawl with scrapy or crawlee : https://crawlee.dev/python/docs/introduction
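A minimal sketch of what the replacement fetcher could look like (assumptions: Scrapy is
chosen over crawlee, and the spider name and seed URL below are placeholders):

import scrapy
from scrapy.crawler import CrawlerProcess

class ResearchSpider(scrapy.Spider):
    """Placeholder spider that fetches a page and yields its title and paragraph text."""
    name = "research_spider"
    start_urls = ["https://example.com"]  # placeholder seed URL

    def parse(self, response):
        yield {
            "url": response.url,
            "title": response.css("title::text").get(),
            "text": " ".join(response.css("p::text").getall()),
        }

if __name__ == "__main__":
    process = CrawlerProcess(settings={"ROBOTSTXT_OBEY": True, "LOG_LEVEL": "WARNING"})
    process.crawl(ResearchSpider)
    process.start()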
|
||||
|
||||
980
ToBeMigrated/ai_web_researcher/arxiv_schlorly_research.py
Normal file
@@ -0,0 +1,980 @@
|
||||
####################################################
|
||||
#
|
||||
# FIXME: Gotta use this lib: https://github.com/monk1337/resp/tree/main
|
||||
# https://github.com/danielnsilva/semanticscholar
|
||||
# https://github.com/shauryr/S2QA
|
||||
#
|
||||
####################################################
|
||||
|
||||
|
||||
import os
|
||||
import sys
|
||||
import re
|
||||
import pandas as pd
|
||||
import arxiv
|
||||
import PyPDF2
|
||||
import requests
|
||||
import networkx as nx
|
||||
from bs4 import BeautifulSoup
|
||||
from urllib.parse import urlparse
|
||||
from loguru import logger
|
||||
from ..gpt_providers.text_generation.main_text_generation import llm_text_gen
|
||||
import bibtexparser
|
||||
from pylatexenc.latex2text import LatexNodes2Text
|
||||
from matplotlib import pyplot as plt
|
||||
from collections import defaultdict
|
||||
from sklearn.feature_extraction.text import TfidfVectorizer
|
||||
from sklearn.metrics.pairwise import cosine_similarity
|
||||
from sklearn.cluster import KMeans
|
||||
import numpy as np
|
||||
|
||||
logger.remove()
|
||||
logger.add(sys.stdout, colorize=True, format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}")
|
||||
|
||||
def create_arxiv_client(page_size=100, delay_seconds=3.0, num_retries=3):
|
||||
"""
|
||||
Creates a reusable arXiv API client with custom configuration.
|
||||
|
||||
Args:
|
||||
page_size (int): Number of results per page (default: 100)
|
||||
delay_seconds (float): Delay between API requests (default: 3.0)
|
||||
num_retries (int): Number of retries for failed requests (default: 3)
|
||||
|
||||
Returns:
|
||||
arxiv.Client: Configured arXiv API client
|
||||
"""
|
||||
try:
|
||||
client = arxiv.Client(
|
||||
page_size=page_size,
|
||||
delay_seconds=delay_seconds,
|
||||
num_retries=num_retries
|
||||
)
|
||||
return client
|
||||
except Exception as e:
|
||||
logger.error(f"Error creating arXiv client: {e}")
|
||||
raise e
|
||||
|
||||
def expand_search_query(query, research_interests=None):
|
||||
"""
|
||||
Uses AI to expand the search query based on user's research interests.
|
||||
|
||||
Args:
|
||||
query (str): Original search query
|
||||
research_interests (list): List of user's research interests
|
||||
|
||||
Returns:
|
||||
str: Expanded search query
|
||||
"""
|
||||
try:
|
||||
interests_context = "\n".join(research_interests) if research_interests else ""
|
||||
prompt = f"""Given the original arXiv search query: '{query}'
|
||||
{f'And considering these research interests:\n{interests_context}' if interests_context else ''}
|
||||
Generate an expanded arXiv search query that:
|
||||
1. Includes relevant synonyms and related concepts
|
||||
2. Uses appropriate arXiv search operators (AND, OR, etc.)
|
||||
3. Incorporates field-specific tags (ti:, abs:, au:, etc.)
|
||||
4. Maintains focus on the core topic
|
||||
Return only the expanded query without any explanation."""
|
||||
|
||||
expanded_query = llm_text_gen(prompt)
|
||||
logger.info(f"Expanded query: {expanded_query}")
|
||||
return expanded_query
|
||||
except Exception as e:
|
||||
logger.error(f"Error expanding search query: {e}")
|
||||
return query
|
||||
|
||||
def analyze_citation_network(papers):
|
||||
"""
|
||||
Analyzes citation relationships between papers using DOIs and references.
|
||||
|
||||
Args:
|
||||
papers (list): List of paper metadata dictionaries
|
||||
|
||||
Returns:
|
||||
dict: Citation network analysis results
|
||||
"""
|
||||
try:
|
||||
# Create a directed graph for citations
|
||||
G = nx.DiGraph()
|
||||
|
||||
# Add nodes and edges
|
||||
for paper in papers:
|
||||
paper_id = paper['entry_id']
|
||||
G.add_node(paper_id, title=paper['title'])
|
||||
|
||||
# Add edges based on DOIs and references
|
||||
if paper['doi']:
|
||||
for other_paper in papers:
|
||||
if other_paper['doi'] and other_paper['doi'] in paper['summary']:
|
||||
G.add_edge(paper_id, other_paper['entry_id'])
|
||||
|
||||
# Calculate network metrics
|
||||
analysis = {
|
||||
'influential_papers': sorted(nx.pagerank(G).items(), key=lambda x: x[1], reverse=True),
|
||||
'citation_clusters': list(nx.connected_components(G.to_undirected())),
|
||||
'citation_paths': dict(nx.all_pairs_shortest_path_length(G))
|
||||
}
|
||||
return analysis
|
||||
except Exception as e:
|
||||
logger.error(f"Error analyzing citation network: {e}")
|
||||
return {}
|
||||
|
||||
def categorize_papers(papers):
|
||||
"""
|
||||
Uses AI to categorize papers based on their metadata and content.
|
||||
|
||||
Args:
|
||||
papers (list): List of paper metadata dictionaries
|
||||
|
||||
Returns:
|
||||
dict: Paper categorization results
|
||||
"""
|
||||
try:
|
||||
categorized_papers = {}
|
||||
for paper in papers:
|
||||
prompt = f"""Analyze this research paper and provide detailed categorization:
|
||||
Title: {paper['title']}
|
||||
Abstract: {paper['summary']}
|
||||
Primary Category: {paper['primary_category']}
|
||||
Categories: {', '.join(paper['categories'])}
|
||||
|
||||
Provide a JSON response with these fields:
|
||||
1. main_theme: Primary research theme
|
||||
2. sub_themes: List of related sub-themes
|
||||
3. methodology: Research methodology used
|
||||
4. application_domains: Potential application areas
|
||||
5. technical_complexity: Level (Basic/Intermediate/Advanced)"""
|
||||
|
||||
categorization = llm_text_gen(prompt)
|
||||
categorized_papers[paper['entry_id']] = categorization
|
||||
|
||||
return categorized_papers
|
||||
except Exception as e:
|
||||
logger.error(f"Error categorizing papers: {e}")
|
||||
return {}
|
||||
|
||||
def get_paper_recommendations(papers, research_interests):
|
||||
"""
|
||||
Generates personalized paper recommendations based on user's research interests.
|
||||
|
||||
Args:
|
||||
papers (list): List of paper metadata dictionaries
|
||||
research_interests (list): User's research interests
|
||||
|
||||
Returns:
|
||||
dict: Personalized paper recommendations
|
||||
"""
|
||||
try:
|
||||
interests_text = "\n".join(research_interests)
|
||||
recommendations = {}
|
||||
|
||||
for paper in papers:
|
||||
prompt = f"""Evaluate this paper's relevance to the user's research interests:
|
||||
Paper:
|
||||
- Title: {paper['title']}
|
||||
- Abstract: {paper['summary']}
|
||||
- Categories: {', '.join(paper['categories'])}
|
||||
|
||||
User's Research Interests:
|
||||
{interests_text}
|
||||
|
||||
Provide a JSON response with:
|
||||
1. relevance_score: 0-100
|
||||
2. relevance_aspects: List of matching aspects
|
||||
3. potential_value: How this paper could benefit the user's research"""
|
||||
|
||||
evaluation = llm_text_gen(prompt)
|
||||
recommendations[paper['entry_id']] = evaluation
|
||||
|
||||
return recommendations
|
||||
except Exception as e:
|
||||
logger.error(f"Error generating paper recommendations: {e}")
|
||||
return {}
|
||||
|
||||
def fetch_arxiv_data(query, max_results=10, sort_by=arxiv.SortCriterion.SubmittedDate, sort_order=arxiv.SortOrder.Descending, client=None, research_interests=None):
|
||||
"""
|
||||
Fetches arXiv data based on a query with advanced search options.
|
||||
|
||||
Args:
|
||||
query (str): The search query (supports advanced syntax, e.g., 'au:einstein AND cat:physics')
|
||||
max_results (int): The maximum number of results to fetch
|
||||
sort_by (arxiv.SortCriterion): Sorting criterion (default: SubmittedDate)
|
||||
sort_order (arxiv.SortOrder): Sort order (default: arxiv.SortOrder.Descending)
|
||||
client (arxiv.Client): Optional custom client (default: None, creates new client)
|
||||
|
||||
Returns:
|
||||
dict: Paper metadata plus AI-powered analyses (citation network, categories, recommendations, trends, and bibliography data)
|
||||
"""
|
||||
try:
|
||||
if client is None:
|
||||
client = create_arxiv_client()
|
||||
|
||||
# Expand search query using AI if research interests are provided
|
||||
expanded_query = expand_search_query(query, research_interests) if research_interests else query
|
||||
logger.info(f"Using expanded query: {expanded_query}")
|
||||
|
||||
search = arxiv.Search(
|
||||
query=expanded_query,
|
||||
max_results=max_results,
|
||||
sort_by=sort_by,
|
||||
sort_order=sort_order
|
||||
)
|
||||
|
||||
results = list(client.results(search))
|
||||
all_data = [
|
||||
{
|
||||
'title': result.title,
|
||||
'published': result.published,
|
||||
'updated': result.updated,
|
||||
'entry_id': result.entry_id,
|
||||
'summary': result.summary,
|
||||
'authors': [str(author) for author in result.authors],
|
||||
'pdf_url': result.pdf_url,
|
||||
'journal_ref': getattr(result, 'journal_ref', None),
|
||||
'doi': getattr(result, 'doi', None),
|
||||
'primary_category': getattr(result, 'primary_category', None),
|
||||
'categories': getattr(result, 'categories', []),
|
||||
'links': [link.href for link in getattr(result, 'links', [])]
|
||||
}
|
||||
for result in results
|
||||
]
|
||||
|
||||
# Enhance results with AI-powered analysis
|
||||
if all_data:
|
||||
# Analyze citation network
|
||||
citation_analysis = analyze_citation_network(all_data)
|
||||
|
||||
# Categorize papers using AI
|
||||
paper_categories = categorize_papers(all_data)
|
||||
|
||||
# Generate recommendations if research interests are provided
|
||||
recommendations = get_paper_recommendations(all_data, research_interests) if research_interests else {}
|
||||
|
||||
# Perform content analysis
|
||||
content_analyses = [analyze_paper_content(paper['entry_id']) for paper in all_data]
|
||||
trend_analysis = analyze_research_trends(all_data)
|
||||
concept_mapping = map_cross_paper_concepts(all_data)
|
||||
|
||||
# Generate bibliography data
|
||||
bibliography_data = {
|
||||
'bibtex_entries': [generate_bibtex_entry(paper) for paper in all_data],
|
||||
'citations': {
|
||||
'apa': [convert_citation_format(generate_bibtex_entry(paper), 'apa') for paper in all_data],
|
||||
'mla': [convert_citation_format(generate_bibtex_entry(paper), 'mla') for paper in all_data],
|
||||
'chicago': [convert_citation_format(generate_bibtex_entry(paper), 'chicago') for paper in all_data]
|
||||
},
|
||||
'reference_graph': visualize_reference_graph(all_data),
|
||||
'citation_impact': analyze_citation_impact(all_data)
|
||||
}
|
||||
|
||||
# Add enhanced data to results
|
||||
enhanced_data = {
|
||||
'papers': all_data,
|
||||
'citation_analysis': citation_analysis,
|
||||
'paper_categories': paper_categories,
|
||||
'recommendations': recommendations,
|
||||
'content_analyses': content_analyses,
|
||||
'trend_analysis': trend_analysis,
|
||||
'concept_mapping': concept_mapping,
|
||||
'bibliography': bibliography_data
|
||||
}
|
||||
return enhanced_data
|
||||
|
||||
return {'papers': all_data}
|
||||
except Exception as e:
|
||||
logger.error(f"An error occurred while fetching data from arXiv: {e}")
|
||||
raise e
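# Minimal usage sketch (assumes an LLM provider is configured for llm_text_gen):
# client = create_arxiv_client(page_size=50)
# results = fetch_arxiv_data(
#     "cat:cs.CL AND abs:summarization",
#     max_results=5,
#     client=client,
#     research_interests=["abstractive summarization", "evaluation metrics"],
# )
# for paper in results.get("papers", []):
#     print(paper["title"], paper["pdf_url"])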
|
||||
|
||||
def create_dataframe(data, column_names):
|
||||
"""
|
||||
Creates a DataFrame from the provided data.
|
||||
|
||||
Args:
|
||||
data (list): The data to convert to a DataFrame.
|
||||
column_names (list): The column names for the DataFrame.
|
||||
|
||||
Returns:
|
||||
DataFrame: The created DataFrame.
|
||||
"""
|
||||
try:
|
||||
df = pd.DataFrame(data, columns=column_names)
|
||||
return df
|
||||
except Exception as e:
|
||||
logger.error(f"An error occurred while creating DataFrame: {e}")
|
||||
return pd.DataFrame()
|
||||
|
||||
def get_arxiv_main_content(url):
|
||||
"""
|
||||
Returns the main content of an arXiv paper.
|
||||
|
||||
Args:
|
||||
url (str): The URL of the arXiv paper.
|
||||
|
||||
Returns:
|
||||
str: The main content of the paper as a string.
|
||||
"""
|
||||
try:
|
||||
response = requests.get(url)
|
||||
response.raise_for_status()
|
||||
soup = BeautifulSoup(response.content, "html.parser")
|
||||
main_content = soup.find('div', class_='ltx_page_content')
|
||||
if not main_content:
|
||||
logger.warning("Main content not found in the page.")
|
||||
return "Main content not found."
|
||||
alert_section = main_content.find('div', class_='package-alerts ltx_document')
|
||||
if alert_section:
|
||||
alert_section.decompose()
|
||||
for element_id in ["abs", "authors"]:
|
||||
element = main_content.find(id=element_id)
|
||||
if element:
|
||||
element.decompose()
|
||||
return main_content.text.strip()
|
||||
except Exception as html_error:
|
||||
logger.warning(f"HTML content not accessible, trying PDF: {html_error}")
|
||||
return get_pdf_content(url)
|
||||
|
||||
def download_paper(paper_id, output_dir="downloads", filename=None, get_source=False):
|
||||
"""
|
||||
Downloads a paper's PDF or source files with enhanced error handling.
|
||||
|
||||
Args:
|
||||
paper_id (str): The arXiv ID of the paper
|
||||
output_dir (str): Directory to save the downloaded file (default: 'downloads')
|
||||
filename (str): Custom filename (default: None, uses paper ID)
|
||||
get_source (bool): If True, downloads source files instead of PDF (default: False)
|
||||
|
||||
Returns:
|
||||
str: Path to the downloaded file or None if download fails
|
||||
"""
|
||||
try:
|
||||
# Create output directory if it doesn't exist
|
||||
os.makedirs(output_dir, exist_ok=True)
|
||||
|
||||
# Get paper metadata
|
||||
client = create_arxiv_client()
|
||||
paper = next(client.results(arxiv.Search(id_list=[paper_id])))
|
||||
|
||||
# Set filename if not provided
|
||||
if not filename:
|
||||
safe_title = re.sub(r'[^\w\-_.]', '_', paper.title[:50])
|
||||
filename = f"{paper_id}_{safe_title}"
|
||||
filename += ".tar.gz" if get_source else ".pdf"
|
||||
|
||||
# Full path for the downloaded file
|
||||
file_path = os.path.join(output_dir, filename)
|
||||
|
||||
# Download the file
|
||||
if get_source:
|
||||
paper.download_source(dirpath=output_dir, filename=filename)
|
||||
else:
|
||||
paper.download_pdf(dirpath=output_dir, filename=filename)
|
||||
|
||||
logger.info(f"Successfully downloaded {'source' if get_source else 'PDF'} to {file_path}")
|
||||
return file_path
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error downloading {'source' if get_source else 'PDF'} for {paper_id}: {e}")
|
||||
return None
|
||||
|
||||
def analyze_paper_content(url_or_id, cleanup=True):
|
||||
"""
|
||||
Analyzes paper content using AI to extract key information and insights.
|
||||
|
||||
Args:
|
||||
url_or_id (str): The arXiv URL or ID of the paper
|
||||
cleanup (bool): Whether to delete the PDF after extraction (default: True)
|
||||
|
||||
Returns:
|
||||
dict: Analysis results including summary, key findings, and concepts
|
||||
"""
|
||||
try:
|
||||
# Get paper content
|
||||
content = get_pdf_content(url_or_id, cleanup)
|
||||
if not content or 'Failed to' in content:
|
||||
return {'error': content}
|
||||
|
||||
# Generate paper summary (content truncated to ~8,000 characters to keep the prompt small)
summary_prompt = f"""Analyze this research paper and provide a comprehensive summary:
{content[:8000]}
|
||||
|
||||
Provide a JSON response with:
|
||||
1. executive_summary: Brief overview (2-3 sentences)
|
||||
2. key_findings: List of main research findings
|
||||
3. methodology: Research methods used
|
||||
4. implications: Practical implications of the research
|
||||
5. limitations: Study limitations and constraints"""
|
||||
|
||||
summary_analysis = llm_text_gen(summary_prompt)
|
||||
|
||||
# Extract key concepts and relationships
|
||||
concepts_prompt = f"""Analyze this research paper and identify key concepts and relationships:
|
||||
{content[:8000]}
|
||||
|
||||
Provide a JSON response with:
|
||||
1. main_concepts: List of key technical concepts
|
||||
2. concept_relationships: How concepts are related
|
||||
3. novel_contributions: New ideas or approaches introduced
|
||||
4. technical_requirements: Required technologies or methods
|
||||
5. future_directions: Suggested future research"""
|
||||
|
||||
concept_analysis = llm_text_gen(concepts_prompt)
|
||||
|
||||
return {
|
||||
'summary_analysis': summary_analysis,
|
||||
'concept_analysis': concept_analysis,
|
||||
'full_text': content
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error analyzing paper content: {e}")
|
||||
return {'error': str(e)}
|
||||
|
||||
def analyze_research_trends(papers):
|
||||
"""
|
||||
Analyzes research trends across multiple papers.
|
||||
|
||||
Args:
|
||||
papers (list): List of paper metadata and content
|
||||
|
||||
Returns:
|
||||
dict: Trend analysis results
|
||||
"""
|
||||
try:
|
||||
# Collect paper information
|
||||
papers_info = []
|
||||
for paper in papers:
|
||||
content = get_pdf_content(paper['entry_id'], cleanup=True)
|
||||
if content and 'Failed to' not in content:
|
||||
papers_info.append({
|
||||
'title': paper['title'],
|
||||
'abstract': paper['summary'],
|
||||
'content': content[:8000], # Limit content length
|
||||
'year': paper['published'].year
|
||||
})
|
||||
|
||||
if not papers_info:
|
||||
return {'error': 'No valid paper content found for analysis'}
|
||||
|
||||
# Analyze trends
|
||||
trends_prompt = f"""Analyze these research papers and identify key trends:
|
||||
Papers:
|
||||
{str(papers_info)}
|
||||
|
||||
Provide a JSON response with:
|
||||
1. temporal_trends: How research focus evolved over time
|
||||
2. emerging_themes: New and growing research areas
|
||||
3. declining_themes: Decreasing research focus areas
|
||||
4. methodology_trends: Evolution of research methods
|
||||
5. technology_trends: Trends in technology usage
|
||||
6. research_gaps: Identified gaps and opportunities"""
|
||||
|
||||
trend_analysis = llm_text_gen(trends_prompt)
|
||||
return {'trend_analysis': trend_analysis}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error analyzing research trends: {e}")
|
||||
return {'error': str(e)}
|
||||
|
||||
def map_cross_paper_concepts(papers):
|
||||
"""
|
||||
Maps concepts and relationships across multiple papers.
|
||||
|
||||
Args:
|
||||
papers (list): List of paper metadata and content
|
||||
|
||||
Returns:
|
||||
dict: Concept mapping results
|
||||
"""
|
||||
try:
|
||||
# Analyze each paper
|
||||
paper_analyses = []
|
||||
for paper in papers:
|
||||
analysis = analyze_paper_content(paper['entry_id'])
|
||||
if 'error' not in analysis:
|
||||
paper_analyses.append({
|
||||
'paper_id': paper['entry_id'],
|
||||
'title': paper['title'],
|
||||
'analysis': analysis
|
||||
})
|
||||
|
||||
if not paper_analyses:
|
||||
return {'error': 'No valid paper analyses for concept mapping'}
|
||||
|
||||
# Generate cross-paper concept map
|
||||
mapping_prompt = f"""Analyze relationships between concepts across these papers:
|
||||
{str(paper_analyses)}
|
||||
|
||||
Provide a JSON response with:
|
||||
1. shared_concepts: Concepts appearing in multiple papers
|
||||
2. concept_evolution: How concepts developed across papers
|
||||
3. conflicting_views: Different interpretations of same concepts
|
||||
4. complementary_findings: How papers complement each other
|
||||
5. knowledge_gaps: Areas needing more research"""
|
||||
|
||||
concept_mapping = llm_text_gen(mapping_prompt)
|
||||
return {'concept_mapping': concept_mapping}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error mapping cross-paper concepts: {e}")
|
||||
return {'error': str(e)}
|
||||
|
||||
def generate_bibtex_entry(paper):
|
||||
"""
|
||||
Generates a BibTeX entry for a paper with complete metadata.
|
||||
|
||||
Args:
|
||||
paper (dict): Paper metadata dictionary
|
||||
|
||||
Returns:
|
||||
str: BibTeX entry string
|
||||
"""
|
||||
try:
|
||||
# Generate a unique citation key
|
||||
first_author = paper['authors'][0].split()[-1] if paper['authors'] else 'Unknown'
|
||||
year = paper['published'].year if paper['published'] else '0000'
|
||||
citation_key = f"{first_author}{year}{paper['entry_id'].split('/')[-1]}"
|
||||
|
||||
# Format authors for BibTeX
|
||||
authors = ' and '.join(paper['authors'])
|
||||
|
||||
# Create BibTeX entry
|
||||
bibtex = f"@article{{{citation_key},\n"
|
||||
bibtex += f" title = {{{paper['title']}}},\n"
|
||||
bibtex += f" author = {{{authors}}},\n"
|
||||
bibtex += f" year = {{{year}}},\n"
|
||||
bibtex += f" journal = {{arXiv preprint}},\n"
|
||||
bibtex += f" archivePrefix = {{arXiv}},\n"
|
||||
bibtex += f" eprint = {{{paper['entry_id'].split('/')[-1]}}},\n"
|
||||
if paper['doi']:
|
||||
bibtex += f" doi = {{{paper['doi']}}},\n"
|
||||
bibtex += f" url = {{{paper['entry_id']}}},\n"
|
||||
bibtex += f" abstract = {{{paper['summary']}}}\n"
|
||||
bibtex += "}"
|
||||
|
||||
return bibtex
|
||||
except Exception as e:
|
||||
logger.error(f"Error generating BibTeX entry: {e}")
|
||||
return ""
|
||||
|
||||
def convert_citation_format(bibtex_str, target_format):
|
||||
"""
|
||||
Converts BibTeX citations to other formats and validates the output.
|
||||
|
||||
Args:
|
||||
bibtex_str (str): BibTeX entry string
|
||||
target_format (str): Target citation format ('apa', 'mla', 'chicago', etc.)
|
||||
|
||||
Returns:
|
||||
str: Formatted citation string
|
||||
"""
|
||||
try:
|
||||
# Parse BibTeX entry
|
||||
bib_database = bibtexparser.loads(bibtex_str)
|
||||
entry = bib_database.entries[0]
|
||||
|
||||
# Generate citation format prompt
|
||||
prompt = f"""Convert this bibliographic information to {target_format} format:
|
||||
Title: {entry.get('title', '')}
|
||||
Authors: {entry.get('author', '')}
|
||||
Year: {entry.get('year', '')}
|
||||
Journal: {entry.get('journal', '')}
|
||||
DOI: {entry.get('doi', '')}
|
||||
URL: {entry.get('url', '')}
|
||||
|
||||
Return only the formatted citation without any explanation."""
|
||||
|
||||
# Use AI to generate formatted citation
|
||||
formatted_citation = llm_text_gen(prompt)
|
||||
return formatted_citation.strip()
|
||||
except Exception as e:
|
||||
logger.error(f"Error converting citation format: {e}")
|
||||
return ""
|
||||
|
||||
def visualize_reference_graph(papers):
|
||||
"""
|
||||
Creates a visual representation of the citation network.
|
||||
|
||||
Args:
|
||||
papers (list): List of paper metadata dictionaries
|
||||
|
||||
Returns:
|
||||
str: Path to the saved visualization file
|
||||
"""
|
||||
try:
|
||||
# Create directed graph
|
||||
G = nx.DiGraph()
|
||||
|
||||
# Add nodes and edges
|
||||
for paper in papers:
|
||||
paper_id = paper['entry_id']
|
||||
G.add_node(paper_id, title=paper['title'])
|
||||
|
||||
# Add citation edges
|
||||
if paper['doi']:
|
||||
for other_paper in papers:
|
||||
if other_paper['doi'] and other_paper['doi'] in paper['summary']:
|
||||
G.add_edge(paper_id, other_paper['entry_id'])
|
||||
|
||||
# Set up the visualization
|
||||
plt.figure(figsize=(12, 8))
|
||||
pos = nx.spring_layout(G)
|
||||
|
||||
# Draw the graph
|
||||
nx.draw(G, pos, with_labels=False, node_color='lightblue',
|
||||
node_size=1000, arrowsize=20)
|
||||
|
||||
# Add labels
|
||||
labels = nx.get_node_attributes(G, 'title')
|
||||
nx.draw_networkx_labels(G, pos, labels, font_size=8)
|
||||
|
||||
# Save the visualization
|
||||
output_path = 'reference_graph.png'
|
||||
plt.savefig(output_path, dpi=300, bbox_inches='tight')
|
||||
plt.close()
|
||||
|
||||
return output_path
|
||||
except Exception as e:
|
||||
logger.error(f"Error visualizing reference graph: {e}")
|
||||
return ""
|
||||
|
||||
def analyze_citation_impact(papers):
|
||||
"""
|
||||
Analyzes citation impact and influence patterns.
|
||||
|
||||
Args:
|
||||
papers (list): List of paper metadata dictionaries
|
||||
|
||||
Returns:
|
||||
dict: Citation impact analysis results
|
||||
"""
|
||||
try:
|
||||
# Create citation network
|
||||
G = nx.DiGraph()
|
||||
for paper in papers:
|
||||
G.add_node(paper['entry_id'], **paper)
|
||||
if paper['doi']:
|
||||
for other_paper in papers:
|
||||
if other_paper['doi'] and other_paper['doi'] in paper['summary']:
|
||||
G.add_edge(paper['entry_id'], other_paper['entry_id'])
|
||||
|
||||
# Calculate impact metrics
|
||||
hub_scores, authority_scores = nx.hits(G)  # HITS hub/authority scores per node
impact_analysis = {
'citation_counts': dict(G.in_degree()),
'influence_scores': nx.pagerank(G),
'authority_scores': authority_scores,
'hub_scores': hub_scores,
'citation_paths': dict(nx.all_pairs_shortest_path_length(G))
}
|
||||
|
||||
# Add temporal analysis
|
||||
year_citations = defaultdict(int)
|
||||
for paper in papers:
|
||||
if paper['published']:
|
||||
year = paper['published'].year
|
||||
year_citations[year] += G.in_degree(paper['entry_id'])
|
||||
impact_analysis['temporal_trends'] = dict(year_citations)
|
||||
|
||||
return impact_analysis
|
||||
except Exception as e:
|
||||
logger.error(f"Error analyzing citation impact: {e}")
|
||||
return {}
|
||||
|
||||
def get_pdf_content(url_or_id, cleanup=True):
|
||||
"""
|
||||
Extracts text content from a paper's PDF with improved error handling.
|
||||
|
||||
Args:
|
||||
url_or_id (str): The arXiv URL or ID of the paper
|
||||
cleanup (bool): Whether to delete the PDF after extraction (default: True)
|
||||
|
||||
Returns:
|
||||
str: The extracted text content or error message
|
||||
"""
|
||||
try:
|
||||
# Extract arxiv ID from URL if needed
|
||||
arxiv_id = url_or_id.split('/')[-1] if '/' in url_or_id else url_or_id
|
||||
|
||||
# Download PDF
|
||||
pdf_path = download_paper(arxiv_id)
|
||||
if not pdf_path:
|
||||
return "Failed to download PDF."
|
||||
|
||||
# Extract text from PDF
|
||||
pdf_text = ''
|
||||
with open(pdf_path, 'rb') as f:
|
||||
pdf_reader = PyPDF2.PdfReader(f)
|
||||
for page_num, page in enumerate(pdf_reader.pages, 1):
|
||||
try:
|
||||
page_text = page.extract_text()
|
||||
if page_text:
|
||||
pdf_text += f"\n--- Page {page_num} ---\n{page_text}"
|
||||
except Exception as err:
|
||||
logger.error(f"Error extracting text from page {page_num}: {err}")
|
||||
continue
|
||||
|
||||
# Clean up
|
||||
if cleanup:
|
||||
try:
|
||||
os.remove(pdf_path)
|
||||
logger.debug(f"Cleaned up temporary PDF file: {pdf_path}")
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to cleanup PDF file {pdf_path}: {e}")
|
||||
|
||||
# Process and return text
|
||||
if not pdf_text.strip():
|
||||
return "No text content could be extracted from the PDF."
|
||||
|
||||
return clean_pdf_text(pdf_text)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to process PDF: {e}")
|
||||
return f"Failed to retrieve content: {str(e)}"
|
||||
|
||||
def clean_pdf_text(text):
|
||||
"""
|
||||
Helper function to clean the text extracted from a PDF.
|
||||
|
||||
Args:
|
||||
text (str): The text to clean.
|
||||
|
||||
Returns:
|
||||
str: The cleaned text.
|
||||
"""
|
||||
pattern = r'References\s*.*'
|
||||
text = re.sub(pattern, '', text, flags=re.IGNORECASE | re.DOTALL)
|
||||
sections_to_remove = ['Acknowledgements', 'References', 'Bibliography']
|
||||
for section in sections_to_remove:
|
||||
pattern = r'(' + re.escape(section) + r'\s*.*?)(?=\n[A-Z]{2,}|$)'
|
||||
text = re.sub(pattern, '', text, flags=re.DOTALL | re.IGNORECASE)
|
||||
return text
|
||||
|
||||
def download_image(image_url, base_url, folder="images"):
|
||||
"""
|
||||
Downloads an image from a URL.
|
||||
|
||||
Args:
|
||||
image_url (str): The URL of the image.
|
||||
base_url (str): The base URL of the website.
|
||||
folder (str): The folder to save the image.
|
||||
|
||||
Returns:
|
||||
bool: True if the image was downloaded successfully, False otherwise.
|
||||
"""
|
||||
if image_url.startswith('data:image'):
|
||||
logger.info(f"Skipping download of data URI image: {image_url}")
|
||||
return False
|
||||
if not os.path.exists(folder):
|
||||
os.makedirs(folder)
|
||||
if not urlparse(image_url).scheme:
|
||||
if not base_url.endswith('/'):
|
||||
base_url += '/'
|
||||
image_url = base_url + image_url
|
||||
try:
|
||||
response = requests.get(image_url)
|
||||
response.raise_for_status()
|
||||
image_name = image_url.split("/")[-1]
|
||||
with open(os.path.join(folder, image_name), 'wb') as file:
|
||||
file.write(response.content)
|
||||
return True
|
||||
except requests.RequestException as e:
|
||||
logger.error(f"Error downloading {image_url}: {e}")
|
||||
return False
|
||||
|
||||
def scrape_images_from_arxiv(url):
|
||||
"""
|
||||
Scrapes images from an arXiv page.
|
||||
|
||||
Args:
|
||||
url (str): The URL of the arXiv page.
|
||||
|
||||
Returns:
|
||||
list: A list of image URLs.
|
||||
"""
|
||||
try:
|
||||
response = requests.get(url)
|
||||
response.raise_for_status()
|
||||
soup = BeautifulSoup(response.text, 'html.parser')
|
||||
images = soup.find_all('img')
|
||||
image_urls = [img['src'] for img in images if 'src' in img.attrs]
|
||||
return image_urls
|
||||
except requests.RequestException as e:
|
||||
logger.error(f"Error fetching page {url}: {e}")
|
||||
return []
|
||||
|
||||
def generate_bibtex(paper_id, client=None):
|
||||
"""
|
||||
Generate a BibTeX entry for an arXiv paper with enhanced metadata.
|
||||
|
||||
Args:
|
||||
paper_id (str): The arXiv ID of the paper
|
||||
client (arxiv.Client): Optional custom client (default: None)
|
||||
|
||||
Returns:
|
||||
str: BibTeX entry as a string
|
||||
"""
|
||||
try:
|
||||
if client is None:
|
||||
client = create_arxiv_client()
|
||||
|
||||
# Fetch paper metadata
|
||||
paper = next(client.results(arxiv.Search(id_list=[paper_id])))
|
||||
|
||||
# Extract author information
|
||||
authors = [str(author) for author in paper.authors]
|
||||
first_author = authors[0].split(', ')[0] if authors else 'Unknown'
|
||||
|
||||
# Format year
|
||||
year = paper.published.year if paper.published else 'Unknown'
|
||||
|
||||
# Create citation key
|
||||
citation_key = f"{first_author}{str(year)[-2:]}"
|
||||
|
||||
# Build BibTeX entry
|
||||
bibtex = [
|
||||
f"@article{{{citation_key},",
|
||||
f" author = {{{' and '.join(authors)}}},",
|
||||
f" title = {{{paper.title}}},",
|
||||
f" year = {{{year}}},",
|
||||
f" eprint = {{{paper_id}}},",
|
||||
f" archivePrefix = {{arXiv}},"
|
||||
]
|
||||
|
||||
# Add optional fields if available
|
||||
if paper.doi:
|
||||
bibtex.append(f" doi = {{{paper.doi}}},")
|
||||
if getattr(paper, 'journal_ref', None):
|
||||
bibtex.append(f" journal = {{{paper.journal_ref}}},")
|
||||
if getattr(paper, 'primary_category', None):
|
||||
bibtex.append(f" primaryClass = {{{paper.primary_category}}},")
|
||||
|
||||
# Add URL and close entry
|
||||
bibtex.extend([
|
||||
f" url = {{https://arxiv.org/abs/{paper_id}}}",
|
||||
"}"
|
||||
])
|
||||
|
||||
return '\n'.join(bibtex)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error generating BibTeX for {paper_id}: {e}")
|
||||
return ""
|
||||
|
||||
def batch_download_papers(paper_ids, output_dir="downloads", get_source=False):
|
||||
"""
|
||||
Download multiple papers in batch with progress tracking.
|
||||
|
||||
Args:
|
||||
paper_ids (list): List of arXiv IDs to download
|
||||
output_dir (str): Directory to save downloaded files (default: 'downloads')
|
||||
get_source (bool): If True, downloads source files instead of PDFs (default: False)
|
||||
|
||||
Returns:
|
||||
dict: Mapping of paper IDs to their download status and paths
|
||||
"""
|
||||
results = {}
|
||||
client = create_arxiv_client()
|
||||
|
||||
for paper_id in paper_ids:
|
||||
try:
|
||||
file_path = download_paper(paper_id, output_dir, get_source=get_source)
|
||||
results[paper_id] = {
|
||||
'success': bool(file_path),
|
||||
'path': file_path,
|
||||
'error': None
|
||||
}
|
||||
except Exception as e:
|
||||
results[paper_id] = {
|
||||
'success': False,
|
||||
'path': None,
|
||||
'error': str(e)
|
||||
}
|
||||
logger.error(f"Failed to download {paper_id}: {e}")
|
||||
|
||||
return results
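# Usage sketch (arXiv IDs are illustrative):
# statuses = batch_download_papers(["1706.03762", "2203.02155"], output_dir="papers")
# for paper_id, info in statuses.items():
#     print(paper_id, info["path"] if info["success"] else info["error"])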
|
||||
|
||||
def batch_generate_bibtex(paper_ids):
|
||||
"""
|
||||
Generate BibTeX entries for multiple papers.
|
||||
|
||||
Args:
|
||||
paper_ids (list): List of arXiv IDs
|
||||
|
||||
Returns:
|
||||
dict: Mapping of paper IDs to their BibTeX entries
|
||||
"""
|
||||
results = {}
|
||||
client = create_arxiv_client()
|
||||
|
||||
for paper_id in paper_ids:
|
||||
try:
|
||||
bibtex = generate_bibtex(paper_id, client)
|
||||
results[paper_id] = {
|
||||
'success': bool(bibtex),
|
||||
'bibtex': bibtex,
|
||||
'error': None
|
||||
}
|
||||
except Exception as e:
|
||||
results[paper_id] = {
|
||||
'success': False,
|
||||
'bibtex': '',
|
||||
'error': str(e)
|
||||
}
|
||||
logger.error(f"Failed to generate BibTeX for {paper_id}: {e}")
|
||||
|
||||
return results
|
||||
|
||||
def extract_arxiv_ids_from_line(line):
|
||||
"""
|
||||
Extract the arXiv ID from a given line of text.
|
||||
|
||||
Args:
|
||||
line (str): A line of text potentially containing an arXiv URL.
|
||||
|
||||
Returns:
|
||||
str: The extracted arXiv ID, or None if not found.
|
||||
"""
|
||||
arxiv_id_pattern = re.compile(r'arxiv\.org\/abs\/(\d+\.\d+)(v\d+)?')
|
||||
match = arxiv_id_pattern.search(line)
|
||||
if match:
|
||||
return match.group(1) + (match.group(2) if match.group(2) else '')
|
||||
return None
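# Example:
# extract_arxiv_ids_from_line("see https://arxiv.org/abs/1706.03762v5 for details")
# -> "1706.03762v5"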
|
||||
|
||||
def read_written_ids(file_path):
|
||||
"""
|
||||
Read already written arXiv IDs from a file.
|
||||
|
||||
Args:
|
||||
file_path (str): Path to the file containing written IDs.
|
||||
|
||||
Returns:
|
||||
set: A set of arXiv IDs.
|
||||
"""
|
||||
written_ids = set()
|
||||
try:
|
||||
with open(file_path, 'r', encoding="utf-8") as file:
|
||||
for line in file:
|
||||
written_ids.add(line.strip())
|
||||
except FileNotFoundError:
|
||||
logger.error(f"File not found: {file_path}")
|
||||
except Exception as e:
|
||||
logger.error(f"Error while reading the file: {e}")
|
||||
return written_ids
|
||||
|
||||
def append_id_to_file(arxiv_id, output_file_path):
|
||||
"""
|
||||
Append a single arXiv ID to a file. Checks if the file exists and creates it if not.
|
||||
|
||||
Args:
|
||||
arxiv_id (str): The arXiv ID to append.
|
||||
output_file_path (str): Path to the output file.
|
||||
"""
|
||||
try:
|
||||
if not os.path.exists(output_file_path):
|
||||
logger.info(f"File does not exist. Creating new file: {output_file_path}")
|
||||
with open(output_file_path, 'a', encoding="utf-8") as outfile:
|
||||
outfile.write(arxiv_id + '\n')
|
||||
else:
|
||||
logger.info(f"Appending to existing file: {output_file_path}")
|
||||
with open(output_file_path, 'a', encoding="utf-8") as outfile:
|
||||
outfile.write(arxiv_id + '\n')
|
||||
except Exception as e:
|
||||
logger.error(f"Error while appending to file: {e}")
|
||||
100
ToBeMigrated/ai_web_researcher/common_utils.py
Normal file
100
ToBeMigrated/ai_web_researcher/common_utils.py
Normal file
@@ -0,0 +1,100 @@
|
||||
# Common utils for web_researcher
|
||||
import os
|
||||
import sys
|
||||
import re
|
||||
import json
|
||||
from pathlib import Path
|
||||
from datetime import datetime, timedelta
|
||||
|
||||
from loguru import logger
|
||||
logger.remove()
|
||||
logger.add(sys.stdout,
|
||||
colorize=True,
|
||||
format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}"
|
||||
)
|
||||
|
||||
|
||||
def cfg_search_param(flag):
|
||||
"""
|
||||
Read values from the main_config.json file and return them as variables and a dictionary.
|
||||
|
||||
Args:
|
||||
flag (str): A flag to determine which configuration values to return.
|
||||
|
||||
Returns:
|
||||
various: The values read from the config file based on the flag.
|
||||
"""
|
||||
try:
|
||||
file_path = Path(os.environ.get("ALWRITY_CONFIG", ""))
|
||||
if not file_path.is_file():
|
||||
raise FileNotFoundError(f"Configuration file not found: {file_path}")
|
||||
logger.info(f"Reading search config params from {file_path}")
|
||||
|
||||
with open(file_path, 'r', encoding='utf-8') as file:
|
||||
config = json.load(file)
|
||||
web_research_section = config["Search Engine Parameters"]
|
||||
|
||||
if 'serperdev' in flag:
|
||||
# Get values as variables
|
||||
geo_location = web_research_section.get("Geographic Location")
|
||||
search_language = web_research_section.get("Search Language")
|
||||
num_results = web_research_section.get("Number of Results")
|
||||
return geo_location, search_language, num_results
|
||||
|
||||
elif 'tavily' in flag:
|
||||
include_urls = web_research_section.get("Include Domains")
|
||||
pattern = re.compile(r"^(https?://[^\s,]+)(,\s*https?://[^\s,]+)*$")
|
||||
if pattern.match(include_urls):
|
||||
include_urls = [url.strip() for url in include_urls.split(',')]
|
||||
else:
|
||||
include_urls = None
|
||||
return include_urls
|
||||
|
||||
elif 'exa' in flag:
|
||||
include_urls = web_research_section.get("Include Domains")
|
||||
pattern = re.compile(r"^(https?://\w+)(,\s*https?://\w+)*$")
|
||||
if pattern.match(include_urls) is not None:
|
||||
include_urls = include_urls.split(',')
|
||||
elif re.match(r"^http?://\w+$", include_urls) is not None:
|
||||
include_urls = include_urls.split(" ")
|
||||
else:
|
||||
include_urls = None
|
||||
|
||||
num_results = web_research_section.get("Number of Results")
|
||||
similar_url = web_research_section.get("Similar URL")
|
||||
time_range = web_research_section.get("Time Range")
|
||||
if time_range == "past day":
|
||||
start_published_date = (datetime.now() - timedelta(days=1)).strftime('%Y-%m-%d')
|
||||
elif time_range == "past week":
|
||||
start_published_date = (datetime.now() - timedelta(days=7)).strftime("%Y-%m-%d")
|
||||
elif time_range == "past month":
|
||||
start_published_date = (datetime.now() - timedelta(days=30)).strftime('%Y-%m-%d')
|
||||
elif time_range == "past year":
|
||||
start_published_date = (datetime.now() - timedelta(days=365)).strftime('%Y-%m-%d')
|
||||
elif time_range == "anytime" or not time_range:
|
||||
start_published_date = None
|
||||
time_range = start_published_date
|
||||
return include_urls, time_range, num_results, similar_url
|
||||
|
||||
except FileNotFoundError:
|
||||
logger.error(f"Error: Config file '{file_path}' not found.")
|
||||
return {}, None, None, None
|
||||
except KeyError as e:
|
||||
logger.error(f"Error: Missing section or option in config file: {e}")
|
||||
return {}, None, None, None
|
||||
except ValueError as e:
|
||||
logger.error(f"Error: Invalid value in config file: {e}")
|
||||
return {}, None, None, None
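# Expected shape of the config file (keys mirror the lookups above; values are illustrative):
# {
#   "Search Engine Parameters": {
#     "Geographic Location": "us",
#     "Search Language": "en",
#     "Number of Results": 10,
#     "Include Domains": "https://example.com, https://example.org",
#     "Similar URL": "",
#     "Time Range": "past month"
#   }
# }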
|
||||
|
||||
def save_in_file(table_content):
|
||||
""" Helper function to save search analysis in a file. """
|
||||
file_path = os.environ.get('SEARCH_SAVE_FILE')
|
||||
try:
|
||||
# Save the content to the file
|
||||
with open(file_path, "a+", encoding="utf-8") as file:
|
||||
file.write(table_content)
|
||||
file.write("\n" * 3) # Add three newlines at the end
|
||||
logger.info(f"Search content saved to {file_path}")
|
||||
return file_path
|
||||
except Exception as e:
|
||||
logger.error(f"Error occurred while writing to the file: {e}")
|
||||
256
ToBeMigrated/ai_web_researcher/finance_data_researcher.py
Normal file
256
ToBeMigrated/ai_web_researcher/finance_data_researcher.py
Normal file
@@ -0,0 +1,256 @@
|
||||
import matplotlib.pyplot as plt
|
||||
import pandas as pd
|
||||
import yfinance as yf
|
||||
import pandas_ta as ta
|
||||
import matplotlib.dates as mdates
|
||||
from datetime import datetime, timedelta
from yahoo_fin import options, stock_info  # assumed dependency: the options-analysis helpers below use options.get_calls/get_puts and stock_info.get_live_price
|
||||
import logging
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
|
||||
|
||||
def calculate_technical_indicators(data: pd.DataFrame) -> pd.DataFrame:
|
||||
"""
|
||||
Calculates a suite of technical indicators using pandas_ta.
|
||||
|
||||
Args:
|
||||
data (pd.DataFrame): DataFrame containing historical stock price data.
|
||||
|
||||
Returns:
|
||||
pd.DataFrame: DataFrame with added technical indicators.
|
||||
"""
|
||||
try:
|
||||
# Moving Averages
|
||||
data.ta.macd(append=True)
|
||||
data.ta.sma(length=20, append=True)
|
||||
data.ta.ema(length=50, append=True)
|
||||
|
||||
# Momentum Indicators
|
||||
data.ta.rsi(append=True)
|
||||
data.ta.stoch(append=True)
|
||||
|
||||
# Volatility Indicators
|
||||
data.ta.bbands(append=True)
|
||||
data.ta.adx(append=True)
|
||||
|
||||
# Other Indicators
|
||||
data.ta.obv(append=True)
|
||||
data.ta.willr(append=True)
|
||||
data.ta.cmf(append=True)
|
||||
data.ta.psar(append=True)
|
||||
|
||||
# Custom Calculations
|
||||
data['OBV_in_million'] = data['OBV'] / 1e6
|
||||
data['MACD_histogram_12_26_9'] = data['MACDh_12_26_9']
|
||||
|
||||
logging.info("Technical indicators calculated successfully.")
|
||||
return data
|
||||
except KeyError as e:
|
||||
logging.error(f"Missing key in data: {e}")
|
||||
except ValueError as e:
|
||||
logging.error(f"Value error: {e}")
|
||||
except Exception as e:
|
||||
logging.error(f"Error during technical indicator calculation: {e}")
|
||||
return None
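# The columns appended above follow pandas_ta's default naming and are the ones
# get_last_day_summary() reads below, e.g. 'MACD_12_26_9', 'MACDh_12_26_9', 'RSI_14',
# 'SMA_20', 'EMA_50', 'BBL_5_2.0', 'BBU_5_2.0', 'OBV', 'STOCHk_14_3_3', 'ADX_14'.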
|
||||
|
||||
def get_last_day_summary(data: pd.DataFrame) -> pd.Series:
|
||||
"""
|
||||
Extracts and summarizes technical indicators for the last trading day.
|
||||
|
||||
Args:
|
||||
data (pd.DataFrame): DataFrame with calculated technical indicators.
|
||||
|
||||
Returns:
|
||||
pd.Series: Summary of technical indicators for the last day.
|
||||
"""
|
||||
try:
|
||||
last_day_summary = data.iloc[-1][[
|
||||
'Adj Close', 'MACD_12_26_9', 'MACD_histogram_12_26_9', 'RSI_14',
|
||||
'BBL_5_2.0', 'BBM_5_2.0', 'BBU_5_2.0', 'SMA_20', 'EMA_50',
|
||||
'OBV_in_million', 'STOCHk_14_3_3', 'STOCHd_14_3_3', 'ADX_14',
|
||||
'WILLR_14', 'CMF_20', 'PSARl_0.02_0.2', 'PSARs_0.02_0.2'
|
||||
]]
|
||||
logging.info("Last day summary extracted.")
|
||||
return last_day_summary
|
||||
except KeyError as e:
|
||||
logging.error(f"Missing columns in data: {e}")
|
||||
except Exception as e:
|
||||
logging.error(f"Error extracting last day summary: {e}")
|
||||
return None
|
||||
|
||||
def analyze_stock(ticker_symbol: str, start_date: datetime, end_date: datetime) -> pd.Series:
|
||||
"""
|
||||
Fetches stock data, calculates technical indicators, and provides a summary.
|
||||
|
||||
Args:
|
||||
ticker_symbol (str): The stock symbol.
|
||||
start_date (datetime): Start date for data retrieval.
|
||||
end_date (datetime): End date for data retrieval.
|
||||
|
||||
Returns:
|
||||
pd.Series: Summary of technical indicators for the last day.
|
||||
"""
|
||||
try:
|
||||
# Fetch stock data
|
||||
stock_data = yf.download(ticker_symbol, start=start_date, end=end_date)
|
||||
logging.info(f"Stock data fetched for {ticker_symbol} from {start_date} to {end_date}")
|
||||
|
||||
# Calculate technical indicators
|
||||
stock_data = calculate_technical_indicators(stock_data)
|
||||
|
||||
# Get last day summary
|
||||
if stock_data is not None:
|
||||
last_day_summary = get_last_day_summary(stock_data)
|
||||
if last_day_summary is not None:
|
||||
print("Summary of Technical Indicators for the Last Day:")
|
||||
print(last_day_summary)
|
||||
return last_day_summary
|
||||
else:
|
||||
logging.error("Stock data is None, unable to calculate indicators.")
|
||||
except Exception as e:
|
||||
logging.error(f"Error during analysis: {e}")
|
||||
return None
|
||||
|
||||
def get_finance_data(symbol: str) -> pd.Series:
|
||||
"""
|
||||
Fetches financial data for a given stock symbol.
|
||||
|
||||
Args:
|
||||
symbol (str): The stock symbol.
|
||||
|
||||
Returns:
|
||||
pd.Series: Summary of technical indicators for the last day.
|
||||
"""
|
||||
end_date = datetime.today()
|
||||
start_date = end_date - timedelta(days=120)
|
||||
|
||||
# Perform analysis
|
||||
last_day_summary = analyze_stock(symbol, start_date, end_date)
|
||||
return last_day_summary
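# Usage sketch (ticker is illustrative):
# summary = get_finance_data("MSFT")
# if summary is not None:
#     print(summary[["RSI_14", "SMA_20", "EMA_50"]])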
|
||||
|
||||
def analyze_options_data(ticker: str, expiry_date: str) -> tuple:
|
||||
"""
|
||||
Analyzes option data for a given ticker and expiry date.
|
||||
|
||||
Args:
|
||||
ticker (str): The stock ticker symbol.
|
||||
expiry_date (str): The option expiry date.
|
||||
|
||||
Returns:
|
||||
tuple: A tuple containing calculated metrics for call and put options.
|
||||
"""
|
||||
call_df = options.get_calls(ticker, expiry_date)
|
||||
put_df = options.get_puts(ticker, expiry_date)
|
||||
|
||||
# Implied Volatility Analysis:
|
||||
avg_call_iv = call_df["Implied Volatility"].str.rstrip("%").astype(float).mean()
|
||||
avg_put_iv = put_df["Implied Volatility"].str.rstrip("%").astype(float).mean()
|
||||
logging.info(f"Average Implied Volatility for Call Options: {avg_call_iv}%")
|
||||
logging.info(f"Average Implied Volatility for Put Options: {avg_put_iv}%")
|
||||
|
||||
# Option Prices Analysis:
|
||||
avg_call_last_price = call_df["Last Price"].mean()
|
||||
avg_put_last_price = put_df["Last Price"].mean()
|
||||
logging.info(f"Average Last Price for Call Options: {avg_call_last_price}")
|
||||
logging.info(f"Average Last Price for Put Options: {avg_put_last_price}")
|
||||
|
||||
# Strike Price Analysis:
|
||||
min_call_strike = call_df["Strike"].min()
|
||||
max_call_strike = call_df["Strike"].max()
|
||||
min_put_strike = put_df["Strike"].min()
|
||||
max_put_strike = put_df["Strike"].max()
|
||||
logging.info(f"Minimum Strike Price for Call Options: {min_call_strike}")
|
||||
logging.info(f"Maximum Strike Price for Call Options: {max_call_strike}")
|
||||
logging.info(f"Minimum Strike Price for Put Options: {min_put_strike}")
|
||||
logging.info(f"Maximum Strike Price for Put Options: {max_put_strike}")
|
||||
|
||||
# Volume Analysis:
|
||||
total_call_volume = call_df["Volume"].str.replace('-', '0').astype(float).sum()
|
||||
total_put_volume = put_df["Volume"].str.replace('-', '0').astype(float).sum()
|
||||
logging.info(f"Total Volume for Call Options: {total_call_volume}")
|
||||
logging.info(f"Total Volume for Put Options: {total_put_volume}")
|
||||
|
||||
# Open Interest Analysis:
|
||||
call_df['Open Interest'] = call_df['Open Interest'].str.replace('-', '0').astype(float)
|
||||
put_df['Open Interest'] = put_df['Open Interest'].str.replace('-', '0').astype(float)
|
||||
total_call_open_interest = call_df["Open Interest"].sum()
|
||||
total_put_open_interest = put_df["Open Interest"].sum()
|
||||
logging.info(f"Total Open Interest for Call Options: {total_call_open_interest}")
|
||||
logging.info(f"Total Open Interest for Put Options: {total_put_open_interest}")
|
||||
|
||||
# Convert Implied Volatility to float
|
||||
call_df['Implied Volatility'] = call_df['Implied Volatility'].str.replace('%', '').astype(float)
|
||||
put_df['Implied Volatility'] = put_df['Implied Volatility'].str.replace('%', '').astype(float)
|
||||
|
||||
# Calculate Put-Call Ratio
|
||||
put_call_ratio = total_put_volume / total_call_volume
|
||||
logging.info(f"Put-Call Ratio: {put_call_ratio}")
|
||||
|
||||
# Calculate Implied Volatility Percentile
|
||||
call_iv_percentile = (call_df['Implied Volatility'] > call_df['Implied Volatility'].mean()).mean() * 100
|
||||
put_iv_percentile = (put_df['Implied Volatility'] > put_df['Implied Volatility'].mean()).mean() * 100
|
||||
logging.info(f"Call Option Implied Volatility Percentile: {call_iv_percentile}")
|
||||
logging.info(f"Put Option Implied Volatility Percentile: {put_iv_percentile}")
|
||||
|
||||
# Calculate Implied Volatility Skew
|
||||
implied_vol_skew = call_df['Implied Volatility'].mean() - put_df['Implied Volatility'].mean()
|
||||
logging.info(f"Implied Volatility Skew: {implied_vol_skew}")
|
||||
|
||||
# Determine market sentiment
|
||||
is_bullish_sentiment = call_df['Implied Volatility'].mean() > put_df['Implied Volatility'].mean()
|
||||
sentiment = "bullish" if is_bullish_sentiment else "bearish"
|
||||
logging.info(f"The overall sentiment of {ticker} is {sentiment}.")
|
||||
|
||||
return (avg_call_iv, avg_put_iv, avg_call_last_price, avg_put_last_price,
|
||||
min_call_strike, max_call_strike, min_put_strike, max_put_strike,
|
||||
total_call_volume, total_put_volume, total_call_open_interest, total_put_open_interest,
|
||||
put_call_ratio, call_iv_percentile, put_iv_percentile, implied_vol_skew, sentiment)
|
||||
|
||||
def get_fin_options_data(ticker: str) -> list:
|
||||
"""
|
||||
Fetches and analyzes options data for a given stock ticker.
|
||||
|
||||
Args:
|
||||
ticker (str): The stock ticker symbol.
|
||||
|
||||
Returns:
|
||||
list: A list of sentences summarizing the options data.
|
||||
"""
|
||||
current_price = round(stock_info.get_live_price(ticker), 3)
|
||||
option_expiry_dates = options.get_expiration_dates(ticker)
|
||||
nearest_expiry = option_expiry_dates[0]
|
||||
|
||||
results = analyze_options_data(ticker, nearest_expiry)
|
||||
|
||||
# Unpack the results tuple
|
||||
(avg_call_iv, avg_put_iv, avg_call_last_price, avg_put_last_price,
|
||||
min_call_strike, max_call_strike, min_put_strike, max_put_strike,
|
||||
total_call_volume, total_put_volume, total_call_open_interest, total_put_open_interest,
|
||||
put_call_ratio, call_iv_percentile, put_iv_percentile, implied_vol_skew, sentiment) = results
|
||||
|
||||
# Create a list of complete sentences with the results
|
||||
results_sentences = [
|
||||
f"Average Implied Volatility for Call Options: {avg_call_iv}%",
|
||||
f"Average Implied Volatility for Put Options: {avg_put_iv}%",
|
||||
f"Average Last Price for Call Options: {avg_call_last_price}",
|
||||
f"Average Last Price for Put Options: {avg_put_last_price}",
|
||||
f"Minimum Strike Price for Call Options: {min_call_strike}",
|
||||
f"Maximum Strike Price for Call Options: {max_call_strike}",
|
||||
f"Minimum Strike Price for Put Options: {min_put_strike}",
|
||||
f"Maximum Strike Price for Put Options: {max_put_strike}",
|
||||
f"Total Volume for Call Options: {total_call_volume}",
|
||||
f"Total Volume for Put Options: {total_put_volume}",
|
||||
f"Total Open Interest for Call Options: {total_call_open_interest}",
|
||||
f"Total Open Interest for Put Options: {total_put_open_interest}",
|
||||
f"Put-Call Ratio: {put_call_ratio}",
|
||||
f"Call Option Implied Volatility Percentile: {call_iv_percentile}",
|
||||
f"Put Option Implied Volatility Percentile: {put_iv_percentile}",
|
||||
f"Implied Volatility Skew: {implied_vol_skew}",
|
||||
f"The overall sentiment of {ticker} is {sentiment}."
|
||||
]
|
||||
|
||||
# Print each sentence
|
||||
for sentence in results_sentences:
|
||||
logging.info(sentence)
|
||||
|
||||
return results_sentences
|
||||
96
ToBeMigrated/ai_web_researcher/firecrawl_web_crawler.py
Normal file
96
ToBeMigrated/ai_web_researcher/firecrawl_web_crawler.py
Normal file
@@ -0,0 +1,96 @@
|
||||
import os
|
||||
from pathlib import Path
|
||||
from firecrawl import FirecrawlApp
|
||||
import logging
|
||||
from dotenv import load_dotenv
|
||||
|
||||
# Load environment variables from .env file
|
||||
load_dotenv(Path('../../.env'))
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
|
||||
|
||||
def initialize_client() -> FirecrawlApp:
|
||||
"""
|
||||
Initialize and return a Firecrawl client.
|
||||
|
||||
Returns:
|
||||
FirecrawlApp: An instance of the Firecrawl client.
|
||||
"""
|
||||
return FirecrawlApp(api_key=os.getenv("FIRECRAWL_API_KEY"))
|
||||
|
||||
def scrape_website(website_url: str, depth: int = 1, max_pages: int = 10) -> dict:
|
||||
"""
|
||||
Scrape a website starting from the given URL.
|
||||
|
||||
Args:
|
||||
website_url (str): The URL of the website to scrape.
|
||||
depth (int, optional): The depth of crawling. Default is 1.
|
||||
max_pages (int, optional): The maximum number of pages to scrape. Default is 10.
|
||||
|
||||
Returns:
|
||||
dict: The result of the website scraping, or None if an error occurred.
|
||||
"""
|
||||
client = initialize_client()
|
||||
try:
|
||||
result = client.crawl_url({
|
||||
'url': website_url,
|
||||
'depth': depth,
|
||||
'max_pages': max_pages
|
||||
})
|
||||
return result
|
||||
except KeyError as e:
|
||||
logging.error(f"Missing key in data: {e}")
|
||||
except ValueError as e:
|
||||
logging.error(f"Value error: {e}")
|
||||
except Exception as e:
|
||||
logging.error(f"Error scraping website: {e}")
|
||||
return None
|
||||
|
||||
def scrape_url(url: str) -> dict:
|
||||
"""
|
||||
Scrape a specific URL.
|
||||
|
||||
Args:
|
||||
url (str): The URL to scrape.
|
||||
|
||||
Returns:
|
||||
dict: The result of the URL scraping, or None if an error occurred.
|
||||
"""
|
||||
client = initialize_client()
|
||||
try:
|
||||
result = client.scrape_url(url)
|
||||
return result
|
||||
except KeyError as e:
|
||||
logging.error(f"Missing key in data: {e}")
|
||||
except ValueError as e:
|
||||
logging.error(f"Value error: {e}")
|
||||
except Exception as e:
|
||||
logging.error(f"Error scraping URL: {e}")
|
||||
return None
|
||||
|
||||
def extract_data(url: str, schema: dict) -> dict:
|
||||
"""
|
||||
Extract structured data from a URL using the provided schema.
|
||||
|
||||
Args:
|
||||
url (str): The URL to extract data from.
|
||||
schema (dict): The schema to use for data extraction.
|
||||
|
||||
Returns:
|
||||
dict: The extracted data, or None if an error occurred.
|
||||
"""
|
||||
client = initialize_client()
|
||||
try:
|
||||
result = client.extract({
|
||||
'url': url,
|
||||
'schema': schema
|
||||
})
|
||||
return result
|
||||
except KeyError as e:
|
||||
logging.error(f"Missing key in data: {e}")
|
||||
except ValueError as e:
|
||||
logging.error(f"Value error: {e}")
|
||||
except Exception as e:
|
||||
logging.error(f"Error extracting data: {e}")
|
||||
return None
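# Usage sketch -- the schema shape shown here is a generic JSON-Schema-style example;
# consult Firecrawl's extract documentation for the exact format it expects:
# schema = {
#     "type": "object",
#     "properties": {
#         "title": {"type": "string"},
#         "author": {"type": "string"},
#     },
# }
# article = extract_data("https://example.com/blog/post", schema)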
|
||||
339
ToBeMigrated/ai_web_researcher/google_serp_search.py
Normal file
339
ToBeMigrated/ai_web_researcher/google_serp_search.py
Normal file
@@ -0,0 +1,339 @@
|
||||
"""
|
||||
This Python script performs Google searches using various services such as SerpApi, Serper.dev, and more. It displays the search results, including organic results, People Also Ask, and Related Searches, in formatted tables. The script also utilizes GPT to generate titles and FAQs for the Google search results.
|
||||
|
||||
Features:
|
||||
- Utilizes SerpApi, Serper.dev, and other services for Google searches.
|
||||
- Displays organic search results, including position, title, link, and snippet.
|
||||
- Presents People Also Ask questions and snippets in a formatted table.
|
||||
- Includes Related Searches in the combined table with People Also Ask.
|
||||
- Configures logging with Loguru for informative messages.
|
||||
- Uses Rich and Tabulate for visually appealing and formatted tables.
|
||||
|
||||
Usage:
|
||||
- Ensure the necessary API keys are set in the .env file.
|
||||
- Run the script to perform a Google search with the specified query.
|
||||
- View the displayed tables with organic results, People Also Ask, and Related Searches.
|
||||
- Additional information, such as generated titles and FAQs using GPT, is presented.
|
||||
|
||||
Modifications:
|
||||
- Update the environment variables in the .env file with the required API keys.
|
||||
- Customize the search parameters, such as location and language, in the functions as needed.
|
||||
- Adjust logging configurations, table formatting, and other aspects based on preferences.
|
||||
|
||||
"""
|
||||
|
||||
import os
|
||||
from pathlib import Path
|
||||
import sys
|
||||
import configparser
|
||||
|
||||
import pandas as pd
|
||||
import json
|
||||
import requests
|
||||
from clint.textui import progress
|
||||
import streamlit as st
|
||||
|
||||
#from serpapi import GoogleSearch
|
||||
from loguru import logger
|
||||
from tabulate import tabulate
|
||||
#from GoogleNews import GoogleNews
|
||||
# Configure logger
|
||||
logger.remove()
|
||||
from dotenv import load_dotenv
|
||||
# Load environment variables from .env file
|
||||
load_dotenv(Path('../../.env'))
|
||||
logger.add(
|
||||
sys.stdout,
|
||||
colorize=True,
|
||||
format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}"
|
||||
)
|
||||
|
||||
from .common_utils import save_in_file, cfg_search_param
|
||||
from tenacity import retry, stop_after_attempt, wait_random_exponential
|
||||
|
||||
|
||||
@retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
|
||||
def google_search(query):
|
||||
"""
|
||||
Perform a Google search for the given query.
|
||||
|
||||
Args:
|
||||
query (str): The search query.
|
||||
|
||||
|
||||
Returns:
|
||||
dict: The Serper.dev JSON response for the query, or None if the search fails.
|
||||
"""
|
||||
#try:
|
||||
# perform_serpapi_google_search(query)
|
||||
# logger.info(f"FIXME: Google serapi: {query}")
|
||||
# #return process_search_results(search_result)
|
||||
#except Exception as err:
|
||||
# logger.error(f"ERROR: Check Here: https://serpapi.com/. Your requests may be over. {err}")
|
||||
|
||||
# Retry with serper.dev
|
||||
try:
|
||||
logger.info("Trying Google search with Serper.dev: https://serper.dev/api-key")
|
||||
search_result = perform_serperdev_google_search(query)
|
||||
if search_result:
|
||||
process_search_results(search_result)
|
||||
return(search_result)
|
||||
except Exception as err:
|
||||
logger.error(f"Failed Google search with serper.dev: {err}")
|
||||
return None
|
||||
|
||||
|
||||
# # Retry with BROWSERLESS API
|
||||
# try:
|
||||
# search_result = perform_browserless_google_search(query)
|
||||
# #return process_search_results(search_result, flag)
|
||||
# except Exception as err:
|
||||
# logger.error("FIXME: Failed to do Google search with BROWSERLESS API.")
|
||||
# logger.debug("FIXME: Trying with dataforSEO API.")
|
||||
|
||||
|
||||
|
||||
def perform_serpapi_google_search(query):
|
||||
"""
|
||||
Perform a Google search using the SerpApi service.
|
||||
|
||||
Args:
|
||||
query (str): The search query.
|
||||
location (str, optional): The location for the search (default is "Austin, Texas").
|
||||
api_key (str, optional): Your secret API key for SerpApi.
|
||||
|
||||
Returns:
|
||||
dict: A dictionary containing the search results.
|
||||
"""
|
||||
try:
|
||||
logger.info("Reading Web search config values from main_config")
|
||||
geo_location, search_language, num_results, time_range, include_domains, similar_url = read_return_config_section('web_research')
|
||||
except Exception as err:
|
||||
logger.error(f"Failed to read web research params: {err}")
|
||||
return
|
||||
try:
|
||||
# Check if API key is provided
|
||||
if not os.getenv("SERPAPI_KEY"):
|
||||
#raise ValueError("SERPAPI_KEY key is required for SerpApi")
|
||||
logger.error("SERPAPI_KEY key is required for SerpApi")
|
||||
return
|
||||
|
||||
|
||||
# Create a GoogleSearch instance
|
||||
search = GoogleSearch({
|
||||
"q": query,
|
||||
"location": location,
|
||||
"api_key": api_key
|
||||
})
|
||||
# Get search results as a dictionary
|
||||
result = search.get_dict()
|
||||
return result
|
||||
|
||||
except ValueError as ve:
|
||||
# Handle missing API key error
|
||||
logger.info(f"SERPAPI ValueError: {ve}")
|
||||
except Exception as e:
|
||||
# Handle other exceptions
|
||||
logger.info(f"SERPAPI An error occurred: {e}")
|
||||
|
||||
|
||||
def perform_serperdev_google_search(query):
|
||||
"""
|
||||
Perform a Google search using the Serper API.
|
||||
|
||||
Args:
|
||||
query (str): The search query.
|
||||
|
||||
Returns:
|
||||
dict: The JSON response from the Serper API.
|
||||
"""
|
||||
# Get the Serper API key from environment variables
|
||||
logger.info("Doing serper.dev google search.")
|
||||
serper_api_key = os.getenv('SERPER_API_KEY')
|
||||
|
||||
# Check if the API key is available
|
||||
if not serper_api_key:
|
||||
raise ValueError("SERPER_API_KEY is missing. Set it in the .env file.")
|
||||
|
||||
# Serper API endpoint URL
|
||||
url = "https://google.serper.dev/search"
|
||||
|
||||
try:
|
||||
geo_loc, lang, num_results = cfg_search_param('serperdev')
|
||||
except Exception as err:
|
||||
logger.error(f"Failed to read config {err}")
|
||||
|
||||
# Build the request payload from end-user input and main_config defaults
|
||||
payload = json.dumps({
|
||||
"q": query,
|
||||
"gl": geo_loc,
|
||||
"hl": lang,
|
||||
"num": num_results,
|
||||
"autocorrect": True,
|
||||
})
|
||||
|
||||
# Request headers with API key
|
||||
headers = {
|
||||
'X-API-KEY': serper_api_key,
|
||||
'Content-Type': 'application/json'
|
||||
}
|
||||
|
||||
# Send a POST request to the Serper API with progress bar
|
||||
with progress.Bar(label="Searching", expected_size=100) as bar:
|
||||
response = requests.post(url, headers=headers, data=payload, stream=True)
|
||||
# Check if the request was successful
|
||||
if response.status_code == 200:
|
||||
# Parse and return the JSON response
|
||||
return response.json()
|
||||
else:
|
||||
# Print an error message if the request fails
|
||||
logger.error(f"Error: {response.status_code}, {response.text}")
|
||||
return None
|
||||
|
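# --- Abridged response shape (editor's sketch for reference) ---
# The helpers in this module only rely on the following Serper.dev fields;
# the real payload contains more keys than shown here.
#
#     {
#         "searchParameters": {"q": "...", "gl": "...", "hl": "..."},
#         "organic": [{"position": 1, "title": "...", "link": "...", "snippet": "..."}],
#         "peopleAlsoAsk": [{"title": "...", "snippet": "...", "link": "..."}],
#         "relatedSearches": [{"query": "..."}]
#     }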
||||
|
||||
def perform_serper_news_search(news_keywords, news_country, news_language):
|
||||
""" Function for Serper.dev News google search """
|
||||
# Get the Serper API key from environment variables
|
||||
logger.info(f"Doing serper.dev google search. {news_keywords} - {news_country} - {news_language}")
|
||||
serper_api_key = os.getenv('SERPER_API_KEY')
|
||||
|
||||
# Check if the API key is available
|
||||
if not serper_api_key:
|
||||
raise ValueError("SERPER_API_KEY is missing. Set it in the .env file.")
|
||||
|
||||
# Serper API endpoint URL
|
||||
url = "https://google.serper.dev/news"
|
||||
payload = json.dumps({
|
||||
"q": news_keywords,
|
||||
"gl": news_country,
|
||||
"hl": news_language,
|
||||
})
|
||||
# Request headers with API key
|
||||
headers = {
|
||||
'X-API-KEY': serper_api_key,
|
||||
'Content-Type': 'application/json'
|
||||
}
|
||||
# Send a POST request to the Serper API with progress bar
|
||||
with progress.Bar(label="Searching News", expected_size=100) as bar:
|
||||
response = requests.post(url, headers=headers, data=payload, stream=True)
|
||||
# Check if the request was successful
|
||||
if response.status_code == 200:
|
||||
# Parse and return the JSON response
|
||||
#process_search_results(response, "news")
|
||||
return response.json()
|
||||
else:
|
||||
# Print an error message if the request fails
|
||||
logger.error(f"Error: {response.status_code}, {response.text}")
|
||||
return None
|
||||
|
||||
|
||||
|
||||
def perform_browserless_google_search():
|
||||
return
|
||||
|
||||
def perform_dataforseo_google_search():
|
||||
return
|
||||
|
||||
|
||||
def google_news(search_keywords, news_period="7d", region="IN"):
|
||||
""" Get news articles from google_news"""
|
||||
# NOTE: requires the GoogleNews package (the import above is commented out).
googlenews = GoogleNews(lang='en', region=region, period=news_period)
googlenews.enableException(True)
|
||||
googlenews.get_news(search_keywords)
return googlenews.results()
|
||||
|
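# --- Illustrative usage (editor's sketch, assumes the GoogleNews import is enabled) ---
#
#     articles = google_news("AI writing tools", news_period="7d", region="IN")
#     for article in (articles or [])[:5]:
#         print(article.get("title"), article.get("link"))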
||||
|
||||
def process_search_results(search_results, search_type="general"):
|
||||
"""
|
||||
Create a Pandas DataFrame from the search results.
|
||||
|
||||
Args:
|
||||
search_results (dict): The search results JSON.
|
||||
|
||||
Returns:
|
||||
pd.DataFrame: Pandas DataFrame containing the search results.
|
||||
"""
|
||||
data = []
|
||||
logger.info(f"Google Search Parameters: {search_results.get('searchParameters', {})}")
|
||||
if 'general' in search_type:
|
||||
organic_results = search_results.get("organic", [])
|
||||
if 'news' in search_type:
|
||||
organic_results = search_results.get("news", [])
|
||||
|
||||
# Displaying Organic Results
|
||||
organic_data = []
|
||||
for result in organic_results:
|
||||
position = result.get("position", "")
|
||||
title = result.get("title", "")
|
||||
link = result.get("link", "")
|
||||
snippet = result.get("snippet", "")
|
||||
organic_data.append([position, title, link, snippet])
|
||||
|
||||
organic_headers = ["Rank", "Title", "Link", "Snippet"]
|
||||
organic_table = tabulate(organic_data,
|
||||
headers=organic_headers,
|
||||
tablefmt="fancy_grid",
|
||||
colalign=["center", "left", "left", "left"],
|
||||
maxcolwidths=[5, 25, 35, 50])
|
||||
|
||||
# Print the tables
|
||||
print("\n\n📢❗🚨 Google search Organic Results:")
|
||||
print(organic_table)
|
||||
|
||||
# Displaying People Also Ask and Related Searches combined
|
||||
combined_data = []
|
||||
try:
|
||||
people_also_ask_data = []
|
||||
if "peopleAlsoAsk" in search_results:
|
||||
for question in search_results["peopleAlsoAsk"]:
|
||||
title = question.get("title", "")
|
||||
snippet = question.get("snippet", "")
|
||||
link = question.get("link", "")
|
||||
people_also_ask_data.append([title, snippet, link])
|
||||
except Exception as people_also_ask_err:
|
||||
logger.error(f"Error processing 'peopleAlsoAsk': {people_also_ask_err}")
|
||||
people_also_ask_data = []
|
||||
|
||||
related_searches_data = []
|
||||
for query in search_results.get("relatedSearches", []):
|
||||
related_searches_data.append([query.get("query", "")])
|
||||
related_searches_headers = ["Related Search"]
|
||||
|
||||
if people_also_ask_data:
|
||||
# Add Related Searches as a column to People Also Ask
|
||||
combined_data = [
|
||||
row + [related_searches_data[i][0] if i < len(related_searches_data) else ""]
|
||||
for i, row in enumerate(people_also_ask_data)
|
||||
]
|
||||
combined_headers = ["Question", "Snippet", "Link", "Related Search"]
|
||||
# Display the combined table
|
||||
combined_table = tabulate(
|
||||
combined_data,
|
||||
headers=combined_headers,
|
||||
tablefmt="fancy_grid",
|
||||
colalign=["left", "left", "left", "left"],
|
||||
maxcolwidths=[20, 50, 20, 30]
|
||||
)
|
||||
else:
|
||||
combined_table = tabulate(
|
||||
related_searches_data,
|
||||
headers=related_searches_headers,
|
||||
tablefmt="fancy_grid",
|
||||
colalign=["left"],
|
||||
maxcolwidths=[60]
|
||||
)
|
||||
|
||||
print("\n\n📢❗🚨 People Also Ask & Related Searches:")
|
||||
print(combined_table)
|
||||
# Save the combined table to a file
|
||||
try:
|
||||
# Display on Alwrity UI
|
||||
st.write(organic_table)
|
||||
st.write(combined_table)
|
||||
save_in_file(organic_table)
|
||||
save_in_file(combined_table)
|
||||
except Exception as save_results_err:
|
||||
logger.error(f"Failed to save search results: {save_results_err}")
|
||||
return search_results
|
||||
500
ToBeMigrated/ai_web_researcher/google_trends_researcher.py
Normal file
@@ -0,0 +1,500 @@
|
||||
"""
|
||||
This Python script analyzes Google search keywords by fetching auto-suggestions, performing keyword clustering, and visualizing Google Trends data. It uses various libraries such as pytrends, requests_html, tqdm, and more.
|
||||
|
||||
Features:
|
||||
- Fetches auto-suggestions for a given search keyword from Google.
|
||||
- Performs keyword clustering using K-means algorithm based on TF-IDF vectors.
|
||||
- Visualizes Google Trends data, including interest over time and interest by region.
|
||||
- Retrieves related queries and topics for a set of search keywords.
|
||||
- Utilizes visualization libraries such as Matplotlib, Plotly, and Rich for displaying results.
|
||||
- Incorporates logging for error handling and informative messages.
|
||||
|
||||
Usage:
|
||||
- Provide a search term or a list of search terms for analysis.
|
||||
- Run the script to fetch auto-suggestions, perform clustering, and visualize Google Trends data.
|
||||
- Explore the displayed results, including top keywords in each cluster and related topics.
|
||||
|
||||
Modifications:
|
||||
- Customize the search terms in the 'do_google_trends_analysis' function.
|
||||
- Adjust the number of clusters for keyword clustering and other parameters as needed.
|
||||
- Explore further visualizations and analyses based on the generated data.
|
||||
|
||||
Note: Ensure that the required libraries are installed, e.g. 'pip install pytrends requests_html tqdm tabulate plotly rich scikit-learn matplotlib pandas wordcloud loguru'.
|
||||
"""
|
||||
|
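# --- Quick-start sketch (editor's note, not part of the original module) ---
# Typical entry point, assuming this package layout is importable and
# SEARCH_SAVE_FILE points at a writable file:
#
#     from google_trends_researcher import do_google_trends_analysis
#
#     keywords = do_google_trends_analysis("ai writing tools")
#     print(keywords)  # list of related keywords gathered below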
||||
import os
|
||||
import time # I wish
|
||||
import random
|
||||
import requests
|
||||
import numpy as np
|
||||
import sys
|
||||
from sklearn.feature_extraction.text import TfidfVectorizer
|
||||
from sklearn.cluster import KMeans
|
||||
import matplotlib.pyplot as plt
|
||||
from sklearn.metrics import silhouette_score, silhouette_samples
|
||||
from rich.console import Console
|
||||
from rich.progress import Progress
|
||||
import urllib
|
||||
import json
|
||||
import pandas as pd
|
||||
import plotly.express as px
|
||||
import plotly.io as pio
|
||||
from requests_html import HTML, HTMLSession
|
||||
from urllib.parse import quote_plus
|
||||
from tqdm import tqdm
|
||||
from tabulate import tabulate
|
||||
from pytrends.request import TrendReq
from wordcloud import WordCloud  # used by generate_wordcloud below
|
||||
from loguru import logger
|
||||
|
||||
# Configure logger
|
||||
logger.remove()
|
||||
logger.add(sys.stdout,
|
||||
colorize=True,
|
||||
format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}"
|
||||
)
|
||||
|
||||
|
||||
def fetch_google_trends_interest_overtime(keyword):
|
||||
try:
|
||||
pytrends = TrendReq(hl='en-US', tz=360)
|
||||
pytrends.build_payload([keyword], timeframe='today 1-y', geo='US')
|
||||
|
||||
# 1. Interest Over Time
|
||||
data = pytrends.interest_over_time()
|
||||
data = data.reset_index()
|
||||
|
||||
# Visualization using Matplotlib
|
||||
plt.figure(figsize=(10, 6))
|
||||
plt.plot(data['date'], data[keyword], label=keyword)
|
||||
plt.title(f'Interest Over Time for "{keyword}"')
|
||||
plt.xlabel('Date')
|
||||
plt.ylabel('Interest')
|
||||
plt.legend()
|
||||
plt.show()
|
||||
|
||||
return data
|
||||
except Exception as e:
|
||||
logger.error(f"Error in fetch_google_trends_data: {e}")
|
||||
return pd.DataFrame()
|
||||
|
||||
|
||||
def plot_interest_by_region(kw_list):
|
||||
try:
|
||||
from pytrends.request import TrendReq
|
||||
import matplotlib.pyplot as plt
|
||||
trends = TrendReq()
|
||||
trends.build_payload(kw_list=kw_list)
|
||||
kw_list = ' '.join(kw_list)
|
||||
data = trends.interest_by_region() #sorting by region
|
||||
data = data.sort_values(by=f"{kw_list}", ascending=False)
|
||||
print("\n📢❗🚨 ")
|
||||
print(f"Top 10 regions with highest interest for keyword: {kw_list}")
|
||||
data = data.head(10) #Top 10
|
||||
print(data)
|
||||
data.reset_index().plot(x="geoName", y=f"{kw_list}",
|
||||
figsize=(20,15), kind="bar")
|
||||
plt.style.use('fivethirtyeight')
|
||||
plt.show()
|
||||
# FIXME: Send this image to vision GPT for analysis.
|
||||
|
||||
except Exception as e:
|
||||
print(f"Error plotting interest by region: {e}")
|
||||
return None
|
||||
|
||||
|
||||
|
||||
|
||||
def get_related_topics_and_save_csv(search_keywords):
|
||||
search_keywords = [f"{search_keywords}"]
|
||||
try:
|
||||
pytrends = TrendReq(hl='en-US', tz=360)
|
||||
pytrends.build_payload(kw_list=search_keywords, timeframe='today 12-m')
|
||||
|
||||
# Get related topics - this returns a dictionary
|
||||
topics_data = pytrends.related_topics()
|
||||
|
||||
# Extract data for the first keyword
|
||||
if topics_data and search_keywords[0] in topics_data:
|
||||
keyword_data = topics_data[search_keywords[0]]
|
||||
|
||||
# Create two separate dataframes for top and rising
|
||||
top_df = keyword_data.get('top', pd.DataFrame())
|
||||
rising_df = keyword_data.get('rising', pd.DataFrame())
|
||||
|
||||
return {
|
||||
'top': top_df[['topic_title', 'value']] if not top_df.empty else pd.DataFrame(),
|
||||
'rising': rising_df[['topic_title', 'value']] if not rising_df.empty else pd.DataFrame()
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error in related topics: {e}")
|
||||
return {'top': pd.DataFrame(), 'rising': pd.DataFrame()}
|
||||
|
||||
def get_related_queries_and_save_csv(search_keywords):
|
||||
search_keywords = [f"{search_keywords}"]
|
||||
try:
|
||||
pytrends = TrendReq(hl='en-US', tz=360)
|
||||
pytrends.build_payload(kw_list=search_keywords, timeframe='today 12-m')
|
||||
|
||||
# Get related queries - this returns a dictionary
|
||||
queries_data = pytrends.related_queries()
|
||||
|
||||
# Extract data for the first keyword
|
||||
if queries_data and search_keywords[0] in queries_data:
|
||||
keyword_data = queries_data[search_keywords[0]]
|
||||
|
||||
# Create two separate dataframes for top and rising
|
||||
top_df = keyword_data.get('top', pd.DataFrame())
|
||||
rising_df = keyword_data.get('rising', pd.DataFrame())
|
||||
|
||||
return {
|
||||
'top': top_df if not top_df.empty else pd.DataFrame(),
|
||||
'rising': rising_df if not rising_df.empty else pd.DataFrame()
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Error in related queries: {e}")
|
||||
return {'top': pd.DataFrame(), 'rising': pd.DataFrame()}
|
||||
|
||||
|
||||
def get_source(url):
|
||||
try:
|
||||
session = HTMLSession()
|
||||
response = session.get(url)
|
||||
response.raise_for_status() # Raise an HTTPError for bad responses
|
||||
return response
|
||||
except requests.exceptions.RequestException as e:
|
||||
logger.error(f"Error during HTTP request: {e}")
|
||||
return None
|
||||
|
||||
|
||||
|
||||
def get_results(query):
|
||||
try:
|
||||
query = urllib.parse.quote_plus(query)
|
||||
response = get_source(f"https://suggestqueries.google.com/complete/search?output=chrome&hl=en&q={query}")
|
||||
time.sleep(random.uniform(0.1, 0.6))
|
||||
|
||||
if response:
|
||||
response.raise_for_status()
|
||||
results = json.loads(response.text)
|
||||
return results
|
||||
else:
|
||||
return None
|
||||
except json.JSONDecodeError as e:
|
||||
logger.error(f"Error decoding JSON response: {e}")
|
||||
return None
|
||||
except requests.exceptions.RequestException as e:
|
||||
logger.error(f"Error during HTTP request: {e}")
|
||||
return None
|
||||
|
||||
|
||||
|
||||
def format_results(results):
|
||||
try:
|
||||
suggestions = []
|
||||
for index, value in enumerate(results[1]):
|
||||
suggestion = {'term': value, 'relevance': results[4]['google:suggestrelevance'][index]}
|
||||
suggestions.append(suggestion)
|
||||
return suggestions
|
||||
except (KeyError, IndexError) as e:
|
||||
logger.error(f"Error parsing search results: {e}")
|
||||
return []
|
||||
|
||||
|
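# --- Shape of the suggestqueries response (editor's sketch) ---
# format_results() above expects the Google "chrome" suggest output, roughly:
#
#     ["query text",
#      ["suggestion 1", "suggestion 2", ...],          # index 1: suggested terms
#      [...], [...],
#      {"google:suggestrelevance": [601, 600, ...]}]   # index 4: relevance scores
#
# which is why it pairs results[1][i] with results[4]['google:suggestrelevance'][i].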
||||
|
||||
def get_expanded_term_suffixes():
|
||||
return ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm','n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
|
||||
|
||||
|
||||
|
||||
def get_expanded_term_prefixes():
|
||||
# For shopping, review type blogs.
|
||||
#return ['discount *', 'pricing *', 'cheap', 'best price *', 'lowest price', 'best value', 'sale', 'affordable', 'promo', 'budget''what *', 'where *', 'how to *', 'why *', 'buy*', 'how much*','best *', 'worse *', 'rent*', 'sale*', 'offer*','vs*','or*']
|
||||
return ['what *', 'where *', 'how to *', 'why *','best *', 'vs*', 'or*']
|
||||
|
||||
|
||||
|
||||
def get_expanded_terms(query):
|
||||
try:
|
||||
expanded_term_prefixes = get_expanded_term_prefixes()
|
||||
expanded_term_suffixes = get_expanded_term_suffixes()
|
||||
|
||||
terms = [query]
|
||||
|
||||
for term in expanded_term_prefixes:
|
||||
terms.append(f"{term} {query}")
|
||||
|
||||
for term in expanded_term_suffixes:
|
||||
terms.append(f"{query} {term}")
|
||||
|
||||
return terms
|
||||
except Exception as e:
|
||||
logger.error(f"Error in get_expanded_terms: {e}")
|
||||
return []
|
||||
|
||||
|
||||
|
||||
def get_expanded_suggestions(query):
|
||||
try:
|
||||
all_results = []
|
||||
|
||||
expanded_terms = get_expanded_terms(query)
|
||||
for term in tqdm(expanded_terms, desc="📢❗🚨 Fetching Google AutoSuggestions", unit="term"):
|
||||
results = get_results(term)
|
||||
if results:
|
||||
formatted_results = format_results(results)
|
||||
all_results += formatted_results
|
||||
all_results = sorted(all_results, key=lambda k: k.get('relevance', 0), reverse=True)
|
||||
|
||||
return all_results
|
||||
except Exception as e:
|
||||
logger.error(f"Error in get_expanded_suggestions: {e}")
|
||||
return []
|
||||
|
||||
|
||||
|
||||
def get_suggestions_for_keyword(search_term):
|
||||
""" """
|
||||
try:
|
||||
expanded_results = get_expanded_suggestions(search_term)
|
||||
expanded_results_df = pd.DataFrame(expanded_results)
|
||||
expanded_results_df.columns = ['Keywords', 'Relevance']
|
||||
#expanded_results_df.to_csv('results.csv', index=False)
|
||||
pd.set_option('display.max_rows', expanded_results_df.shape[0]+1)
|
||||
expanded_results_df.drop_duplicates('Keywords', inplace=True)
|
||||
table = tabulate(expanded_results_df, headers=['Keywords', 'Relevance'], tablefmt='fancy_grid')
|
||||
# FIXME: Too much data for LLM context window. We will need to embed it.
|
||||
#try:
|
||||
# save_in_file(table)
|
||||
#except Exception as save_results_err:
|
||||
# logger.error(f"Failed to save search results: {save_results_err}")
|
||||
return expanded_results_df
|
||||
except Exception as e:
|
||||
logger.error(f"get_suggestions_for_keyword: Error in main: {e}")
|
||||
|
||||
|
||||
|
||||
def perform_keyword_clustering(expanded_results_df, num_clusters=5):
|
||||
try:
|
||||
# Preprocessing: Convert the keywords to lowercase
|
||||
expanded_results_df['Keywords'] = expanded_results_df['Keywords'].str.lower()
|
||||
|
||||
# Vectorization: Create a TF-IDF vectorizer
|
||||
vectorizer = TfidfVectorizer()
|
||||
|
||||
# Fit the vectorizer to the keywords
|
||||
tfidf_vectors = vectorizer.fit_transform(expanded_results_df['Keywords'])
|
||||
|
||||
# Applying K-means clustering
|
||||
kmeans = KMeans(n_clusters=num_clusters, random_state=42)
|
||||
cluster_labels = kmeans.fit_predict(tfidf_vectors)
|
||||
|
||||
# Add cluster labels to the DataFrame
|
||||
expanded_results_df['cluster_label'] = cluster_labels
|
||||
|
||||
# Assessing cluster quality through silhouette score
|
||||
silhouette_avg = silhouette_score(tfidf_vectors, cluster_labels)
|
||||
print(f"Silhouette Score: {silhouette_avg}")
|
||||
|
||||
# Visualize cluster quality using a silhouette plot
|
||||
#visualize_silhouette(tfidf_vectors, cluster_labels)
|
||||
|
||||
return expanded_results_df
|
||||
except Exception as e:
|
||||
logger.error(f"Error in perform_keyword_clustering: {e}")
|
||||
return pd.DataFrame()
|
||||
|
||||
|
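# --- Minimal clustering sketch (editor's example, mirrors the steps above) ---
# Standalone illustration of the TF-IDF + K-means pipeline on a toy keyword list:
#
#     import pandas as pd
#     from sklearn.feature_extraction.text import TfidfVectorizer
#     from sklearn.cluster import KMeans
#
#     df = pd.DataFrame({"Keywords": ["ai writer", "ai writing tool",
#                                     "seo checklist", "seo audit"],
#                        "Relevance": [600, 590, 580, 570]})
#     vectors = TfidfVectorizer().fit_transform(df["Keywords"].str.lower())
#     df["cluster_label"] = KMeans(n_clusters=2, random_state=42).fit_predict(vectors)
#     print(df)  # similar keywords end up sharing a cluster label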
||||
|
||||
def visualize_silhouette(X, labels):
|
||||
try:
|
||||
silhouette_avg = silhouette_score(X, labels)
|
||||
print(f"Silhouette Score: {silhouette_avg}")
|
||||
|
||||
# Create a subplot with 1 row and 2 columns
|
||||
fig, ax1 = plt.subplots(1, 1, figsize=(8, 6))
|
||||
|
||||
# The 1st subplot is the silhouette plot
|
||||
ax1.set_xlim([-0.1, 1])
|
||||
ax1.set_ylim([0, X.shape[0] + (len(set(labels)) + 1) * 10])
|
||||
|
||||
# Compute the silhouette scores for each sample
|
||||
sample_silhouette_values = silhouette_samples(X, labels)
|
||||
|
||||
y_lower = 10
|
||||
for i in set(labels):
|
||||
# Aggregate the silhouette scores for samples belonging to the cluster
|
||||
ith_cluster_silhouette_values = sample_silhouette_values[labels == i]
|
||||
ith_cluster_silhouette_values.sort()
|
||||
|
||||
size_cluster_i = ith_cluster_silhouette_values.shape[0]
|
||||
y_upper = y_lower + size_cluster_i
|
||||
|
||||
color = plt.cm.nipy_spectral(float(i) / len(set(labels)))
|
||||
ax1.fill_betweenx(np.arange(y_lower, y_upper),
|
||||
0, ith_cluster_silhouette_values,
|
||||
facecolor=color, edgecolor=color, alpha=0.7)
|
||||
|
||||
# Label the silhouette plots with their cluster numbers at the middle
|
||||
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
|
||||
|
||||
# Compute the new y_lower for the next plot
|
||||
y_lower = y_upper + 10 # 10 for the 0 samples
|
||||
|
||||
ax1.set_title("Silhouette plot for KMeans clustering")
|
||||
ax1.set_xlabel("Silhouette coefficient values")
|
||||
ax1.set_ylabel("Cluster label")
|
||||
|
||||
# The vertical line for the average silhouette score of all the values
|
||||
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
|
||||
|
||||
plt.show()
|
||||
except Exception as e:
|
||||
logger.error(f"Error in visualize_silhouette: {e}")
|
||||
|
||||
|
||||
|
||||
def print_and_return_top_keywords(expanded_results_df, num_clusters=5):
|
||||
"""
|
||||
Display and return top keywords in each cluster.
|
||||
|
||||
Args:
|
||||
expanded_results_df (pd.DataFrame): DataFrame containing expanded keywords, relevance, and cluster labels.
|
||||
num_clusters (int or str): Number of clusters or 'all'.
|
||||
|
||||
Returns:
|
||||
pd.DataFrame: DataFrame with top keywords for each cluster.
|
||||
"""
|
||||
top_keywords_df = pd.DataFrame()
|
||||
|
||||
if num_clusters == 'all':
|
||||
unique_clusters = expanded_results_df['cluster_label'].unique()
|
||||
else:
|
||||
unique_clusters = range(int(num_clusters))
|
||||
|
||||
for i in unique_clusters:
|
||||
cluster_df = expanded_results_df[expanded_results_df['cluster_label'] == i]
|
||||
top_keywords = cluster_df.sort_values(by='Relevance', ascending=False).head(5)
|
||||
top_keywords_df = pd.concat([top_keywords_df, top_keywords])
|
||||
|
||||
print(f"\n📢❗🚨 GTop Keywords for All Clusters:")
|
||||
table = tabulate(top_keywords_df, headers='keys', tablefmt='fancy_grid')
|
||||
# Save the combined table to a file
|
||||
try:
|
||||
save_in_file(table)
|
||||
except Exception as save_results_err:
|
||||
logger.error(f"🚨 Failed to save search results: {save_results_err}")
|
||||
print(table)
|
||||
return top_keywords_df
|
||||
|
||||
|
||||
def generate_wordcloud(keywords):
|
||||
"""
|
||||
Generate and display a word cloud from a list of keywords.
|
||||
|
||||
Args:
|
||||
keywords (list): List of keywords.
|
||||
"""
|
||||
# Convert the list of keywords to a string
|
||||
text = ' '.join(keywords)
|
||||
|
||||
# Generate word cloud
|
||||
wordcloud = WordCloud(width=800, height=400, background_color='white').generate(text)
|
||||
|
||||
# Display the word cloud using matplotlib
|
||||
plt.figure(figsize=(8, 4))  # figsize is in inches, not pixels
|
||||
plt.imshow(wordcloud, interpolation='bilinear')
|
||||
plt.axis('off')
|
||||
plt.show()
|
||||
|
||||
|
||||
|
||||
def save_in_file(table_content):
|
||||
""" Helper function to save search analysis in a file. """
|
||||
file_path = os.environ.get('SEARCH_SAVE_FILE')
|
||||
try:
|
||||
# Save the content to the file
|
||||
with open(file_path, "a+", encoding="utf-8") as file:
|
||||
file.write(table_content)
|
||||
file.write("\n" * 3) # Add three newlines at the end
|
||||
logger.info(f"Search content saved to {file_path}")
|
||||
except Exception as e:
|
||||
logger.error(f"Error occurred while writing to the file: {e}")
|
||||
|
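# --- Usage note (editor's sketch) ---
# save_in_file() appends to the path named by the SEARCH_SAVE_FILE environment
# variable, so set it before running, e.g.:
#
#     export SEARCH_SAVE_FILE=./search_analysis.txt
#
# or from Python:
#
#     os.environ["SEARCH_SAVE_FILE"] = "./search_analysis.txt"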
||||
|
||||
def do_google_trends_analysis(search_term):
|
||||
""" Get a google search keywords, get its stats."""
|
||||
search_term = [f"{search_term}"]
|
||||
all_the_keywords = []
|
||||
try:
|
||||
for asearch_term in search_term:
|
||||
#FIXME: Lets work with a single root keyword.
|
||||
suggestions_df = get_suggestions_for_keyword(asearch_term)
|
||||
if len(suggestions_df['Keywords']) > 10:
|
||||
result_df = perform_keyword_clustering(suggestions_df)
|
||||
# Display top keywords in each cluster
|
||||
top_keywords = print_and_return_top_keywords(result_df)
|
||||
all_the_keywords.append(top_keywords['Keywords'].tolist())
|
||||
else:
|
||||
all_the_keywords.append(suggestions_df['Keywords'].tolist())
|
||||
all_the_keywords = ','.join([', '.join(filter(None, map(str, sublist))) for sublist in all_the_keywords])
|
||||
|
||||
# Generate a random sleep time between 2 and 3 seconds
|
||||
time.sleep(random.uniform(2, 3))
|
||||
|
||||
# Display additional information
|
||||
try:
|
||||
result_df = get_related_topics_and_save_csv(search_term)
|
||||
logger.info(f"Related topics:: result_df: {result_df}")
|
||||
# Extract 'Top' topic_title
|
||||
if result_df:
|
||||
top_topic_title = result_df['top']['topic_title'].values.tolist()
|
||||
# Join each sublist into one string separated by comma
|
||||
#top_topic_title = [','.join(filter(None, map(str, sublist))) for sublist in top_topic_title]
|
||||
top_topic_title = ','.join([', '.join(filter(None, map(str, sublist))) for sublist in top_topic_title])
|
||||
except Exception as err:
|
||||
logger.error(f"Failed to get results from google trends related topics: {err}")
|
||||
|
||||
# TBD: Not getting great results OR unable to understand them.
|
||||
#all_the_keywords += top_topic_title
|
||||
all_the_keywords = all_the_keywords.split(',')
|
||||
# Split the list into chunks of 5 keywords
|
||||
chunk_size = 4
|
||||
chunks = [all_the_keywords[i:i + chunk_size] for i in range(0, len(all_the_keywords), chunk_size)]
|
||||
# Create a DataFrame with columns named 'Keyword 1', 'Keyword 2', etc.
|
||||
combined_df = pd.DataFrame(chunks, columns=[f'Keyword Col{i + 1}' for i in range(chunk_size)])
|
||||
|
||||
# Print the table
|
||||
table = tabulate(combined_df, headers='keys', tablefmt='fancy_grid')
|
||||
# Save the combined table to a file
|
||||
try:
|
||||
save_in_file(table)
|
||||
except Exception as save_results_err:
|
||||
logger.error(f"Failed to save search results: {save_results_err}")
|
||||
print(table)
|
||||
|
||||
#generate_wordcloud(all_the_keywords)
|
||||
return(all_the_keywords)
|
||||
except Exception as e:
|
||||
logger.error(f"Error in Google Trends Analysis: {e}")
|
||||
|
||||
|
||||
def get_trending_searches(country='united_states'):
|
||||
"""Get trending searches for a specific country."""
|
||||
try:
|
||||
pytrends = TrendReq(hl='en-US', tz=360)
|
||||
trending_searches = pytrends.trending_searches(pn=country)
|
||||
return trending_searches
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting trending searches: {e}")
|
||||
return pd.DataFrame()
|
||||
|
||||
def get_realtime_trends(country='US'):
|
||||
"""Get realtime trending searches for a specific country."""
|
||||
try:
|
||||
pytrends = TrendReq(hl='en-US', tz=360)
|
||||
realtime_trends = pytrends.realtime_trending_searches(pn=country)
|
||||
return realtime_trends
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting realtime trends: {e}")
|
||||
return pd.DataFrame()
|
||||
803
ToBeMigrated/ai_web_researcher/gpt_online_researcher.py
Normal file
@@ -0,0 +1,803 @@
|
||||
################################################################
|
||||
#
|
||||
# ## Features
|
||||
#
|
||||
# - **Web Research**: Alwrity enables users to conduct web research efficiently.
|
||||
# By providing keywords or topics of interest, users can initiate searches across multiple platforms simultaneously.
|
||||
#
|
||||
# - **Google SERP Search**: The tool integrates with Google Search Engine Results Pages (SERP)
|
||||
# to retrieve relevant information based on user queries. It offers insights into organic search results,
|
||||
# People Also Ask, and related searches.
|
||||
#
|
||||
# - **Tavily AI Integration**: Alwrity leverages Tavily AI's capabilities to enhance web research.
|
||||
# It utilizes advanced algorithms to search for information and extract relevant data from various sources.
|
||||
#
|
||||
# - **Metaphor AI Semantic Search**: Alwrity employs Metaphor AI's semantic search technology to find related articles and content.
|
||||
# By analyzing context and meaning, it delivers precise and accurate results.
|
||||
#
|
||||
# - **Google Trends Analysis**: The tool provides Google Trends analysis for user-defined keywords.
|
||||
# It helps users understand the popularity and trends associated with specific topics over time.
|
||||
#
|
||||
##############################################################
|
||||
|
||||
import os
|
||||
import json
|
||||
import time
|
||||
from pathlib import Path
|
||||
import sys
|
||||
from datetime import datetime
|
||||
import streamlit as st
|
||||
import pandas as pd
|
||||
import random
|
||||
import numpy as np
|
||||
|
||||
from lib.alwrity_ui.display_google_serp_results import (
|
||||
process_research_results,
|
||||
process_search_results,
|
||||
display_research_results
|
||||
)
|
||||
from lib.alwrity_ui.google_trends_ui import display_google_trends_data, process_trends_data
|
||||
|
||||
from .tavily_ai_search import do_tavily_ai_search
|
||||
from .metaphor_basic_neural_web_search import metaphor_search_articles, streamlit_display_metaphor_results
|
||||
from .google_serp_search import google_search
|
||||
from .google_trends_researcher import do_google_trends_analysis
|
||||
#from .google_gemini_web_researcher import do_gemini_web_research
|
||||
|
||||
from loguru import logger
|
||||
# Configure logger
|
||||
logger.remove()
|
||||
logger.add(sys.stdout,
|
||||
colorize=True,
|
||||
format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}"
|
||||
)
|
||||
|
||||
|
||||
def gpt_web_researcher(search_keywords, search_mode, **kwargs):
|
||||
"""Keyword based web researcher with progress tracking."""
|
||||
|
||||
logger.info(f"Starting web research - Keywords: {search_keywords}, Mode: {search_mode}")
|
||||
logger.debug(f"Additional parameters: {kwargs}")
|
||||
|
||||
try:
|
||||
# Reset session state variables for this research operation
|
||||
if 'metaphor_results_displayed' in st.session_state:
|
||||
del st.session_state.metaphor_results_displayed
|
||||
|
||||
# Initialize result container
|
||||
research_results = None
|
||||
|
||||
# Create status containers
|
||||
status_container = st.empty()
|
||||
progress_bar = st.progress(0)
|
||||
|
||||
def update_progress(message, progress=None, level="info"):
|
||||
if progress is not None:
|
||||
progress_bar.progress(progress)
|
||||
if level == "error":
|
||||
status_container.error(f"🚫 {message}")
|
||||
elif level == "warning":
|
||||
status_container.warning(f"⚠️ {message}")
|
||||
else:
|
||||
status_container.info(f"🔄 {message}")
|
||||
logger.debug(f"Progress update [{level}]: {message}")
|
||||
|
||||
if search_mode == "google":
|
||||
logger.info("Starting Google research pipeline")
|
||||
|
||||
try:
|
||||
# First try Google SERP
|
||||
update_progress("Initiating SERP search...", progress=10)
|
||||
serp_results = do_google_serp_search(search_keywords, status_container, update_progress, **kwargs)
|
||||
|
||||
if serp_results and serp_results.get('results', {}).get('organic'):
|
||||
logger.info("SERP search successful")
|
||||
update_progress("SERP search completed", progress=40)
|
||||
research_results = serp_results
|
||||
else:
|
||||
logger.warning("SERP search returned no results, falling back to Gemini")
|
||||
update_progress("No SERP results, trying Gemini...", progress=45)
|
||||
|
||||
# Keep it commented. Fallback to Gemini
|
||||
#try:
|
||||
# gemini_results = do_gemini_web_research(search_keywords)
|
||||
# if gemini_results:
|
||||
# logger.info("Gemini research successful")
|
||||
# update_progress("Gemini research completed", progress=80)
|
||||
# research_results = {
|
||||
# 'source': 'gemini',
|
||||
# 'results': gemini_results
|
||||
# }
|
||||
#except Exception as gemini_err:
|
||||
# logger.error(f"Gemini research failed: {gemini_err}")
|
||||
# update_progress("Gemini research failed", level="warning")
|
||||
|
||||
if research_results:
|
||||
update_progress("Processing final results...", progress=90)
|
||||
processed_results = process_research_results(research_results)
|
||||
|
||||
if processed_results:
|
||||
update_progress("Research completed!", progress=100, level="success")
|
||||
display_research_results(processed_results)
|
||||
return processed_results
|
||||
else:
|
||||
error_msg = "Failed to process research results"
|
||||
logger.warning(error_msg)
|
||||
update_progress(error_msg, level="warning")
|
||||
return None
|
||||
else:
|
||||
error_msg = "No results from either SERP or Gemini"
|
||||
logger.warning(error_msg)
|
||||
update_progress(error_msg, level="warning")
|
||||
return None
|
||||
|
||||
except Exception as search_err:
|
||||
error_msg = f"Research pipeline failed: {str(search_err)}"
|
||||
logger.error(error_msg, exc_info=True)
|
||||
update_progress(error_msg, level="error")
|
||||
raise
|
||||
|
||||
elif search_mode == "ai":
|
||||
logger.info("Starting AI research pipeline")
|
||||
|
||||
try:
|
||||
# Do Tavily AI Search
|
||||
update_progress("Initiating Tavily AI search...", progress=10)
|
||||
|
||||
# Extract relevant parameters for Tavily search
|
||||
include_domains = kwargs.pop('include_domains', None)
|
||||
search_depth = kwargs.pop('search_depth', 'advanced')
|
||||
|
||||
# Pass the parameters to do_tavily_ai_search
|
||||
t_results = do_tavily_ai_search(
|
||||
search_keywords, # Pass as positional argument
|
||||
max_results=kwargs.get('num_results', 10),
|
||||
include_domains=include_domains,
|
||||
search_depth=search_depth,
|
||||
**kwargs
|
||||
)
|
||||
|
||||
# Do Metaphor AI Search
|
||||
update_progress("Initiating Metaphor AI search...", progress=50)
|
||||
metaphor_results, metaphor_titles = do_metaphor_ai_research(search_keywords)
|
||||
|
||||
if metaphor_results is None:
|
||||
update_progress("Metaphor AI search failed, continuing with Tavily results only...", level="warning")
|
||||
else:
|
||||
update_progress("Metaphor AI search completed successfully", progress=75)
|
||||
# Add debug logging to check the structure of metaphor_results
|
||||
logger.debug(f"Metaphor results structure: {type(metaphor_results)}")
|
||||
if isinstance(metaphor_results, dict):
|
||||
logger.debug(f"Metaphor results keys: {metaphor_results.keys()}")
|
||||
if 'data' in metaphor_results:
|
||||
logger.debug(f"Metaphor data keys: {metaphor_results['data'].keys()}")
|
||||
if 'results' in metaphor_results['data']:
|
||||
logger.debug(f"Number of results: {len(metaphor_results['data']['results'])}")
|
||||
|
||||
# Display Metaphor results only if not already displayed
|
||||
if 'metaphor_results_displayed' not in st.session_state:
|
||||
st.session_state.metaphor_results_displayed = True
|
||||
# Make sure to pass the correct parameters to streamlit_display_metaphor_results
|
||||
streamlit_display_metaphor_results(metaphor_results, search_keywords)
|
||||
|
||||
# Add Google Trends Analysis
|
||||
update_progress("Initiating Google Trends analysis...", progress=80)
|
||||
try:
|
||||
# Add an informative message about Google Trends
|
||||
with st.expander("ℹ️ About Google Trends Analysis", expanded=False):
|
||||
st.markdown("""
|
||||
**What is Google Trends Analysis?**
|
||||
|
||||
Google Trends Analysis provides insights into how often a particular search-term is entered relative to the total search-volume across various regions of the world, and in various languages.
|
||||
|
||||
**What data will be shown?**
|
||||
|
||||
- **Related Keywords**: Terms that are frequently searched together with your keyword
|
||||
- **Interest Over Time**: How interest in your keyword has changed over the past 12 months
|
||||
- **Regional Interest**: Where in the world your keyword is most popular
|
||||
- **Related Queries**: What people search for before and after searching for your keyword
|
||||
- **Related Topics**: Topics that are closely related to your keyword
|
||||
|
||||
**How to use this data:**
|
||||
|
||||
- Identify trending topics in your industry
|
||||
- Understand seasonal patterns in search behavior
|
||||
- Discover related keywords for content planning
|
||||
- Target content to specific regions with high interest
|
||||
""")
|
||||
|
||||
trends_results = do_google_pytrends_analysis(search_keywords)
|
||||
if trends_results:
|
||||
update_progress("Google Trends analysis completed successfully", progress=90)
|
||||
# Store trends results in the research_results
|
||||
if metaphor_results:
|
||||
metaphor_results['trends_data'] = trends_results
|
||||
else:
|
||||
# If metaphor_results is None, create a new container for results
|
||||
metaphor_results = {'trends_data': trends_results}
|
||||
|
||||
# Display Google Trends data using the new UI module
|
||||
display_google_trends_data(trends_results, search_keywords)
|
||||
else:
|
||||
update_progress("Google Trends analysis returned no results", level="warning")
|
||||
except Exception as trends_err:
|
||||
logger.error(f"Google Trends analysis failed: {trends_err}")
|
||||
update_progress("Google Trends analysis failed", level="warning")
|
||||
st.error(f"Error in Google Trends analysis: {str(trends_err)}")
|
||||
|
||||
# Return the combined results
|
||||
update_progress("Research completed!", progress=100, level="success")
|
||||
return metaphor_results or t_results
|
||||
|
||||
except Exception as ai_err:
|
||||
error_msg = f"AI research pipeline failed: {str(ai_err)}"
|
||||
logger.error(error_msg, exc_info=True)
|
||||
update_progress(error_msg, level="error")
|
||||
raise
|
||||
|
||||
else:
|
||||
error_msg = f"Unsupported search mode: {search_mode}"
|
||||
logger.error(error_msg)
|
||||
update_progress(error_msg, level="error")
|
||||
raise ValueError(error_msg)
|
||||
|
||||
except Exception as err:
|
||||
error_msg = f"Failed in gpt_web_researcher: {str(err)}"
|
||||
logger.error(error_msg, exc_info=True)
|
||||
if 'update_progress' in locals():
|
||||
update_progress(error_msg, level="error")
|
||||
raise
|
||||
|
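# --- Illustrative call (editor's sketch, must run inside a Streamlit app) ---
#
#     results = gpt_web_researcher("ai writing tools", "google")   # SERP pipeline
#     ai_results = gpt_web_researcher("ai writing tools", "ai")    # Tavily + Metaphor + Trends
#
# Both modes drive st.progress/st.empty widgets, so call this from a Streamlit
# page rather than a plain script.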
||||
|
||||
def do_google_serp_search(search_keywords, status_container, update_progress, **kwargs):
|
||||
"""Perform Google SERP analysis with sidebar progress tracking."""
|
||||
|
||||
logger.info("="*50)
|
||||
logger.info("Starting Google SERP Search")
|
||||
logger.info("="*50)
|
||||
|
||||
try:
|
||||
# Validate parameters
|
||||
update_progress("Validating search parameters", progress=0.1)
|
||||
status_container.info("📝 Validating parameters...")
|
||||
|
||||
if not search_keywords or not isinstance(search_keywords, str):
|
||||
logger.error(f"Invalid search keywords: {search_keywords}")
|
||||
raise ValueError("Search keywords must be a non-empty string")
|
||||
|
||||
# Update search initiation
|
||||
update_progress(f"Initiating search for: '{search_keywords}'", progress=0.2)
|
||||
status_container.info("🌐 Querying search API...")
|
||||
logger.info(f"Search params: {kwargs}")
|
||||
|
||||
# Execute search
|
||||
g_results = google_search(search_keywords)
|
||||
|
||||
if g_results:
|
||||
# Log success
|
||||
update_progress("Search completed successfully", progress=0.8, level="success")
|
||||
|
||||
# Update statistics
|
||||
stats = f"""Found:
|
||||
- {len(g_results.get('organic', []))} organic results
|
||||
- {len(g_results.get('peopleAlsoAsk', []))} related questions
|
||||
- {len(g_results.get('relatedSearches', []))} related searches"""
|
||||
update_progress(stats, progress=0.9)
|
||||
|
||||
# Process results
|
||||
update_progress("Processing search results", progress=0.95)
|
||||
status_container.info("⚡ Processing results...")
|
||||
processed_results = process_search_results(g_results)
|
||||
|
||||
# Extract titles
|
||||
update_progress("Extracting information", progress=0.98)
|
||||
g_titles = extract_info(g_results, 'titles')
|
||||
|
||||
# Final success
|
||||
update_progress("Analysis completed successfully", progress=1.0, level="success")
|
||||
status_container.success("✨ Research completed!")
|
||||
|
||||
# Clear main status after delay
|
||||
time.sleep(1)
|
||||
status_container.empty()
|
||||
|
||||
return {
|
||||
'results': g_results,
|
||||
'titles': g_titles,
|
||||
'summary': processed_results,
|
||||
'stats': {
|
||||
'organic_count': len(g_results.get('organic', [])),
|
||||
'questions_count': len(g_results.get('peopleAlsoAsk', [])),
|
||||
'related_count': len(g_results.get('relatedSearches', []))
|
||||
}
|
||||
}
|
||||
|
||||
else:
|
||||
update_progress("No results found", progress=0.5, level="warning")
|
||||
status_container.warning("⚠️ No results found")
|
||||
return None
|
||||
|
||||
except Exception as err:
|
||||
error_msg = f"Search failed: {str(err)}"
|
||||
update_progress(error_msg, progress=0.5, level="error")
|
||||
logger.error(error_msg)
|
||||
logger.debug("Stack trace:", exc_info=True)
|
||||
raise
|
||||
|
||||
finally:
|
||||
logger.info("="*50)
|
||||
logger.info("Google SERP Search function completed")
|
||||
logger.info("="*50)
|
||||
|
||||
|
||||
def do_tavily_ai_search(search_keywords, max_results=10, **kwargs):
|
||||
""" Common function to do Tavily AI web research."""
|
||||
try:
|
||||
logger.info(f"Doing Tavily AI search for: {search_keywords}")
|
||||
|
||||
# Prepare Tavily search parameters
|
||||
tavily_params = {
|
||||
'max_results': max_results,
|
||||
'search_depth': kwargs.get('search_depth') if isinstance(kwargs.get('search_depth'), str) else ('advanced' if kwargs.get('search_depth', 3) > 2 else 'basic'),
|
||||
'time_range': kwargs.get('time_range', 'year'),
|
||||
'include_domains': kwargs.get('include_domains', [""]) if kwargs.get('include_domains') else [""]
|
||||
}
|
||||
|
||||
# Import the Tavily search function directly
|
||||
from .tavily_ai_search import do_tavily_ai_search as tavily_search
|
||||
|
||||
# Call the actual Tavily search function
|
||||
t_results = tavily_search(
|
||||
keywords=search_keywords,
|
||||
**tavily_params
|
||||
)
|
||||
|
||||
if t_results:
|
||||
t_titles = tavily_extract_information(t_results, 'titles')
|
||||
t_answer = tavily_extract_information(t_results, 'answer')
|
||||
return(t_results, t_titles, t_answer)
|
||||
else:
|
||||
logger.warning("No results returned from Tavily AI search")
|
||||
return None, None, None
|
||||
except Exception as err:
|
||||
logger.error(f"Failed to do Tavily AI Search: {err}")
|
||||
return None, None, None
|
||||
|
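# --- Illustrative unpacking (editor's sketch) ---
# This wrapper returns a 3-tuple, so callers typically unpack it:
#
#     t_results, t_titles, t_answer = do_tavily_ai_search("ai writing tools",
#                                                         max_results=5)
#     if t_results:
#         print(t_answer)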
||||
|
||||
def do_metaphor_ai_research(search_keywords):
|
||||
"""
|
||||
Perform Metaphor AI research and return results with titles.
|
||||
|
||||
Args:
|
||||
search_keywords (str): Keywords to search for
|
||||
|
||||
Returns:
|
||||
tuple: (response_articles, titles) or (None, None) if search fails
|
||||
"""
|
||||
try:
|
||||
logger.info(f"Start Semantic/Neural web search with Metaphor: {search_keywords}")
|
||||
response_articles = metaphor_search_articles(search_keywords)
|
||||
|
||||
if response_articles and 'data' in response_articles:
|
||||
m_titles = [result.get('title', '') for result in response_articles['data'].get('results', [])]
|
||||
return response_articles, m_titles
|
||||
else:
|
||||
logger.warning("No valid results from Metaphor search")
|
||||
return None, None
|
||||
|
||||
except Exception as err:
|
||||
logger.error(f"Failed to do Metaphor search: {err}")
|
||||
return None, None
|
||||
|
||||
|
||||
def do_google_pytrends_analysis(keywords):
|
||||
"""
|
||||
Perform Google Trends analysis for the given keywords.
|
||||
|
||||
Args:
|
||||
keywords (str): The search keywords to analyze
|
||||
|
||||
Returns:
|
||||
dict: A dictionary containing formatted Google Trends data with the following keys:
|
||||
- related_keywords: List of related keywords
|
||||
- interest_over_time: DataFrame with date and interest columns
|
||||
- regional_interest: DataFrame with country_code, country, and interest columns
|
||||
- related_queries: DataFrame with query and value columns
|
||||
- related_topics: DataFrame with topic and value columns
|
||||
"""
|
||||
logger.info(f"Performing Google Trends analysis for keywords: {keywords}")
|
||||
|
||||
# Create a progress container for Streamlit
|
||||
progress_container = st.empty()
|
||||
progress_bar = st.progress(0)
|
||||
|
||||
def update_progress(message, progress=None, level="info"):
|
||||
"""Helper function to update progress in Streamlit UI"""
|
||||
if progress is not None:
|
||||
progress_bar.progress(progress)
|
||||
|
||||
if level == "error":
|
||||
progress_container.error(f"🚫 {message}")
|
||||
elif level == "warning":
|
||||
progress_container.warning(f"⚠️ {message}")
|
||||
else:
|
||||
progress_container.info(f"🔄 {message}")
|
||||
logger.debug(f"Progress update [{level}]: {message}")
|
||||
|
||||
try:
|
||||
# Initialize the formatted data dictionary
|
||||
formatted_data = {
|
||||
'related_keywords': [],
|
||||
'interest_over_time': pd.DataFrame(),
|
||||
'regional_interest': pd.DataFrame(),
|
||||
'related_queries': pd.DataFrame(),
|
||||
'related_topics': pd.DataFrame()
|
||||
}
|
||||
|
||||
# Get raw trends data from google_trends_researcher
|
||||
update_progress("Fetching Google Trends data...", progress=10)
|
||||
raw_trends_data = do_google_trends_analysis(keywords)
|
||||
|
||||
if not raw_trends_data:
|
||||
logger.warning("No Google Trends data returned")
|
||||
update_progress("No Google Trends data returned", level="warning", progress=20)
|
||||
return formatted_data
|
||||
|
||||
# Process related keywords from the raw data
|
||||
update_progress("Processing related keywords...", progress=30)
|
||||
if isinstance(raw_trends_data, list):
|
||||
formatted_data['related_keywords'] = raw_trends_data
|
||||
elif isinstance(raw_trends_data, dict):
|
||||
if 'keywords' in raw_trends_data:
|
||||
formatted_data['related_keywords'] = raw_trends_data['keywords']
|
||||
if 'interest_over_time' in raw_trends_data:
|
||||
formatted_data['interest_over_time'] = raw_trends_data['interest_over_time']
|
||||
if 'regional_interest' in raw_trends_data:
|
||||
formatted_data['regional_interest'] = raw_trends_data['regional_interest']
|
||||
if 'related_queries' in raw_trends_data:
|
||||
formatted_data['related_queries'] = raw_trends_data['related_queries']
|
||||
if 'related_topics' in raw_trends_data:
|
||||
formatted_data['related_topics'] = raw_trends_data['related_topics']
|
||||
|
||||
# If we have keywords but missing other data, try to fetch them using pytrends directly
|
||||
if formatted_data['related_keywords'] and (
|
||||
formatted_data['interest_over_time'].empty or
|
||||
formatted_data['regional_interest'].empty or
|
||||
formatted_data['related_queries'].empty or
|
||||
formatted_data['related_topics'].empty
|
||||
):
|
||||
try:
|
||||
update_progress("Fetching additional data from Google Trends API...", progress=40)
|
||||
from pytrends.request import TrendReq
|
||||
pytrends = TrendReq(hl='en-US', tz=360)
|
||||
|
||||
# Build payload with the main keyword
|
||||
update_progress("Building search payload...", progress=45)
|
||||
pytrends.build_payload([keywords], timeframe='today 12-m', geo='')
|
||||
|
||||
# Get interest over time if missing
|
||||
if formatted_data['interest_over_time'].empty:
|
||||
try:
|
||||
update_progress("Fetching interest over time data...", progress=50)
|
||||
interest_df = pytrends.interest_over_time()
|
||||
if not interest_df.empty:
|
||||
formatted_data['interest_over_time'] = interest_df.reset_index()
|
||||
update_progress(f"Successfully fetched interest over time data with {len(formatted_data['interest_over_time'])} data points", progress=55)
|
||||
else:
|
||||
update_progress("No interest over time data available", level="warning", progress=55)
|
||||
except Exception as e:
|
||||
logger.error(f"Error fetching interest over time: {e}")
|
||||
update_progress(f"Error fetching interest over time: {str(e)}", level="warning", progress=55)
|
||||
|
||||
# Get regional interest if missing
|
||||
if formatted_data['regional_interest'].empty:
|
||||
try:
|
||||
update_progress("Fetching regional interest data...", progress=60)
|
||||
regional_df = pytrends.interest_by_region()
|
||||
if not regional_df.empty:
|
||||
formatted_data['regional_interest'] = regional_df.reset_index()
|
||||
update_progress(f"Successfully fetched regional interest data for {len(formatted_data['regional_interest'])} regions", progress=65)
|
||||
else:
|
||||
update_progress("No regional interest data available", level="warning", progress=65)
|
||||
except Exception as e:
|
||||
logger.error(f"Error fetching regional interest: {e}")
|
||||
update_progress(f"Error fetching regional interest: {str(e)}", level="warning", progress=65)
|
||||
|
||||
# Get related queries if missing
|
||||
if formatted_data['related_queries'].empty:
|
||||
try:
|
||||
update_progress("Fetching related queries data...", progress=70)
|
||||
# Get related queries data
|
||||
related_queries = pytrends.related_queries()
|
||||
|
||||
# Create empty DataFrame as fallback
|
||||
formatted_data['related_queries'] = pd.DataFrame(columns=['query', 'value'])
|
||||
|
||||
# Simple direct approach to avoid list index errors
|
||||
if related_queries and isinstance(related_queries, dict):
|
||||
# Check if our keyword exists in the results
|
||||
if keywords in related_queries:
|
||||
keyword_data = related_queries[keywords]
|
||||
|
||||
# Process top queries if available
|
||||
if 'top' in keyword_data and keyword_data['top'] is not None:
|
||||
try:
|
||||
update_progress("Processing top related queries...", progress=75)
|
||||
# Convert to DataFrame if it's not already
|
||||
if isinstance(keyword_data['top'], pd.DataFrame):
|
||||
top_df = keyword_data['top']
|
||||
else:
|
||||
# Try to convert to DataFrame
|
||||
top_df = pd.DataFrame(keyword_data['top'])
|
||||
|
||||
# Ensure it has the right columns
|
||||
if not top_df.empty:
|
||||
# Rename columns if needed
|
||||
if 'query' in top_df.columns:
|
||||
# Already has the right column name
|
||||
pass
|
||||
elif len(top_df.columns) > 0:
|
||||
# Use first column as query
|
||||
top_df = top_df.rename(columns={top_df.columns[0]: 'query'})
|
||||
|
||||
# Add to our results
|
||||
formatted_data['related_queries'] = top_df
|
||||
update_progress(f"Successfully processed {len(top_df)} top related queries", progress=80)
|
||||
except Exception as e:
|
||||
logger.warning(f"Error processing top queries: {e}")
|
||||
update_progress(f"Error processing top queries: {str(e)}", level="warning", progress=80)
|
||||
|
||||
# Process rising queries if available
|
||||
if 'rising' in keyword_data and keyword_data['rising'] is not None:
|
||||
try:
|
||||
update_progress("Processing rising related queries...", progress=85)
|
||||
# Convert to DataFrame if it's not already
|
||||
if isinstance(keyword_data['rising'], pd.DataFrame):
|
||||
rising_df = keyword_data['rising']
|
||||
else:
|
||||
# Try to convert to DataFrame
|
||||
rising_df = pd.DataFrame(keyword_data['rising'])
|
||||
|
||||
# Ensure it has the right columns
|
||||
if not rising_df.empty:
|
||||
# Rename columns if needed
|
||||
if 'query' in rising_df.columns:
|
||||
# Already has the right column name
|
||||
pass
|
||||
elif len(rising_df.columns) > 0:
|
||||
# Use first column as query
|
||||
rising_df = rising_df.rename(columns={rising_df.columns[0]: 'query'})
|
||||
|
||||
# Combine with existing data if we have any
|
||||
if not formatted_data['related_queries'].empty:
|
||||
formatted_data['related_queries'] = pd.concat([formatted_data['related_queries'], rising_df])
|
||||
update_progress(f"Successfully processed {len(rising_df)} rising related queries", progress=90)
|
||||
else:
|
||||
formatted_data['related_queries'] = rising_df
|
||||
update_progress(f"Successfully processed {len(rising_df)} rising related queries", progress=90)
|
||||
except Exception as e:
|
||||
logger.warning(f"Error processing rising queries: {e}")
|
||||
update_progress(f"Error processing rising queries: {str(e)}", level="warning", progress=90)
|
||||
except Exception as e:
|
||||
logger.error(f"Error fetching related queries: {e}")
|
||||
update_progress(f"Error fetching related queries: {str(e)}", level="warning", progress=90)
|
||||
# Ensure we have an empty DataFrame with the right columns
|
||||
formatted_data['related_queries'] = pd.DataFrame(columns=['query', 'value'])
|
||||
|
||||
# Get related topics if missing
|
||||
if formatted_data['related_topics'].empty:
|
||||
try:
|
||||
update_progress("Fetching related topics data...", progress=95)
|
||||
# Get related topics data
|
||||
related_topics = pytrends.related_topics()
|
||||
|
||||
# Create empty DataFrame as fallback
|
||||
formatted_data['related_topics'] = pd.DataFrame(columns=['topic', 'value'])
|
||||
|
||||
# Simple direct approach to avoid list index errors
|
||||
if related_topics and isinstance(related_topics, dict):
|
||||
# Check if our keyword exists in the results
|
||||
if keywords in related_topics:
|
||||
keyword_data = related_topics[keywords]
|
||||
|
||||
# Process top topics if available
|
||||
if 'top' in keyword_data and keyword_data['top'] is not None:
|
||||
try:
|
||||
update_progress("Processing top related topics...", progress=97)
|
||||
# Convert to DataFrame if it's not already
|
||||
if isinstance(keyword_data['top'], pd.DataFrame):
|
||||
top_df = keyword_data['top']
|
||||
else:
|
||||
# Try to convert to DataFrame
|
||||
top_df = pd.DataFrame(keyword_data['top'])
|
||||
|
||||
# Ensure it has the right columns
|
||||
if not top_df.empty:
|
||||
# Rename columns if needed
|
||||
if 'topic_title' in top_df.columns:
|
||||
top_df = top_df.rename(columns={'topic_title': 'topic'})
|
||||
elif len(top_df.columns) > 0 and 'topic' not in top_df.columns:
|
||||
# Use first column as topic
|
||||
top_df = top_df.rename(columns={top_df.columns[0]: 'topic'})
|
||||
|
||||
# Add to our results
|
||||
formatted_data['related_topics'] = top_df
|
||||
update_progress(f"Successfully processed {len(top_df)} top related topics", progress=98)
|
||||
except Exception as e:
|
||||
logger.warning(f"Error processing top topics: {e}")
|
||||
update_progress(f"Error processing top topics: {str(e)}", level="warning", progress=98)
|
||||
|
||||
# Process rising topics if available
|
||||
if 'rising' in keyword_data and keyword_data['rising'] is not None:
|
||||
try:
|
||||
update_progress("Processing rising related topics...", progress=99)
|
||||
# Convert to DataFrame if it's not already
|
||||
if isinstance(keyword_data['rising'], pd.DataFrame):
|
||||
rising_df = keyword_data['rising']
|
||||
else:
|
||||
# Try to convert to DataFrame
|
||||
rising_df = pd.DataFrame(keyword_data['rising'])
|
||||
|
||||
# Ensure it has the right columns
|
||||
if not rising_df.empty:
|
||||
# Rename columns if needed
|
||||
if 'topic_title' in rising_df.columns:
|
||||
rising_df = rising_df.rename(columns={'topic_title': 'topic'})
|
||||
elif len(rising_df.columns) > 0 and 'topic' not in rising_df.columns:
|
||||
# Use first column as topic
|
||||
rising_df = rising_df.rename(columns={rising_df.columns[0]: 'topic'})
|
||||
|
||||
# Combine with existing data if we have any
|
||||
if not formatted_data['related_topics'].empty:
|
||||
formatted_data['related_topics'] = pd.concat([formatted_data['related_topics'], rising_df])
|
||||
update_progress(f"Successfully processed {len(rising_df)} rising related topics", progress=100)
|
||||
else:
|
||||
formatted_data['related_topics'] = rising_df
|
||||
update_progress(f"Successfully processed {len(rising_df)} rising related topics", progress=100)
|
||||
except Exception as e:
|
||||
logger.warning(f"Error processing rising topics: {e}")
|
||||
update_progress(f"Error processing rising topics: {str(e)}", level="warning", progress=100)
|
||||
except Exception as e:
|
||||
logger.error(f"Error fetching related topics: {e}")
|
||||
update_progress(f"Error fetching related topics: {str(e)}", level="warning", progress=100)
|
||||
# Ensure we have an empty DataFrame with the right columns
|
||||
formatted_data['related_topics'] = pd.DataFrame(columns=['topic', 'value'])
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error fetching additional trends data: {e}")
|
||||
update_progress(f"Error fetching additional trends data: {str(e)}", level="warning", progress=100)
|
||||
|
||||
# Ensure all DataFrames have the correct column names for the UI
|
||||
update_progress("Finalizing data formatting...", progress=100)
|
||||
|
||||
if not formatted_data['interest_over_time'].empty:
|
||||
if 'date' not in formatted_data['interest_over_time'].columns:
|
||||
formatted_data['interest_over_time'] = formatted_data['interest_over_time'].reset_index()
|
||||
if 'interest' not in formatted_data['interest_over_time'].columns and keywords in formatted_data['interest_over_time'].columns:
|
||||
formatted_data['interest_over_time'] = formatted_data['interest_over_time'].rename(columns={keywords: 'interest'})
|
||||
|
||||
if not formatted_data['regional_interest'].empty:
|
||||
if 'country_code' not in formatted_data['regional_interest'].columns and 'geoName' in formatted_data['regional_interest'].columns:
|
||||
formatted_data['regional_interest'] = formatted_data['regional_interest'].rename(columns={'geoName': 'country_code'})
|
||||
if 'interest' not in formatted_data['regional_interest'].columns and keywords in formatted_data['regional_interest'].columns:
|
||||
formatted_data['regional_interest'] = formatted_data['regional_interest'].rename(columns={keywords: 'interest'})
|
||||
|
||||
if not formatted_data['related_queries'].empty:
|
||||
# Handle different column names that might be present in the related queries DataFrame
|
||||
if 'query' not in formatted_data['related_queries'].columns:
|
||||
if 'Top query' in formatted_data['related_queries'].columns:
|
||||
formatted_data['related_queries'] = formatted_data['related_queries'].rename(columns={'Top query': 'query'})
|
||||
elif 'Rising query' in formatted_data['related_queries'].columns:
|
||||
formatted_data['related_queries'] = formatted_data['related_queries'].rename(columns={'Rising query': 'query'})
|
||||
elif 'query' not in formatted_data['related_queries'].columns and len(formatted_data['related_queries'].columns) > 0:
|
||||
# If we have a DataFrame but no 'query' column, use the first column as 'query'
|
||||
first_col = formatted_data['related_queries'].columns[0]
|
||||
formatted_data['related_queries'] = formatted_data['related_queries'].rename(columns={first_col: 'query'})
|
||||
|
||||
if 'value' not in formatted_data['related_queries'].columns and len(formatted_data['related_queries'].columns) > 1:
|
||||
# If we have a second column, use it as 'value'
|
||||
second_col = formatted_data['related_queries'].columns[1]
|
||||
formatted_data['related_queries'] = formatted_data['related_queries'].rename(columns={second_col: 'value'})
|
||||
elif 'value' not in formatted_data['related_queries'].columns:
|
||||
# If no 'value' column exists, add one with default values
|
||||
formatted_data['related_queries']['value'] = 0
|
||||
|
||||
if not formatted_data['related_topics'].empty:
|
||||
# Handle different column names that might be present in the related topics DataFrame
|
||||
if 'topic' not in formatted_data['related_topics'].columns:
|
||||
if 'topic_title' in formatted_data['related_topics'].columns:
|
||||
formatted_data['related_topics'] = formatted_data['related_topics'].rename(columns={'topic_title': 'topic'})
|
||||
elif 'topic' not in formatted_data['related_topics'].columns and len(formatted_data['related_topics'].columns) > 0:
|
||||
# If we have a DataFrame but no 'topic' column, use the first column as 'topic'
|
||||
first_col = formatted_data['related_topics'].columns[0]
|
||||
formatted_data['related_topics'] = formatted_data['related_topics'].rename(columns={first_col: 'topic'})
|
||||
|
||||
if 'value' not in formatted_data['related_topics'].columns and len(formatted_data['related_topics'].columns) > 1:
|
||||
# If we have a second column, use it as 'value'
|
||||
second_col = formatted_data['related_topics'].columns[1]
|
||||
formatted_data['related_topics'] = formatted_data['related_topics'].rename(columns={second_col: 'value'})
|
||||
elif 'value' not in formatted_data['related_topics'].columns:
|
||||
# If no 'value' column exists, add one with default values
|
||||
formatted_data['related_topics']['value'] = 0
|
||||
|
||||
# Clear the progress container after completion
|
||||
progress_container.empty()
|
||||
progress_bar.empty()
|
||||
|
||||
return formatted_data
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in Google Trends analysis: {e}")
|
||||
update_progress(f"Error in Google Trends analysis: {str(e)}", level="error", progress=100)
|
||||
# Clear the progress container after error
|
||||
progress_container.empty()
|
||||
progress_bar.empty()
|
||||
return {
|
||||
'related_keywords': [],
|
||||
'interest_over_time': pd.DataFrame(),
|
||||
'regional_interest': pd.DataFrame(),
|
||||
'related_queries': pd.DataFrame(),
|
||||
'related_topics': pd.DataFrame()
|
||||
}
|
||||
|
||||
|
||||
def metaphor_extract_titles_or_text(json_data, return_titles=True):
|
||||
"""
|
||||
Extract either titles or text from the given JSON structure.
|
||||
|
||||
Args:
|
||||
json_data (list): List of Result objects in JSON format.
|
||||
return_titles (bool): If True, return titles. If False, return text.
|
||||
|
||||
Returns:
|
||||
list: List of titles or text.
|
||||
"""
|
||||
if return_titles:
|
||||
return [result.title for result in json_data]
|
||||
else:
|
||||
return [result.text for result in json_data]
|
||||
|
||||
|
||||
def extract_info(json_data, info_type):
|
||||
"""
|
||||
Extract information (titles, peopleAlsoAsk, or relatedSearches) from the given JSON.
|
||||
|
||||
Args:
|
||||
json_data (dict): The JSON data.
|
||||
info_type (str): The type of information to extract (titles, peopleAlsoAsk, relatedSearches).
|
||||
|
||||
Returns:
|
||||
list or None: A list containing the requested information, or None if the type is invalid.
|
||||
"""
|
||||
if info_type == "titles":
|
||||
return [result.get("title") for result in json_data.get("organic", [])]
|
||||
elif info_type == "peopleAlsoAsk":
|
||||
return [item.get("question") for item in json_data.get("peopleAlsoAsk", [])]
|
||||
elif info_type == "relatedSearches":
|
||||
return [item.get("query") for item in json_data.get("relatedSearches", [])]
|
||||
else:
|
||||
print("Invalid info_type. Please use 'titles', 'peopleAlsoAsk', or 'relatedSearches'.")
|
||||
return None
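# Minimal usage sketch for extract_info (illustrative only; the sample JSON shape below
# is an assumption based on the keys this function reads, not real API output):
#
#   serp_json = {
#       "organic": [{"title": "Example result"}],
#       "peopleAlsoAsk": [{"question": "What is AI?"}],
#       "relatedSearches": [{"query": "ai tools"}],
#   }
#   extract_info(serp_json, "titles")           # -> ["Example result"]
#   extract_info(serp_json, "peopleAlsoAsk")    # -> ["What is AI?"]
#   extract_info(serp_json, "relatedSearches")  # -> ["ai tools"]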
|
||||
|
||||
|
||||
def tavily_extract_information(json_data, keyword):
|
||||
"""
|
||||
Extract information from the given JSON based on the specified keyword.
|
||||
|
||||
Args:
|
||||
json_data (dict): The JSON data.
|
||||
keyword (str): The keyword (title, content, answer, follow-query).
|
||||
|
||||
Returns:
|
||||
list or str: The extracted information based on the keyword.
|
||||
"""
|
||||
if keyword == 'titles':
|
||||
return [result['title'] for result in json_data['results']]
|
||||
elif keyword == 'content':
|
||||
return [result['content'] for result in json_data['results']]
|
||||
elif keyword == 'answer':
|
||||
return json_data['answer']
|
||||
elif keyword == 'follow-query':
|
||||
return json_data['follow_up_questions']
|
||||
else:
|
||||
return f"Invalid keyword: {keyword}"
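# Minimal usage sketch for tavily_extract_information (illustrative; the sample response
# shape mirrors the keys accessed above and is an assumption, not real Tavily output):
#
#   tavily_json = {
#       "results": [{"title": "Example", "content": "Example content"}],
#       "answer": "A short AI-generated answer.",
#       "follow_up_questions": ["What else should I ask?"],
#   }
#   tavily_extract_information(tavily_json, "titles")        # -> ["Example"]
#   tavily_extract_information(tavily_json, "answer")        # -> "A short AI-generated answer."
#   tavily_extract_information(tavily_json, "follow-query")  # -> ["What else should I ask?"]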
|
||||
@@ -0,0 +1,623 @@
|
||||
import os
|
||||
import sys
|
||||
import pandas as pd
|
||||
from io import StringIO
|
||||
from pathlib import Path
|
||||
|
||||
from metaphor_python import Metaphor
|
||||
from datetime import datetime, timedelta
|
||||
|
||||
import streamlit as st
|
||||
from loguru import logger
|
||||
from tqdm import tqdm
|
||||
from tabulate import tabulate
|
||||
from collections import namedtuple
|
||||
import textwrap
|
||||
logger.remove()
|
||||
logger.add(sys.stdout,
|
||||
colorize=True,
|
||||
format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}"
|
||||
)
|
||||
|
||||
from dotenv import load_dotenv
|
||||
load_dotenv(Path('../../.env'))
|
||||
|
||||
from exa_py import Exa
|
||||
|
||||
from tenacity import (retry, stop_after_attempt, wait_random_exponential,)# for exponential backoff
|
||||
from .gpt_summarize_web_content import summarize_web_content
|
||||
from .gpt_competitor_analysis import summarize_competitor_content
|
||||
from .common_utils import save_in_file, cfg_search_param
|
||||
|
||||
|
||||
@retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
|
||||
def get_metaphor_client():
|
||||
"""
|
||||
Get the Metaphor client.
|
||||
|
||||
Returns:
|
||||
Exa: An instance of the Exa client (formerly Metaphor).
|
||||
"""
|
||||
METAPHOR_API_KEY = os.environ.get('METAPHOR_API_KEY')
|
||||
if not METAPHOR_API_KEY:
|
||||
logger.error("METAPHOR_API_KEY environment variable not set!")
|
||||
st.error("METAPHOR_API_KEY environment variable not set!")
|
||||
raise ValueError("METAPHOR_API_KEY environment variable not set!")
|
||||
return Exa(METAPHOR_API_KEY)
|
||||
|
||||
|
||||
def metaphor_rag_search():
|
||||
""" Mainly used for researching blog sections. """
|
||||
metaphor = get_metaphor_client()
|
||||
query = "blog research" # Example query, this can be parameterized as needed
|
||||
results = metaphor.search(query).results  # the Exa/Metaphor client returns a response object; use its .results list
|
||||
if not results:
|
||||
logger.error("No results found for the query.")
|
||||
st.error("No results found for the query.")
|
||||
return None
|
||||
|
||||
# Process the results (this is a placeholder, actual processing logic will depend on requirements)
|
||||
processed_results = [result.title for result in results]
|
||||
|
||||
# Display the results
|
||||
st.write("Search Results:")
|
||||
st.write(processed_results)
|
||||
|
||||
return processed_results
|
||||
|
||||
def metaphor_find_similar(similar_url, usecase, num_results=5, start_published_date=None, end_published_date=None,
|
||||
include_domains=None, exclude_domains=None, include_text=None, exclude_text=None,
|
||||
summary_query=None, progress_bar=None):
|
||||
"""Find similar content using Metaphor API."""
|
||||
|
||||
try:
|
||||
# Initialize progress if not provided
|
||||
if progress_bar is None:
|
||||
progress_bar = st.progress(0.0)
|
||||
|
||||
# Update progress
|
||||
progress_bar.progress(0.1, text="Initializing search...")
|
||||
|
||||
# Get Metaphor client
|
||||
metaphor = get_metaphor_client()
|
||||
logger.info(f"Initialized Metaphor client for URL: {similar_url}")
|
||||
|
||||
# Prepare search parameters
|
||||
search_params = {
|
||||
"highlights": True,
|
||||
"num_results": num_results,
|
||||
}
|
||||
|
||||
# Add optional parameters if provided
|
||||
if start_published_date:
|
||||
search_params["start_published_date"] = start_published_date
|
||||
if end_published_date:
|
||||
search_params["end_published_date"] = end_published_date
|
||||
if include_domains:
|
||||
search_params["include_domains"] = include_domains
|
||||
if exclude_domains:
|
||||
search_params["exclude_domains"] = exclude_domains
|
||||
if include_text:
|
||||
search_params["include_text"] = include_text
|
||||
if exclude_text:
|
||||
search_params["exclude_text"] = exclude_text
|
||||
|
||||
# Add summary query
|
||||
if summary_query:
|
||||
search_params["summary"] = summary_query
|
||||
else:
|
||||
search_params["summary"] = {"query": f"Find {usecase} similar to the given URL."}
|
||||
|
||||
logger.debug(f"Search parameters: {search_params}")
|
||||
|
||||
# Update progress
|
||||
progress_bar.progress(0.2, text="Preparing search parameters...")
|
||||
|
||||
# Make API call
|
||||
logger.info("Calling Metaphor API find_similar_and_contents...")
|
||||
search_response = metaphor.find_similar_and_contents(
|
||||
similar_url,
|
||||
**search_params
|
||||
)
|
||||
|
||||
if search_response and hasattr(search_response, 'results'):
|
||||
competitors = search_response.results
|
||||
total_results = len(competitors)
|
||||
|
||||
# Update progress
|
||||
progress_bar.progress(0.3, text=f"Found {total_results} results...")
|
||||
|
||||
# Process results
|
||||
processed_results = []
|
||||
for i, result in enumerate(competitors):
|
||||
# Calculate progress as decimal (0.0-1.0)
|
||||
progress = 0.3 + (0.6 * (i / total_results))
|
||||
progress_text = f"Processing result {i+1}/{total_results}..."
|
||||
progress_bar.progress(progress, text=progress_text)
|
||||
|
||||
# Process each result
|
||||
processed_result = {
|
||||
"Title": result.title,
|
||||
"URL": result.url,
|
||||
"Content Summary": result.text if hasattr(result, 'text') else "No content available"
|
||||
}
|
||||
processed_results.append(processed_result)
|
||||
|
||||
# Update progress
|
||||
progress_bar.progress(0.9, text="Finalizing results...")
|
||||
|
||||
# Create DataFrame
|
||||
df = pd.DataFrame(processed_results)
|
||||
|
||||
# Update progress
|
||||
progress_bar.progress(1.0, text="Analysis completed!")
|
||||
|
||||
return df, search_response
|
||||
|
||||
else:
|
||||
logger.warning("No results found in search response")
|
||||
progress_bar.progress(1.0, text="No results found")
|
||||
return pd.DataFrame(), search_response
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in metaphor_find_similar: {str(e)}", exc_info=True)
|
||||
if progress_bar:
|
||||
progress_bar.progress(1.0, text="Error occurred during analysis")
|
||||
raise
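# Minimal usage sketch for metaphor_find_similar (illustrative; the URL and use case are
# placeholders, and a real run needs METAPHOR_API_KEY plus a Streamlit context):
#
#   df, raw_response = metaphor_find_similar(
#       similar_url="https://example.com/blog/ai-writing",
#       usecase="competitor blog posts",
#       num_results=5,
#       exclude_domains=["example.com"],  # typically exclude the analyzed site itself
#   )
#   if not df.empty:
#       print(df[["Title", "URL"]].head())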
|
||||
|
||||
|
||||
def calculate_date_range(time_range: str) -> tuple:
|
||||
"""
|
||||
Calculate start and end dates based on time range selection.
|
||||
|
||||
Args:
|
||||
time_range (str): One of 'past_day', 'past_week', 'past_month', 'past_year', 'anytime'
|
||||
|
||||
Returns:
|
||||
tuple: (start_date, end_date) in ISO format with milliseconds
|
||||
"""
|
||||
now = datetime.utcnow()
|
||||
end_date = now.strftime('%Y-%m-%dT%H:%M:%S.999Z')
|
||||
|
||||
if time_range == 'past_day':
|
||||
start_date = (now - timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%S.000Z')
|
||||
elif time_range == 'past_week':
|
||||
start_date = (now - timedelta(weeks=1)).strftime('%Y-%m-%dT%H:%M:%S.000Z')
|
||||
elif time_range == 'past_month':
|
||||
start_date = (now - timedelta(days=30)).strftime('%Y-%m-%dT%H:%M:%S.000Z')
|
||||
elif time_range == 'past_year':
|
||||
start_date = (now - timedelta(days=365)).strftime('%Y-%m-%dT%H:%M:%S.000Z')
|
||||
else: # anytime
|
||||
start_date = None
|
||||
end_date = None
|
||||
|
||||
return start_date, end_date
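# Quick usage example for calculate_date_range (return values depend on the current UTC time):
#
#   start, end = calculate_date_range("past_week")
#   # start -> e.g. "2024-01-01T00:00:00.000Z", end -> e.g. "2024-01-08T12:34:56.999Z"
#   calculate_date_range("anytime")  # -> (None, None), i.e. no date filter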
|
||||
|
||||
def metaphor_search_articles(query, search_options: dict = None):
|
||||
"""
|
||||
Search for articles using the Metaphor/Exa API.
|
||||
|
||||
Args:
|
||||
query (str): The search query.
|
||||
search_options (dict): Search configuration options including:
|
||||
- num_results (int): Number of results to retrieve
|
||||
- use_autoprompt (bool): Whether to use autoprompt
|
||||
- include_domains (list): List of domains to include
|
||||
- time_range (str): One of 'past_day', 'past_week', 'past_month', 'past_year', 'anytime'
|
||||
- exclude_domains (list): List of domains to exclude
|
||||
|
||||
Returns:
|
||||
dict: Search results and metadata
|
||||
"""
|
||||
exa = get_metaphor_client()
|
||||
try:
|
||||
# Initialize default search options
|
||||
if search_options is None:
|
||||
search_options = {}
|
||||
|
||||
# Get config parameters or use defaults
|
||||
try:
|
||||
include_domains, _, num_results, _ = cfg_search_param('exa')
|
||||
except Exception as cfg_err:
|
||||
logger.warning(f"Failed to load config parameters: {cfg_err}. Using defaults.")
|
||||
include_domains = None
|
||||
num_results = 10
|
||||
|
||||
# Calculate date range based on time_range option
|
||||
time_range = search_options.get('time_range', 'anytime')
|
||||
start_published_date, end_published_date = calculate_date_range(time_range)
|
||||
|
||||
# Prepare search parameters
|
||||
search_params = {
|
||||
'num_results': search_options.get('num_results', num_results),
|
||||
'summary': True, # Always get summaries
|
||||
'include_domains': search_options.get('include_domains', include_domains),
|
||||
'use_autoprompt': search_options.get('use_autoprompt', True),
|
||||
}
|
||||
|
||||
# Add date parameters only if they are not None
|
||||
if start_published_date:
|
||||
search_params['start_published_date'] = start_published_date
|
||||
if end_published_date:
|
||||
search_params['end_published_date'] = end_published_date
|
||||
|
||||
logger.info(f"Exa web search with params: {search_params} and Query: {query}")
|
||||
|
||||
# Execute search
|
||||
search_response = exa.search_and_contents(
|
||||
query,
|
||||
**search_params
|
||||
)
|
||||
|
||||
if not search_response or not hasattr(search_response, 'results'):
|
||||
logger.warning("No results returned from Exa search")
|
||||
return None
|
||||
|
||||
# Get cost information safely
|
||||
try:
|
||||
cost_dollars = {
|
||||
'total': float(search_response.cost_dollars['total']),
|
||||
} if hasattr(search_response, 'cost_dollars') else None
|
||||
except Exception as cost_err:
|
||||
logger.warning(f"Error processing cost information: {cost_err}")
|
||||
cost_dollars = None
|
||||
|
||||
# Format response to match expected structure
|
||||
formatted_response = {
|
||||
"data": {
|
||||
"requestId": getattr(search_response, 'request_id', None),
|
||||
"resolvedSearchType": "neural",
|
||||
"results": [
|
||||
{
|
||||
"id": result.url,
|
||||
"title": result.title,
|
||||
"url": result.url,
|
||||
"publishedDate": result.published_date if hasattr(result, 'published_date') else None,
|
||||
"author": getattr(result, 'author', None),
|
||||
"score": getattr(result, 'score', 0),
|
||||
"summary": result.summary if hasattr(result, 'summary') else None,
|
||||
"text": result.text if hasattr(result, 'text') else None,
|
||||
"image": getattr(result, 'image', None),
|
||||
"favicon": getattr(result, 'favicon', None)
|
||||
}
|
||||
for result in search_response.results
|
||||
],
|
||||
"costDollars": cost_dollars
|
||||
}
|
||||
}
|
||||
|
||||
# Get AI-generated answer from Metaphor
|
||||
try:
|
||||
exa_answer = get_exa_answer(query)
|
||||
if exa_answer:
|
||||
formatted_response.update(exa_answer)
|
||||
except Exception as exa_err:
|
||||
logger.warning(f"Error getting Exa answer: {exa_err}")
|
||||
|
||||
# Get AI-generated answer from Tavily
|
||||
try:
|
||||
# Import the function directly from the module
|
||||
import importlib
|
||||
tavily_module = importlib.import_module('lib.ai_web_researcher.tavily_ai_search')
|
||||
if hasattr(tavily_module, 'do_tavily_ai_search'):
|
||||
tavily_response = tavily_module.do_tavily_ai_search(query)
|
||||
if tavily_response and 'answer' in tavily_response:
|
||||
formatted_response.update({
|
||||
"tavily_answer": tavily_response.get("answer"),
|
||||
"tavily_citations": tavily_response.get("citations", []),
|
||||
"tavily_cost_dollars": tavily_response.get("costDollars", {"total": 0})
|
||||
})
|
||||
else:
|
||||
logger.warning("do_tavily_ai_search function not found in tavily_ai_search module")
|
||||
except Exception as tavily_err:
|
||||
logger.warning(f"Error getting Tavily answer: {tavily_err}")
|
||||
|
||||
# Return the formatted response without displaying it
|
||||
# The display will be handled by gpt_web_researcher
|
||||
return formatted_response
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in Exa searching articles: {e}")
|
||||
return None
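# Minimal usage sketch for metaphor_search_articles (illustrative; the query and options
# are placeholders, and the call requires METAPHOR_API_KEY to be set):
#
#   search_options = {
#       "num_results": 5,
#       "time_range": "past_month",
#       "use_autoprompt": True,
#       "exclude_domains": ["pinterest.com"],
#   }
#   response = metaphor_search_articles("AI content marketing tools", search_options)
#   if response:
#       for item in response["data"]["results"]:
#           print(item["title"], item["url"])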
|
||||
|
||||
def streamlit_display_metaphor_results(metaphor_response, search_keywords=None):
|
||||
"""Display Metaphor search results in Streamlit."""
|
||||
|
||||
if not metaphor_response:
|
||||
st.error("No search results found.")
|
||||
return
|
||||
|
||||
# Add debug logging
|
||||
logger.debug(f"Displaying Metaphor results. Type: {type(metaphor_response)}")
|
||||
if isinstance(metaphor_response, dict):
|
||||
logger.debug(f"Metaphor response keys: {metaphor_response.keys()}")
|
||||
|
||||
# Initialize session state variables if they don't exist
|
||||
if 'search_insights' not in st.session_state:
|
||||
st.session_state.search_insights = None
|
||||
if 'metaphor_response' not in st.session_state:
|
||||
st.session_state.metaphor_response = None
|
||||
if 'insights_generated' not in st.session_state:
|
||||
st.session_state.insights_generated = False
|
||||
|
||||
# Store the current response in session state
|
||||
st.session_state.metaphor_response = metaphor_response
|
||||
|
||||
# Display search results
|
||||
st.subheader("🔍 Search Results")
|
||||
|
||||
# Calculate metrics - handle different data structures
|
||||
results = []
|
||||
if isinstance(metaphor_response, dict):
|
||||
if 'data' in metaphor_response and 'results' in metaphor_response['data']:
|
||||
results = metaphor_response['data']['results']
|
||||
elif 'results' in metaphor_response:
|
||||
results = metaphor_response['results']
|
||||
|
||||
total_results = len(results)
|
||||
avg_relevance = sum(r.get('score', 0) for r in results) / total_results if total_results > 0 else 0
|
||||
|
||||
# Display metrics
|
||||
col1, col2 = st.columns(2)
|
||||
with col1:
|
||||
st.metric("Total Results", total_results)
|
||||
with col2:
|
||||
st.metric("Average Relevance Score", f"{avg_relevance:.2f}")
|
||||
|
||||
# Display AI-generated answers if available
|
||||
if 'tavily_answer' in metaphor_response or 'answer' in metaphor_response:
|
||||
st.subheader("🤖 AI-Generated Answers")
|
||||
|
||||
if 'tavily_answer' in metaphor_response:
|
||||
st.markdown("**Tavily AI Answer:**")
|
||||
st.write(metaphor_response['tavily_answer'])
|
||||
|
||||
if 'answer' in metaphor_response:  # the Exa answer is merged in under the 'answer' key by metaphor_search_articles
|
||||
st.markdown("**Metaphor AI Answer:**")
|
||||
st.write(metaphor_response['answer'])
|
||||
|
||||
# Get Search Insights button
|
||||
if st.button("Generate Search Insights", key="metaphor_generate_insights_button"):
|
||||
st.session_state.insights_generated = True
|
||||
st.rerun()
|
||||
|
||||
# Display insights if they exist in session state
|
||||
if st.session_state.search_insights:
|
||||
st.subheader("🔍 Search Insights")
|
||||
st.write(st.session_state.search_insights)
|
||||
|
||||
# Display search results in a data editor
|
||||
st.subheader("📊 Detailed Results")
|
||||
|
||||
# Prepare data for display
|
||||
results_data = []
|
||||
for result in results:
|
||||
result_data = {
|
||||
'Title': result.get('title', ''),
|
||||
'URL': result.get('url', ''),
|
||||
'Snippet': result.get('summary', ''),
|
||||
'Relevance Score': result.get('score', 0),
|
||||
'Published Date': result.get('publishedDate', '')
|
||||
}
|
||||
results_data.append(result_data)
|
||||
|
||||
# Create DataFrame
|
||||
df = pd.DataFrame(results_data)
|
||||
|
||||
# Display the DataFrame if it's not empty
|
||||
if not df.empty:
|
||||
# Configure columns
|
||||
st.dataframe(
|
||||
df,
|
||||
column_config={
|
||||
"Title": st.column_config.TextColumn(
|
||||
"Title",
|
||||
help="Title of the search result",
|
||||
width="large",
|
||||
),
|
||||
"URL": st.column_config.LinkColumn(
|
||||
"URL",
|
||||
help="Link to the search result",
|
||||
width="medium",
|
||||
display_text="Visit Article",
|
||||
),
|
||||
"Snippet": st.column_config.TextColumn(
|
||||
"Snippet",
|
||||
help="Summary of the search result",
|
||||
width="large",
|
||||
),
|
||||
"Relevance Score": st.column_config.NumberColumn(
|
||||
"Relevance Score",
|
||||
help="Relevance score of the search result",
|
||||
format="%.2f",
|
||||
width="small",
|
||||
),
|
||||
"Published Date": st.column_config.DateColumn(
|
||||
"Published Date",
|
||||
help="Publication date of the search result",
|
||||
width="medium",
|
||||
),
|
||||
},
|
||||
hide_index=True,
|
||||
)
|
||||
|
||||
# Add popover for snippets
|
||||
st.markdown("""
|
||||
<style>
|
||||
.snippet-popover {
|
||||
position: relative;
|
||||
display: inline-block;
|
||||
}
|
||||
.snippet-popover .snippet-content {
|
||||
visibility: hidden;
|
||||
width: 300px;
|
||||
background-color: #f9f9f9;
|
||||
color: #333;
|
||||
text-align: left;
|
||||
border-radius: 6px;
|
||||
padding: 10px;
|
||||
position: absolute;
|
||||
z-index: 1;
|
||||
bottom: 125%;
|
||||
left: 50%;
|
||||
margin-left: -150px;
|
||||
opacity: 0;
|
||||
transition: opacity 0.3s;
|
||||
box-shadow: 0 2px 5px rgba(0,0,0,0.2);
|
||||
}
|
||||
.snippet-popover:hover .snippet-content {
|
||||
visibility: visible;
|
||||
opacity: 1;
|
||||
}
|
||||
</style>
|
||||
""", unsafe_allow_html=True)
|
||||
|
||||
# Display snippets with popover
|
||||
st.subheader("📝 Snippets")
|
||||
for i, result in enumerate(results):
|
||||
snippet = result.get('summary', '')
|
||||
if snippet:
|
||||
st.markdown(f"""
|
||||
<div class="snippet-popover">
|
||||
<strong>{result.get('title', '')}</strong>
|
||||
<div class="snippet-content">
|
||||
{snippet}
|
||||
</div>
|
||||
</div>
|
||||
""", unsafe_allow_html=True)
|
||||
else:
|
||||
st.info("No detailed results available.")
|
||||
|
||||
# Add a collapsible section for the raw JSON data
|
||||
with st.expander("Research Results (JSON)", expanded=False):
|
||||
st.json(metaphor_response)
|
||||
|
||||
|
||||
def metaphor_news_summarizer(news_keywords):
|
||||
""" build a LLM-based news summarizer app with the Exa API to keep us up-to-date
|
||||
with the latest news on a given topic.
|
||||
"""
|
||||
exa = get_metaphor_client()
|
||||
|
||||
# FIXME: Needs to be user defined.
|
||||
one_week_ago = (datetime.now() - timedelta(days=7))
|
||||
date_cutoff = one_week_ago.strftime("%Y-%m-%d")
|
||||
|
||||
search_response = exa.search_and_contents(
|
||||
news_keywords, use_autoprompt=True, start_published_date=date_cutoff
|
||||
)
|
||||
|
||||
urls = [result.url for result in search_response.results]
|
||||
print("URLs:")
|
||||
for url in urls:
|
||||
print(url)
|
||||
|
||||
|
||||
def print_search_result(contents_response):
|
||||
# Define the Result namedtuple
|
||||
Result = namedtuple("Result", ["url", "title", "text"])
|
||||
# Tabulate the data
|
||||
table_headers = ["URL", "Title", "Summary"]
|
||||
table_data = [(result.url, result.title, result.text) for result in contents_response]
|
||||
|
||||
table = tabulate(table_data,
|
||||
headers=table_headers,
|
||||
tablefmt="fancy_grid",
|
||||
colalign=["left", "left", "left"],
|
||||
maxcolwidths=[20, 20, 70])
|
||||
|
||||
# Convert table_data to DataFrame
|
||||
|
||||
df = pd.DataFrame(table_data, columns=["URL", "Title", "Summary"])
|
||||
|
||||
st.table(df)
|
||||
print(table)
|
||||
# Save the combined table to a file
|
||||
try:
|
||||
save_in_file(table)
|
||||
except Exception as save_results_err:
|
||||
logger.error(f"Failed to save search results: {save_results_err}")
|
||||
|
||||
|
||||
def metaphor_scholar_search(query, include_domains=None, time_range="anytime"):
|
||||
"""
|
||||
Search for papers using the Metaphor API.
|
||||
|
||||
Args:
|
||||
query (str): The search query.
|
||||
include_domains (list): List of domains to include.
|
||||
time_range (str): Time range for published articles ("day", "week", "month", "year", "anytime").
|
||||
|
||||
Returns:
|
||||
MetaphorResponse: The response from the Metaphor API.
|
||||
"""
|
||||
client = get_metaphor_client()
|
||||
try:
|
||||
if time_range == "day":
|
||||
start_published_date = (datetime.utcnow() - timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%SZ')
|
||||
elif time_range == "week":
|
||||
start_published_date = (datetime.utcnow() - timedelta(weeks=1)).strftime('%Y-%m-%dT%H:%M:%SZ')
|
||||
elif time_range == "month":
|
||||
start_published_date = (datetime.utcnow() - timedelta(weeks=4)).strftime('%Y-%m-%dT%H:%M:%SZ')
|
||||
elif time_range == "year":
|
||||
start_published_date = (datetime.utcnow() - timedelta(days=365)).strftime('%Y-%m-%dT%H:%M:%SZ')
|
||||
else:
|
||||
start_published_date = None
|
||||
|
||||
response = client.search(query, include_domains=include_domains, start_published_date=start_published_date, use_autoprompt=True)
|
||||
return response
|
||||
except Exception as e:
|
||||
logger.error(f"Error in searching papers: {e}")
|
||||
|
||||
def get_exa_answer(query: str, system_prompt: str = None) -> dict:
|
||||
"""
|
||||
Get an AI-generated answer for a query using Exa's answer endpoint.
|
||||
|
||||
Args:
|
||||
query (str): The search query to get an answer for
|
||||
system_prompt (str, optional): Custom system prompt for the LLM. If None, uses default prompt.
|
||||
|
||||
Returns:
|
||||
dict: Response containing answer, citations, and cost information
|
||||
{
|
||||
"answer": str,
|
||||
"citations": list[dict],
|
||||
"costDollars": dict
|
||||
}
|
||||
"""
|
||||
exa = get_metaphor_client()
|
||||
try:
|
||||
# Use default system prompt if none provided
|
||||
if system_prompt is None:
|
||||
system_prompt = (
|
||||
"I am doing research to write factual content. "
|
||||
"Help me find answers for content generation task. "
|
||||
"Provide detailed, well-structured answers with clear citations."
|
||||
)
|
||||
|
||||
logger.info(f"Getting Exa answer for query: {query}")
|
||||
logger.debug(f"Using system prompt: {system_prompt}")
|
||||
|
||||
# Make the API call to get an answer; note that system_prompt is prepared above but is not currently passed to exa.answer
|
||||
result = exa.answer(
|
||||
query,
|
||||
model="exa",
|
||||
text=True # Include full text in citations
|
||||
)
|
||||
|
||||
if not result or not result.get('answer'):
|
||||
logger.warning("No answer received from Exa")
|
||||
return None
|
||||
|
||||
# Format response to match expected structure
|
||||
response = {
|
||||
"answer": result.get('answer'),
|
||||
"citations": result.get('citations', []),
|
||||
"costDollars": result.get('costDollars', {"total": 0})
|
||||
}
|
||||
|
||||
return response
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting Exa answer: {e}")
|
||||
return None
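# Minimal usage sketch for get_exa_answer (illustrative; requires METAPHOR_API_KEY, and the
# consumed keys match the dict this function itself builds above):
#
#   answer_payload = get_exa_answer("What are the benefits of guest posting?")
#   if answer_payload:
#       print(answer_payload["answer"])
#       for citation in answer_payload["citations"]:
#           print(citation)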
|
||||
218
ToBeMigrated/ai_web_researcher/tavily_ai_search.py
Normal file
218
ToBeMigrated/ai_web_researcher/tavily_ai_search.py
Normal file
@@ -0,0 +1,218 @@
|
||||
"""
|
||||
This Python script uses the Tavily AI service to perform advanced searches based on specified keywords and options. It retrieves Tavily AI search results, pretty-prints them using Rich and Tabulate, and provides additional information such as the answer to the search query and follow-up questions.
|
||||
|
||||
Features:
|
||||
- Utilizes the Tavily AI service for advanced searches.
|
||||
- Retrieves API keys from the environment variables loaded from a .env file.
|
||||
- Configures logging with Loguru for informative messages.
|
||||
- Implements a retry mechanism using Tenacity to handle transient failures during Tavily searches.
|
||||
- Displays search results, including titles, snippets, and links, in a visually appealing table using Tabulate and Rich.
|
||||
|
||||
Usage:
|
||||
- Ensure the necessary API keys are set in the .env file.
|
||||
- Run the script to perform a Tavily AI search with specified keywords and options.
|
||||
- The search results, including titles, snippets, and links, are displayed in a formatted table.
|
||||
- Additional information, such as the answer to the search query and follow-up questions, is presented in separate tables.
|
||||
|
||||
Modifications:
|
||||
- To modify the script, update the environment variables in the .env file with the required API keys.
|
||||
- Adjust the search parameters, such as keywords and search depth, in the `do_tavily_ai_search` function as needed.
|
||||
- Customize logging configurations and table formatting according to preferences.
|
||||
|
||||
To-Do (TBD):
|
||||
- Consider adding further enhancements or customization based on specific use cases.
|
||||
|
||||
"""
|
||||
|
||||
|
||||
import os
|
||||
from pathlib import Path
|
||||
import sys
|
||||
from dotenv import load_dotenv
|
||||
from loguru import logger
|
||||
from tavily import TavilyClient
|
||||
from rich import print
|
||||
from tabulate import tabulate
|
||||
# Load environment variables from .env file
|
||||
load_dotenv(Path('../../.env'))
|
||||
from rich import print
|
||||
import streamlit as st
|
||||
# Configure logger
|
||||
logger.remove()
|
||||
logger.add(sys.stdout,
|
||||
colorize=True,
|
||||
format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}"
|
||||
)
|
||||
|
||||
from .common_utils import save_in_file, cfg_search_param
|
||||
from tenacity import retry, stop_after_attempt, wait_random_exponential
|
||||
|
||||
|
||||
@retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
|
||||
def do_tavily_ai_search(keywords, max_results=5, include_domains=None, search_depth="advanced", **kwargs):
|
||||
"""
|
||||
Get Tavily AI search results based on specified keywords and options.
|
||||
"""
|
||||
# Run Tavily search
|
||||
logger.info(f"Running Tavily search on: {keywords}")
|
||||
|
||||
# Retrieve API keys
|
||||
api_key = os.getenv('TAVILY_API_KEY')
|
||||
if not api_key:
|
||||
raise ValueError("API keys for Tavily is Not set.")
|
||||
|
||||
# Initialize Tavily client
|
||||
try:
|
||||
client = TavilyClient(api_key=api_key)
|
||||
except Exception as err:
|
||||
logger.error(f"Failed to create Tavily client. Check TAVILY_API_KEY: {err}")
|
||||
raise
|
||||
|
||||
try:
|
||||
# Create search parameters exactly matching Tavily's API format
|
||||
tavily_search_result = client.search(
|
||||
query=keywords,
|
||||
search_depth="advanced",
|
||||
time_range="year",
|
||||
include_answer="advanced",
|
||||
include_domains=[""] if not include_domains else include_domains,
|
||||
max_results=max_results
|
||||
)
|
||||
|
||||
if tavily_search_result:
|
||||
print_result_table(tavily_search_result)
|
||||
streamlit_display_results(tavily_search_result)
|
||||
return tavily_search_result
|
||||
return None
|
||||
|
||||
except Exception as err:
|
||||
logger.error(f"Failed to do Tavily Research: {err}")
|
||||
raise
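# Minimal usage sketch for do_tavily_ai_search (illustrative; requires TAVILY_API_KEY and a
# Streamlit context, since results are also rendered in the UI; the domain filter is a placeholder):
#
#   result = do_tavily_ai_search(
#       "AI tools for content marketing",
#       max_results=5,
#       include_domains=["searchengineland.com"],
#   )
#   if result:
#       print(result.get("answer"))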
|
||||
|
||||
|
||||
def streamlit_display_results(output_data):
|
||||
"""Display Tavily AI search results in Streamlit UI with enhanced visualization."""
|
||||
|
||||
# Display the 'answer' in Streamlit with enhanced styling
|
||||
answer = output_data.get("answer", "No answer available")
|
||||
st.markdown("### 🤖 AI-Generated Answer")
|
||||
st.markdown(f"""
|
||||
<div style="background-color: #f0f2f6; padding: 20px; border-radius: 10px; border-left: 5px solid #4CAF50;">
|
||||
{answer}
|
||||
</div>
|
||||
""", unsafe_allow_html=True)
|
||||
|
||||
# Display follow-up questions if available
|
||||
follow_up_questions = output_data.get("follow_up_questions", [])
|
||||
if follow_up_questions:
|
||||
st.markdown("### ❓ Follow-up Questions")
|
||||
for i, question in enumerate(follow_up_questions, 1):
|
||||
st.markdown(f"**{i}.** {question}")
|
||||
|
||||
# Prepare data for display with dataeditor
|
||||
st.markdown("### 📊 Search Results")
|
||||
|
||||
# Create a DataFrame for the results
|
||||
import pandas as pd
|
||||
results_data = []
|
||||
|
||||
for item in output_data.get("results", []):
|
||||
title = item.get("title", "")
|
||||
snippet = item.get("content", "")
|
||||
link = item.get("url", "")
|
||||
results_data.append({
|
||||
"Title": title,
|
||||
"Content": snippet,
|
||||
"Link": link
|
||||
})
|
||||
|
||||
if results_data:
|
||||
df = pd.DataFrame(results_data)
|
||||
|
||||
# Display the data editor
|
||||
st.data_editor(
|
||||
df,
|
||||
column_config={
|
||||
"Title": st.column_config.TextColumn(
|
||||
"Title",
|
||||
help="Article title",
|
||||
width="medium",
|
||||
),
|
||||
"Content": st.column_config.TextColumn(
|
||||
"Content",
|
||||
help="Click the button below to view full content",
|
||||
width="large",
|
||||
),
|
||||
"Link": st.column_config.LinkColumn(
|
||||
"Link",
|
||||
help="Click to visit the website",
|
||||
width="small",
|
||||
display_text="Visit Site"
|
||||
),
|
||||
},
|
||||
hide_index=True,
|
||||
use_container_width=True,
|
||||
)
|
||||
|
||||
# Add popovers for full content display
|
||||
for item in output_data.get("results", []):
|
||||
with st.popover(f"View content: {item.get('title', '')[:50]}..."):
|
||||
st.markdown(item.get("content", ""))
|
||||
else:
|
||||
st.info("No results found for your search query.")
|
||||
|
||||
|
||||
def print_result_table(output_data):
|
||||
""" Pretty print the tavily AI search result. """
|
||||
# Prepare data for tabulate
|
||||
table_data = []
|
||||
for item in output_data.get("results"):
|
||||
title = item.get("title", "")
|
||||
snippet = item.get("content", "")
|
||||
link = item.get("url", "")
|
||||
table_data.append([title, snippet, link])
|
||||
|
||||
# Define table headers
|
||||
table_headers = ["Title", "Snippet", "Link"]
|
||||
# Display the table using tabulate
|
||||
table = tabulate(table_data,
|
||||
headers=table_headers,
|
||||
tablefmt="fancy_grid",
|
||||
colalign=["left", "left", "left"],
|
||||
maxcolwidths=[30, 60, 30])
|
||||
# Print the table
|
||||
print(table)
|
||||
|
||||
# Save the combined table to a file
|
||||
try:
|
||||
save_in_file(table)
|
||||
except Exception as save_results_err:
|
||||
logger.error(f"Failed to save search results: {save_results_err}")
|
||||
|
||||
# Display the 'answer' in a table
|
||||
table_headers = [f"The answer to search query: {output_data.get('query')}"]
|
||||
table_data = [[output_data.get("answer")]]
|
||||
table = tabulate(table_data,
|
||||
headers=table_headers,
|
||||
tablefmt="fancy_grid",
|
||||
maxcolwidths=[80])
|
||||
print(table)
|
||||
# Save the combined table to a file
|
||||
try:
|
||||
save_in_file(table)
|
||||
except Exception as save_results_err:
|
||||
logger.error(f"Failed to save search results: {save_results_err}")
|
||||
|
||||
# Display the 'follow_up_questions' in a table
|
||||
if output_data.get("follow_up_questions"):
|
||||
table_headers = [f"Search Engine follow up questions for query: {output_data.get('query')}"]
|
||||
table_data = [[output_data.get("follow_up_questions")]]
|
||||
table = tabulate(table_data,
|
||||
headers=table_headers,
|
||||
tablefmt="fancy_grid",
|
||||
maxcolwidths=[80])
|
||||
print(table)
|
||||
try:
|
||||
save_in_file(table)
|
||||
except Exception as save_results_err:
|
||||
logger.error(f"Failed to save search results: {save_results_err}")
|
||||
184
ToBeMigrated/ai_writers/ai_essay_writer.py
Normal file
184
ToBeMigrated/ai_writers/ai_essay_writer.py
Normal file
@@ -0,0 +1,184 @@
|
||||
#####################################################
|
||||
#
|
||||
# Alwrity, AI essay writer - Essay_Writing_with_Prompt_Chaining
|
||||
#
|
||||
#####################################################
|
||||
|
||||
import os
|
||||
from pathlib import Path
|
||||
from dotenv import load_dotenv
|
||||
from pprint import pprint
|
||||
from loguru import logger
|
||||
import sys
|
||||
|
||||
from ..gpt_providers.text_generation.main_text_generation import llm_text_gen
|
||||
|
||||
|
||||
def generate_with_retry(prompt, system_prompt=None):
|
||||
"""
|
||||
Generates content using the llm_text_gen function, logging errors and returning an empty string on failure.
|
||||
|
||||
Parameters:
|
||||
prompt (str): The prompt to generate content from.
|
||||
system_prompt (str, optional): Custom system prompt to use instead of the default one.
|
||||
|
||||
Returns:
|
||||
str: The generated content.
|
||||
"""
|
||||
try:
|
||||
# Use llm_text_gen instead of directly calling the model
|
||||
return llm_text_gen(prompt, system_prompt)
|
||||
except Exception as e:
|
||||
logger.error(f"Error generating content: {e}")
|
||||
return ""
|
||||
|
||||
|
||||
def ai_essay_generator(essay_title, selected_essay_type, selected_education_level, selected_num_pages):
|
||||
"""
|
||||
Write an Essay using prompt chaining and iterative generation.
|
||||
|
||||
Parameters:
|
||||
essay_title (str): The title or topic of the essay.
|
||||
selected_essay_type (str): The type of essay to write.
|
||||
selected_education_level (str): The education level of the target audience.
|
||||
selected_num_pages (int): The number of pages or words for the essay.
|
||||
"""
|
||||
logger.info(f"Starting to write Essay on {essay_title}..")
|
||||
try:
|
||||
# Define persona and writing guidelines
|
||||
guidelines = f'''\
|
||||
Writing Guidelines
|
||||
|
||||
As an expert essay writer and academic researcher, demonstrate your world-class essay writing skills.
|
||||
|
||||
Follow the below writing guidelines for writing your essay:
|
||||
1). You specialize in {selected_essay_type} essay writing.
2). Your target audiences include readers from {selected_education_level} level.
3). The title of the essay is {essay_title}.
4). The final essay should be of {selected_num_pages} words/pages.
5). Plant the seeds of subplots or potential character arc shifts that can be expanded later.
|
||||
|
||||
Remember, your main goal is to write as much as you can. If you get through
|
||||
the essay too fast, that is bad. Expand, never summarize.
|
||||
'''
|
||||
# Generate prompts
|
||||
premise_prompt = f'''\
|
||||
As an expert essay writer, specializing in {selected_essay_type} essay writing.
|
||||
|
||||
Write an Essay title for given keywords {essay_title}.
|
||||
The title should appeal to audience level of {selected_education_level}.
|
||||
'''
|
||||
|
||||
outline_prompt = f'''\
|
||||
As an expert essay writer, specializing in {selected_essay_type} essay writing.
|
||||
|
||||
Your Essay title is:
|
||||
|
||||
{{premise}}
|
||||
|
||||
Write an outline for the essay.
|
||||
'''
|
||||
|
||||
starting_prompt = f'''\
|
||||
As an expert essay writer, specializing in {selected_essay_type} essay writing.
|
||||
|
||||
Your essay title is:
|
||||
|
||||
{{premise}}
|
||||
|
||||
The outline of the Essay is:
|
||||
|
||||
{{outline}}
|
||||
|
||||
First, silently review the outline and the essay title. Consider how to start the Essay.
|
||||
Start to write the very beginning of the Essay. You are not expected to finish
|
||||
the whole Essay now. Your writing should be detailed enough that you are only
|
||||
scratching the surface of the first bullet of your outline. Try to write AT
|
||||
MINIMUM 1000 WORDS.
|
||||
|
||||
{guidelines}
|
||||
'''
|
||||
|
||||
continuation_prompt = f'''\
|
||||
As an expert essay writer, specializing in {selected_essay_type} essay writing.
|
||||
|
||||
Your essay title is:
|
||||
|
||||
{{premise}}
|
||||
|
||||
The outline of the Essay is:
|
||||
|
||||
{{outline}}
|
||||
|
||||
You've begun to write the essay and continue to do so.
|
||||
Here's what you've written so far:
|
||||
|
||||
{{story_text}}
|
||||
|
||||
=====
|
||||
|
||||
First, silently review the outline and essay so far.
|
||||
Identify what the single next part of your outline you should write.
|
||||
|
||||
Your task is to continue where you left off and write the next part of the Essay.
|
||||
You are not expected to finish the whole essay now. Your writing should be
|
||||
detailed enough that you are only scratching the surface of the next part of
|
||||
your outline. Try to write AT MINIMUM 1000 WORDS. However, only once the essay
|
||||
is COMPLETELY finished, write IAMDONE. Remember, do NOT write a whole chapter
|
||||
right now.
|
||||
|
||||
{guidelines}
|
||||
'''
|
||||
|
||||
# Generate prompts
|
||||
try:
|
||||
premise = generate_with_retry(premise_prompt)
|
||||
logger.info(f"The title of the Essay is: {premise}")
|
||||
except Exception as err:
|
||||
logger.error(f"Essay title Generation Error: {err}")
|
||||
return
|
||||
|
||||
outline = generate_with_retry(outline_prompt.format(premise=premise))
|
||||
logger.info(f"The Outline of the essay is: {outline}\n\n")
|
||||
if not outline:
|
||||
logger.error("Failed to generate Essay outline. Exiting...")
|
||||
return
|
||||
|
||||
try:
|
||||
starting_draft = generate_with_retry(
|
||||
starting_prompt.format(premise=premise, outline=outline))
|
||||
pprint(starting_draft)
|
||||
except Exception as err:
|
||||
logger.error(f"Failed to Generate Essay draft: {err}")
|
||||
return
|
||||
|
||||
try:
|
||||
draft = starting_draft
|
||||
continuation = generate_with_retry(
|
||||
continuation_prompt.format(premise=premise, outline=outline, story_text=draft))
|
||||
pprint(continuation)
|
||||
except Exception as err:
|
||||
logger.error(f"Failed to write the initial draft: {err}")
|
||||
|
||||
# Add the continuation to the initial draft, keep building the story until we see 'IAMDONE'
|
||||
try:
|
||||
draft += '\n\n' + continuation
|
||||
except Exception as err:
|
||||
logger.error(f"Failed as: {err} and {continuation}")
|
||||
while 'IAMDONE' not in continuation:
|
||||
try:
|
||||
continuation = generate_with_retry(
|
||||
continuation_prompt.format(premise=premise, outline=outline, story_text=draft))
|
||||
draft += '\n\n' + continuation
|
||||
except Exception as err:
|
||||
logger.error(f"Failed to continually write the Essay: {err}")
|
||||
return
|
||||
|
||||
# Remove 'IAMDONE' and print the final story
|
||||
final = draft.replace('IAMDONE', '').strip()
|
||||
pprint(final)
|
||||
return final
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Main Essay writing: An error occurred: {e}")
|
||||
return ""
|
||||
102
ToBeMigrated/ai_writers/ai_news_article_writer.py
Normal file
102
ToBeMigrated/ai_writers/ai_news_article_writer.py
Normal file
@@ -0,0 +1,102 @@
|
||||
######################################################
|
||||
#
|
||||
# Alwrity, as an AI news writer, will have to be factually correct.
|
||||
# We will do multiple rounds of web research and cite our sources.
|
||||
# 'include_urls' will focus news articles only from well known sources.
|
||||
# Choosing a country will help us get better results.
|
||||
#
|
||||
######################################################
|
||||
|
||||
import sys
|
||||
import os
|
||||
import json
|
||||
from textwrap import dedent
|
||||
from pathlib import Path
|
||||
from datetime import datetime
|
||||
|
||||
from dotenv import load_dotenv
|
||||
load_dotenv(Path('../../.env'))
|
||||
from loguru import logger
|
||||
logger.remove()
|
||||
logger.add(sys.stdout,
|
||||
colorize=True,
|
||||
format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}"
|
||||
)
|
||||
|
||||
from ..gpt_providers.text_generation.main_text_generation import llm_text_gen
|
||||
from ..ai_web_researcher.google_serp_search import perform_serper_news_search
|
||||
|
||||
|
||||
def ai_news_generation(news_keywords, news_country, news_language):
|
||||
""" Generate news aritcle based on given keywords. """
|
||||
# Use to store the blog in a string, to save in a *.md file.
|
||||
blog_markdown_str = ""
|
||||
|
||||
logger.info(f"Researching and Writing News Article on keywords: {news_keywords}")
|
||||
# Call on the gpt-researcher and Tavily APIs for this. Do a Google search for organic competition.
|
||||
try:
|
||||
google_news_result = perform_serper_news_search(news_keywords, news_country, news_language)
|
||||
blog_markdown_str = write_news_google_search(news_keywords, news_country, news_language, google_news_result)
|
||||
#print(blog_markdown_str)
|
||||
except Exception as err:
|
||||
logger.error(f"Failed in Google News web research: {err}")
|
||||
logger.info("\n######### Draft1: Finished News article from Google web search: ###########\n\n")
|
||||
return blog_markdown_str
|
||||
|
||||
|
||||
def write_news_google_search(news_keywords, news_country, news_language, search_results):
|
||||
"""Combine the given online research and gpt blog content"""
|
||||
news_language = get_language_name(news_language)
|
||||
news_country = get_country_name(news_country)
|
||||
|
||||
prompt = f"""
|
||||
As an experienced {news_language} news journalist and editor,
|
||||
I will provide you with my 'News keywords' and its 'google search results'.
|
||||
Your goal is to write a News report, backed by given google search results.
|
||||
Important, as a news report, its imperative that your content is factually correct and cited.
|
||||
|
||||
Follow below guidelines:
|
||||
1). Understand and utilize the provided google search result json.
2). Always provide in-line citations and reference links.
3). Understand the given news item and adapt your tone accordingly.
4). Always include the dates when the news was reported.
5). Do not explain or describe your response.
6). Your report should be well formatted in markdown style and highly readable.
7). Important: Please read the entire prompt before writing anything. Follow the prompt exactly as instructed.
|
||||
|
||||
\n\nNews Keywords: "{news_keywords}"\n\n
|
||||
Google search Result: "{search_results}"
|
||||
"""
|
||||
logger.info("Generating blog and FAQs from Google web search results.")
|
||||
try:
|
||||
response = llm_text_gen(prompt)
|
||||
return response
|
||||
except Exception as err:
|
||||
logger.error(f"Exit: Failed to get response from LLM: {err}")
|
||||
exit(1)
|
||||
|
||||
|
||||
def get_language_name(language_code):
|
||||
languages = {
|
||||
"es": "Spanish",
|
||||
"vn": "Vietnamese",
|
||||
"en": "English",
|
||||
"ar": "Arabic",
|
||||
"hi": "Hindi",
|
||||
"de": "German",
|
||||
"zh-cn": "Chinese (Simplified)"
|
||||
# Add more language codes and corresponding names as needed
|
||||
}
|
||||
return languages.get(language_code, "Unknown")
|
||||
|
||||
def get_country_name(country_code):
|
||||
countries = {
|
||||
"es": "Spain",
|
||||
"vn": "Vietnam",
|
||||
"pk": "Pakistan",
|
||||
"in": "India",
|
||||
"de": "Germany",
|
||||
"cn": "China"
|
||||
# Add more country codes and corresponding names as needed
|
||||
}
|
||||
return countries.get(country_code, "Unknown")
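# Minimal usage sketch for ai_news_generation (illustrative; the keyword is a placeholder
# and the call requires the Serper search and LLM providers used above to be configured):
#
#   report_md = ai_news_generation("AI regulation", news_country="in", news_language="en")
#   # get_language_name("en") -> "English"; get_country_name("in") -> "India"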
|
||||
115
ToBeMigrated/ai_writers/ai_product_description_writer.py
Normal file
115
ToBeMigrated/ai_writers/ai_product_description_writer.py
Normal file
@@ -0,0 +1,115 @@
|
||||
import streamlit as st
|
||||
import json
from loguru import logger  # used for error logging in generate_product_description
|
||||
|
||||
from ..gpt_providers.text_generation.main_text_generation import llm_text_gen
|
||||
|
||||
|
||||
def generate_product_description(title, details, audience, tone, length, keywords):
|
||||
"""
|
||||
Generates a product description using OpenAI's API.
|
||||
|
||||
Args:
|
||||
title (str): The title of the product.
|
||||
details (list): A list of product details (features, benefits, etc.).
|
||||
audience (list): A list of target audience segments.
|
||||
tone (str): The desired tone of the description (e.g., "Formal", "Informal").
|
||||
length (str): The desired length of the description (e.g., "short", "medium", "long").
|
||||
keywords (str): Keywords related to the product (comma-separated).
|
||||
|
||||
Returns:
|
||||
str: The generated product description.
|
||||
"""
|
||||
prompt = f"""
|
||||
Write a compelling product description for {title}.
|
||||
|
||||
Highlight these key features: {', '.join(details)}
|
||||
|
||||
Emphasize the benefits of these features for the target audience ({audience}).
|
||||
Maintain a {tone} tone and aim for a length of approximately {length} words.
|
||||
|
||||
Use these keywords naturally throughout the description: {keywords}.
|
||||
|
||||
Remember to be persuasive and focus on the value proposition.
|
||||
"""
|
||||
|
||||
try:
|
||||
response = llm_text_gen(prompt)
|
||||
return response
|
||||
except Exception as err:
|
||||
logger.error(f"Exit: Failed to get response from LLM: {err}")
|
||||
exit(1)
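# Minimal usage sketch for generate_product_description (illustrative; the values are
# placeholders and the call requires a configured LLM provider behind llm_text_gen):
#
#   description = generate_product_description(
#       title="Wireless Bluetooth Headphones",
#       details=["Noise Cancellation", "30-hour battery"],
#       audience=["Music Lovers"],
#       tone="Informal",
#       length="medium",
#       keywords="wireless headphones, noise cancelling",
#   )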
|
||||
|
||||
|
||||
def display_inputs():
|
||||
st.title("📝 AI Product Description Writer 🚀")
|
||||
st.markdown("**Generate compelling and accurate product descriptions with AI.**")
|
||||
|
||||
col1, col2 = st.columns(2)
|
||||
|
||||
with col1:
|
||||
product_title = st.text_input("🏷️ **Product Title**", placeholder="Enter the product title (e.g., Wireless Bluetooth Headphones)")
|
||||
with col2:
|
||||
product_details = st.text_area("📄 **Product Details**", placeholder="Enter features, benefits, specifications, materials, etc. (e.g., Noise Cancellation, Long Battery Life, Water Resistant, Comfortable Design)")
|
||||
|
||||
col3, col4 = st.columns(2)
|
||||
|
||||
with col3:
|
||||
keywords = st.text_input("🔑 **Keywords**", placeholder="Enter keywords, comma-separated (e.g., wireless headphones, noise cancelling, Bluetooth 5.0)")
|
||||
with col4:
|
||||
target_audience = st.multiselect(
|
||||
"🎯 **Target Audience**",
|
||||
["Teens", "Adults", "Seniors", "Music Lovers", "Fitness Enthusiasts", "Tech Savvy", "Busy Professionals", "Travelers", "Casual Users"],
|
||||
placeholder="Select target audience (optional)"
|
||||
)
|
||||
|
||||
col5, col6 = st.columns(2)
|
||||
|
||||
with col5:
|
||||
description_length = st.selectbox(
|
||||
"📏 **Desired Description Length**",
|
||||
["Short (1-2 sentences)", "Medium (3-5 sentences)", "Long (6+ sentences)"],
|
||||
help="Select the desired length of the product description"
|
||||
)
|
||||
with col6:
|
||||
brand_tone = st.selectbox(
|
||||
"🎨 **Brand Tone**",
|
||||
["Formal", "Informal", "Fun & Energetic"],
|
||||
help="Select the desired tone for the description"
|
||||
)
|
||||
|
||||
return product_title, product_details, target_audience, brand_tone, description_length, keywords
|
||||
|
||||
|
||||
def display_output(description, product_title, target_audience, keywords):
|
||||
if description:
|
||||
st.subheader("✨ Generated Product Description:")
|
||||
st.write(description)
|
||||
|
||||
json_ld = {
|
||||
"@context": "https://schema.org",
|
||||
"@type": "Product",
|
||||
"name": product_title,
|
||||
"description": description,
|
||||
"audience": target_audience,
|
||||
"brand": {
|
||||
"@type": "Brand",
|
||||
"name": "Your Brand Name"
|
||||
},
|
||||
"keywords": keywords.split(", ")
|
||||
}
|
||||
|
||||
|
||||
def write_ai_prod_desc():
|
||||
product_title, product_details, target_audience, brand_tone, description_length, keywords = display_inputs()
|
||||
|
||||
if st.button("Generate Product Description 🚀"):
|
||||
with st.spinner("Generating description..."):
|
||||
description = generate_product_description(
|
||||
product_title,
|
||||
product_details.split(", "), # Split details into a list
|
||||
target_audience,
|
||||
brand_tone,
|
||||
description_length.split(" ")[0].lower(), # Extract length from selectbox
|
||||
keywords
|
||||
)
|
||||
display_output(description, product_title, target_audience, keywords)
|
||||
220
ToBeMigrated/ai_writers/ai_writer_dashboard.py
Normal file
220
ToBeMigrated/ai_writers/ai_writer_dashboard.py
Normal file
@@ -0,0 +1,220 @@
import streamlit as st
from lib.utils.alwrity_utils import (essay_writer, ai_news_writer, ai_finance_ta_writer)

from lib.ai_writers.ai_story_writer.story_writer import story_input_section
from lib.ai_writers.ai_product_description_writer import write_ai_prod_desc
from lib.ai_writers.ai_copywriter.copywriter_dashboard import copywriter_dashboard
from lib.ai_writers.linkedin_writer import LinkedInAIWriter
from lib.ai_writers.blog_rewriter_updater.ai_blog_rewriter import write_blog_rewriter
from lib.ai_writers.ai_blog_faqs_writer.faqs_ui import main as faqs_generator
from lib.ai_writers.ai_blog_writer.ai_blog_generator import ai_blog_writer_page
from lib.ai_writers.ai_outline_writer.outline_ui import main as outline_generator
from lib.alwrity_ui.dashboard_styles import apply_dashboard_style, render_dashboard_header, render_category_header, render_card
from loguru import logger

# Try to import AI Content Performance Predictor (AI-first approach)
try:
    from lib.content_performance_predictor.ai_performance_predictor import render_ai_predictor_ui as render_content_performance_predictor
    AI_PREDICTOR_AVAILABLE = True
    logger.info("AI Content Performance Predictor loaded successfully")
except ImportError:
    logger.warning("AI Content Performance Predictor not available")
    render_content_performance_predictor = None
    AI_PREDICTOR_AVAILABLE = False

# Try to import Bootstrap AI Competitive Suite
try:
    from lib.ai_competitive_suite.bootstrap_ai_suite import render_bootstrap_ai_suite
    BOOTSTRAP_SUITE_AVAILABLE = True
    logger.info("Bootstrap AI Competitive Suite loaded successfully")
except ImportError:
    logger.warning("Bootstrap AI Competitive Suite not available")
    render_bootstrap_ai_suite = None
    BOOTSTRAP_SUITE_AVAILABLE = False


def list_ai_writers():
    """Return a list of available AI writers with their metadata (no UI rendering)."""
    writers = []

    # Add Content Performance Predictor if available
    if render_content_performance_predictor:
        # AI-first approach description
        if AI_PREDICTOR_AVAILABLE:
            description = "🎯 AI-powered content performance prediction with competitive intelligence - perfect for solo entrepreneurs"
            name = "AI Content Performance Predictor"
        else:
            description = "Predict content success before publishing with AI-powered performance analysis"
            name = "Content Performance Predictor"

        writers.append({
            "name": name,
            "icon": "🎯",
            "description": description,
            "category": "⭐ Featured",
            "function": render_content_performance_predictor,
            "path": "performance_predictor",
            "featured": True
        })

    # Add Bootstrap AI Competitive Suite if available
    if render_bootstrap_ai_suite:
        writers.append({
            "name": "Bootstrap AI Competitive Suite",
            "icon": "🚀",
            "description": "🥷 Complete AI-powered competitive toolkit: content performance prediction + competitive intelligence for solo entrepreneurs",
            "category": "⭐ Featured",
            "function": render_bootstrap_ai_suite,
            "path": "bootstrap_ai_suite",
            "featured": True
        })

    # Add existing writers
    writers.extend([
        {
            "name": "AI Blog Writer",
            "icon": "📝",
            "description": "Generate comprehensive blog posts from keywords, URLs, or uploaded content",
            "category": "Content Creation",
            "function": ai_blog_writer_page,
            "path": "ai_blog_writer"
        },
        {
            "name": "AI Blog Rewriter",
            "icon": "🔄",
            "description": "Rewrite and update existing blog content with improved quality and SEO optimization",
            "category": "Content Creation",
            "function": write_blog_rewriter,
            "path": "blog_rewriter"
        },
        {
            "name": "Story Writer",
            "icon": "📚",
            "description": "Create engaging stories and narratives with AI assistance",
            "category": "Creative Writing",
            "function": story_input_section,
            "path": "story_writer"
        },
        {
            "name": "Essay writer",
            "icon": "✍️",
            "description": "Generate well-structured essays on any topic",
            "category": "Academic",
            "function": essay_writer,
            "path": "essay_writer"
        },
        {
            "name": "Write News reports",
            "icon": "📰",
            "description": "Create professional news articles and reports",
            "category": "Journalism",
            "function": ai_news_writer,
            "path": "news_writer"
        },
        {
            "name": "Write Financial TA report",
            "icon": "📊",
            "description": "Generate technical analysis reports for financial markets",
            "category": "Finance",
            "function": ai_finance_ta_writer,
            "path": "financial_writer"
        },
        {
            "name": "AI Product Description Writer",
            "icon": "🛍️",
            "description": "Create compelling product descriptions that drive sales",
            "category": "E-commerce",
            "function": write_ai_prod_desc,
            "path": "product_writer"
        },
        {
            "name": "AI Copywriter",
            "icon": "✒️",
            "description": "Generate persuasive copy for marketing and advertising",
            "category": "Marketing",
            "function": copywriter_dashboard,
            "path": "copywriter"
        },
        {
            "name": "LinkedIn AI Writer",
            "icon": "💼",
            "description": "Create professional LinkedIn content that engages your network",
            "category": "Professional",
            "function": lambda: LinkedInAIWriter().run(),
            "path": "linkedin_writer"
        },
        {
            "name": "FAQ Generator",
            "icon": "❓",
            "description": "Generate comprehensive, well-researched FAQs from any content source with customizable options",
            "category": "Content Creation",
            "function": faqs_generator,
            "path": "faqs_generator"
        },
        {
            "name": "Blog Outline Generator",
            "icon": "📋",
            "description": "Create detailed blog outlines with AI-powered content generation and image integration",
            "category": "Content Creation",
            "function": outline_generator,
            "path": "outline_generator"
        }
    ])

    return writers


def get_ai_writers():
    """Main function to display AI writers dashboard with premium glassmorphic design."""
    logger.info("Starting AI Writers Dashboard")

    # Apply common dashboard styling
    apply_dashboard_style()

    # Render dashboard header
    render_dashboard_header(
        "🤖 AI Content Writers",
        "Choose from our collection of specialized AI writers, each designed for specific content types and industries. Create engaging, high-quality content with just a few clicks."
    )

    writers = list_ai_writers()
    logger.info(f"Found {len(writers)} AI writers")

    # Group writers by category for better organization
    categories = {}
    for writer in writers:
        category = writer["category"]
        if category not in categories:
            categories[category] = []
        categories[category].append(writer)

    # Render writers by category with common cards
    for category_name, category_writers in categories.items():
        render_category_header(category_name)

        # Create columns for this category
        cols = st.columns(min(len(category_writers), 3))

        for idx, writer in enumerate(category_writers):
            with cols[idx % 3]:
                # Use the common card renderer
                if render_card(
                    icon=writer['icon'],
                    title=writer['name'],
                    description=writer['description'],
                    category=writer['category'],
                    key_suffix=f"{writer['path']}_{category_name}",
                    help_text=f"Launch {writer['name']} - {writer['description']}"
                ):
                    logger.info(f"Selected writer: {writer['name']} with path: {writer['path']}")
                    st.session_state.selected_writer = writer
                    st.query_params["writer"] = writer['path']
                    logger.info(f"Updated query params with writer: {writer['path']}")
                    st.rerun()

        # Add spacing between categories
        st.markdown('<div class="category-spacer"></div>', unsafe_allow_html=True)

    logger.info("Finished rendering AI Writers Dashboard")

    return writers


# Remove the old ai_writers function since it's now integrated into get_ai_writers
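

# NOTE (illustrative sketch, not part of the original file): get_ai_writers() stores the chosen
# writer in st.session_state and in the "writer" query parameter before calling st.rerun(), but
# the dispatch side is not shown here. A minimal router, assuming each writer dict's "function"
# is a zero-argument callable, could look like this:
def render_selected_writer():
    """Route to the writer selected on the dashboard, or show the dashboard if none is selected."""
    selected_path = st.query_params.get("writer")
    if not selected_path:
        get_ai_writers()
        return
    for writer in list_ai_writers():
        if writer["path"] == selected_path:
            logger.info(f"Routing to writer: {writer['name']}")
            writer["function"]()
            return
    logger.warning(f"Unknown writer path in query params: {selected_path}")
    get_ai_writers()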
247
ToBeMigrated/ai_writers/long_form_ai_writer.py
Normal file
@@ -0,0 +1,247 @@
#####################################################
#
# Alwrity, AI Long form writer - Writing_with_Prompt_Chaining
# and generative AI.
#
#####################################################

import os
import re
import time
import sys
import yaml
from pathlib import Path
from dotenv import load_dotenv
from configparser import ConfigParser
import streamlit as st

from pprint import pprint
from textwrap import dedent

from loguru import logger
logger.remove()
logger.add(sys.stdout,
           colorize=True,
           format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}"
           )

from ..utils.read_main_config_params import read_return_config_section
from ..ai_web_researcher.gpt_online_researcher import do_metaphor_ai_research
from ..ai_web_researcher.gpt_online_researcher import do_google_serp_search, do_tavily_ai_search
from ..blog_metadata.get_blog_metadata import get_blog_metadata_longform
from ..blog_postprocessing.save_blog_to_file import save_blog_to_file
from ..gpt_providers.text_generation.main_text_generation import llm_text_gen


def generate_with_retry(prompt, system_prompt=None):
    """
    Generate content from the model with basic error handling.

    Parameters:
        prompt (str): The prompt to generate content from.
        system_prompt (str, optional): Custom system prompt to use instead of the default one.

    Returns:
        str: The generated content, or False if generation failed.
    """
    try:
        # FIXME: Need a progress bar here.
        return llm_text_gen(prompt, system_prompt)
    except Exception as e:
        logger.error(f"Error generating content: {e}")
        st.error(f"Error generating content: {e}")
        return False
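

# NOTE (illustrative sketch, not part of the original file): despite its name,
# generate_with_retry() above makes a single attempt. If retries were wanted, a simple
# exponential-backoff wrapper around llm_text_gen could look like this; max_attempts and
# base_delay are assumptions, not project settings.
def generate_with_backoff(prompt, system_prompt=None, max_attempts=3, base_delay=2.0):
    """Call llm_text_gen with exponential backoff; return False if every attempt fails."""
    for attempt in range(1, max_attempts + 1):
        try:
            return llm_text_gen(prompt, system_prompt)
        except Exception as e:
            logger.warning(f"Attempt {attempt}/{max_attempts} failed: {e}")
            if attempt == max_attempts:
                st.error(f"Error generating content after {max_attempts} attempts: {e}")
                return False
            time.sleep(base_delay * (2 ** (attempt - 1)))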


def long_form_generator(keywords, search_params=None, blog_params=None):
    """
    Generate a long-form blog post based on the given keywords.

    Args:
        keywords (str): Topic or keywords for the blog post
        search_params (dict, optional): Search parameters for research
        blog_params (dict, optional): Blog content characteristics
    """
    # Initialize default parameters if not provided
    if blog_params is None:
        blog_params = {
            "blog_length": 3000,  # Default longer for long-form content
            "blog_tone": "Professional",
            "blog_demographic": "Professional",
            "blog_type": "Informational",
            "blog_language": "English"
        }
    else:
        # Ensure we have a higher word count for long-form content
        if blog_params.get("blog_length", 0) < 2500:
            blog_params["blog_length"] = max(3000, blog_params.get("blog_length", 0))

    # Extract parameters with defaults
    blog_length = blog_params.get("blog_length", 3000)
    blog_tone = blog_params.get("blog_tone", "Professional")
    blog_demographic = blog_params.get("blog_demographic", "Professional")
    blog_type = blog_params.get("blog_type", "Informational")
    blog_language = blog_params.get("blog_language", "English")

    st.subheader(f"Long-form {blog_type} Blog ({blog_length}+ words)")

    with st.status("Generating comprehensive long-form content...", expanded=True) as status:
        # Step 1: Generate outline
        status.update(label="Creating detailed content outline...")

        # Use a customized prompt based on the blog parameters
        outline_prompt = f"""
        As an expert content strategist writing in a {blog_tone} tone for {blog_demographic} audience,
        create a detailed outline for a comprehensive {blog_type} blog post about "{keywords}"
        that will be approximately {blog_length} words in {blog_language}.

        The outline should include:
        1. An engaging headline
        2. 5-7 main sections with descriptive headings
        3. 2-3 subsections under each main section
        4. Key points to cover in each section
        5. Ideas for relevant examples or case studies
        6. Suggestions for data points or statistics to include

        Format the outline in markdown with proper headings and bullet points.
        """

        try:
            outline = llm_text_gen(outline_prompt)
            st.markdown("### Content Outline")
            st.markdown(outline)
            status.update(label="Outline created successfully ✓")

            # Step 2: Research the topic using the search parameters
            status.update(label="Researching topic details...")
            research_results = research_topic(keywords, search_params)
            status.update(label="Research completed ✓")

            # Step 3: Generate the full content
            status.update(label=f"Writing {blog_length}+ word {blog_tone} {blog_type} content...")

            full_content_prompt = f"""
            You are a professional content writer who specializes in {blog_type} content with a {blog_tone} tone
            for {blog_demographic} audiences. Write a comprehensive, in-depth blog post in {blog_language} about:

            "{keywords}"

            Use this outline as your structure:
            {outline}

            And incorporate these research findings where relevant:
            {research_results}

            The blog post should:
            - Be approximately {blog_length} words
            - Include an engaging introduction and strong conclusion
            - Use appropriate subheadings for all sections in the outline
            - Include examples, data points, and actionable insights
            - Be formatted in markdown with proper headings, bullet points, and emphasis
            - Maintain a {blog_tone} tone throughout
            - Address the needs and interests of a {blog_demographic} audience

            Do not include phrases like "according to research" or "based on the outline" in your content.
            """

            full_content = llm_text_gen(full_content_prompt)
            status.update(label="Long-form content generated successfully! ✓", state="complete")

            # Display the full content
            st.markdown("### Your Complete Long-form Blog Post")
            st.markdown(full_content)

            return full_content

        except Exception as e:
            status.update(label=f"Error generating long-form content: {str(e)}", state="error")
            st.error(f"Failed to generate long-form content: {str(e)}")
            return None


def research_topic(keywords, search_params=None):
    """
    Research a topic using search parameters and return a summary.

    Args:
        keywords (str): Topic to research
        search_params (dict, optional): Search parameters

    Returns:
        str: Research summary
    """
    # Display a placeholder for research results
    placeholder = st.empty()
    placeholder.info("Researching topic... Please wait.")

    try:
        from .ai_blog_writer.keywords_to_blog_streamlit import do_tavily_ai_search

        # Use provided search params or defaults
        if search_params is None:
            search_params = {
                "max_results": 10,
                "search_depth": "advanced",
                "time_range": "year"
            }

        # Conduct research using Tavily
        tavily_results = do_tavily_ai_search(
            keywords,
            max_results=search_params.get("max_results", 10),
            search_depth=search_params.get("search_depth", "advanced"),
            include_domains=search_params.get("include_domains", []),
            time_range=search_params.get("time_range", "year")
        )

        # Extract research data
        research_data = ""
        if tavily_results and len(tavily_results) == 3:
            results, titles, answer = tavily_results

            if answer and len(answer) > 50:
                research_data += f"Summary: {answer}\n\n"

            if results and 'results' in results and len(results['results']) > 0:
                research_data += "Key Sources:\n"
                for i, result in enumerate(results['results'][:7], 1):
                    title = result.get('title', 'Untitled Source')
                    content_snippet = result.get('content', '')[:300] + "..."
                    research_data += f"{i}. {title}\n{content_snippet}\n\n"

        # If research data is empty or too short, provide a generic response
        if not research_data or len(research_data) < 100:
            research_data = f"No specific research data found for '{keywords}'. Please provide more specific information in your content."

        placeholder.success("Research completed successfully!")
        return research_data

    except Exception as e:
        placeholder.error(f"Research failed: {str(e)}")
        return f"Unable to gather research for '{keywords}'. Please continue with the content based on your knowledge."
    finally:
        # Remove the placeholder after a short delay
        import time
        time.sleep(1)
        placeholder.empty()


def generate_long_form_content(content_keywords):
    """
    Main function to generate long-form content based on the provided keywords.

    Parameters:
        content_keywords (str): The main keywords or topic for the long-form content.

    Returns:
        str: The generated long-form content.
    """
    return long_form_generator(content_keywords)


# Example usage
if __name__ == "__main__":
    # Example usage of the function
    content_keywords = "artificial intelligence in healthcare"
    generated_content = generate_long_form_content(content_keywords)
    print(f"Generated content: {generated_content[:100]}...")
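    # Illustrative sketch (assumption, not part of the original file): the same generator can be
    # driven with explicit search and blog parameters instead of its built-in defaults.
    custom_content = long_form_generator(
        content_keywords,
        search_params={"max_results": 5, "search_depth": "advanced", "time_range": "month"},
        blog_params={"blog_length": 4000, "blog_tone": "Conversational", "blog_type": "How-to"},
    )
    if custom_content:
        print(f"Generated with custom parameters: {custom_content[:100]}...")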
202
ToBeMigrated/ai_writers/scholar_blogs/main_arxiv_to_blog.py
Normal file
@@ -0,0 +1,202 @@
import sys
import os
import datetime

import tiktoken

from .arxiv_schlorly_research import fetch_arxiv_data, create_dataframe, get_arxiv_main_content
from .arxiv_schlorly_research import arxiv_bibtex, scrape_images_from_arxiv, download_image
from .arxiv_schlorly_research import read_written_ids, extract_arxiv_ids_from_line, append_id_to_file
from .write_research_review_blog import review_research_paper
from .combine_research_and_blog import blog_with_research
from .write_blog_scholar_paper import write_blog_from_paper
from .gpt_providers.gemini_pro_text import gemini_text_response
from .generate_image_from_prompt import generate_image
from .convert_content_to_markdown import convert_tomarkdown_format
from .get_blog_metadata import blog_metadata
from .get_code_examples import gemini_get_code_samples
from .save_blog_to_file import save_blog_to_file
from .take_url_screenshot import screenshot_api

from loguru import logger
logger.remove()
logger.add(sys.stdout,
           colorize=True,
           format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}"
           )


def blog_arxiv_keyword(query):
    """Write a blog on the given arxiv paper."""
    arxiv_id = None
    arxiv_url = None
    bibtex = None
    research_review = None
    column_names = ['Title', 'Date', 'Id', 'Summary', 'PDF URL']
    papers = fetch_arxiv_data(query)
    df = create_dataframe(papers, column_names)

    for paper in papers:
        # Extracting the arxiv_id
        arxiv_id = paper[2].split('/')[-1]
        arxiv_url = "https://browse.arxiv.org/html/" + arxiv_id
        bibtex = arxiv_bibtex(arxiv_id)
        logger.info(f"Get research paper text from the url: {arxiv_url}")
        research_content = get_arxiv_main_content(arxiv_url)

        num_tokens = num_tokens_from_string(research_content, "cl100k_base")
        logger.info(f"Number of tokens sent: {num_tokens}")
        # If the number of tokens is below the threshold, process and print the review
        if 1000 < num_tokens < 30000:
            logger.info(f"Writing research review on {paper[0]}")
            research_review = review_research_paper(research_content)
            research_review = f"\n{research_review}\n\n" + f"```{bibtex}```"
            #research_review = research_review + "\n\n\n" + f"{df.to_markdown()}"
            research_review = convert_tomarkdown_format(research_review, "gemini")
            break
        else:
            # Skip to the next iteration if the condition is not met
            continue

    logger.info(f"Final scholar article: \n\n{research_review}\n")

    # TBD: Scrape images from research reports and pass to vision to get conclusions out of it.
    #image_urls = scrape_images_from_arxiv(arxiv_url)
    #print("Downloading images found on the page:")
    #for img_url in image_urls:
    #    download_image(img_url, arxiv_url)
    try:
        blog_postprocessing(arxiv_id, research_review)
    except Exception as err:
        logger.error(f"Failed in blog post processing: {err}")
        sys.exit(1)

    logger.info("\n\n ################ Finished writing Blog #################### \n")


def blog_arxiv_url_list(file_path):
    """Write blogs on all the arxiv links given in a file."""
    extracted_ids = []
    try:
        with open(file_path, 'r', encoding="utf-8") as file:
            for line in file:
                arxiv_id = extract_arxiv_ids_from_line(line)
                if arxiv_id:
                    extracted_ids.append(arxiv_id)
    except FileNotFoundError:
        logger.error(f"File not found: {file_path}")
        raise
    except Exception as e:
        logger.error(f"Error while reading the file: {e}")
        raise

    # Read already written IDs
    written_ids = read_written_ids('papers_already_written_on.txt')

    # Loop through extracted IDs
    for arxiv_id in extracted_ids:
        if arxiv_id not in written_ids:
            # This ID has not been written on yet
            arxiv_url = "https://browse.arxiv.org/html/" + arxiv_id
            logger.info(f"Get research paper text from the url: {arxiv_url}")
            research_content = get_arxiv_main_content(arxiv_url)
            try:
                num_tokens = num_tokens_from_string(research_content, "cl100k_base")
            except Exception as err:
                logger.error(f"Failed in counting tokens: {err}")
                sys.exit(1)
            logger.info(f"Number of tokens sent: {num_tokens}")
            # If the number of tokens is below the threshold, process and print the review
            # FIXME: Docs over 30k tokens need to be chunked and summarized.
            if 1000 < num_tokens < 30000:
                try:
                    logger.info(f"Getting bibtex for arxiv ID: {arxiv_id}")
                    bibtex = arxiv_bibtex(arxiv_id)
                except Exception as err:
                    logger.error(f"Failed to get Bibtex: {err}")

                try:
                    logger.info("Writing a research review..")
                    research_review = review_research_paper(research_content, "gemini")
                    logger.info(f"Research Review: \n{research_review}\n\n")
                except Exception as err:
                    logger.error(f"Failed to write review on research paper {arxiv_id}: {err}")

                research_blog = write_blog_from_paper(research_content, "gemini")
                logger.info(f"\n\nResearch Blog: {research_blog}\n\n")
                research_blog = f"\n{research_review}\n\n" + f"```\n{bibtex}\n```"
                #research_review = blog_with_research(research_review, research_blog, "gemini")
                #logger.info(f"\n\n\nBLOG_WITH_RESEARCH: {research_review}\n\n\n")
                research_review = convert_tomarkdown_format(research_review, "gemini")
                research_review = f"\n{research_review}\n\n" + f"```{bibtex}```"
                logger.info(f"Final blog from research paper: \n\n{research_review}\n\n\n")

                try:
                    blog_postprocessing(arxiv_id, research_review)
                except Exception as err:
                    logger.error(f"Failed in blog post processing: {err}")
                    sys.exit(1)

                logger.info("\n\n ################ Finished writing Blog #################### \n")
            else:
                # Skip to the next iteration if the condition is not met
                logger.error("FIXME: Docs over 30k tokens need to be chunked and summarized.")
                continue
        else:
            logger.warning(f"Already written, skip writing on Arxiv paper ID: {arxiv_id}")


def blog_postprocessing(arxiv_id, research_review):
    """Common function to do blog postprocessing."""
    try:
        append_id_to_file(arxiv_id, "papers_already_written_on.txt")
    except Exception as err:
        logger.error(f"Failed to write/append ID to papers_already_written_on.txt: {err}")
        raise err

    try:
        blog_title, blog_meta_desc, blog_tags, blog_categories = blog_metadata(research_review)
    except Exception as err:
        logger.error(f"Failed to get blog metadata: {err}")
        raise err

    try:
        arxiv_url_scrnsht = f"https://arxiv.org/abs/{arxiv_id}"
        generated_image_filepath = take_paper_screenshot(arxiv_url_scrnsht)
    except Exception as err:
        logger.error(f"Failed to take paper screenshot: {err}")
        raise err

    try:
        save_blog_to_file(research_review, blog_title, blog_meta_desc, blog_tags,\
                blog_categories, generated_image_filepath)
    except Exception as err:
        logger.error(f"Failed to save blog to a file: {err}")
        sys.exit(1)


def take_paper_screenshot(arxiv_url):
    """Common function to take a paper screenshot."""
    # fixme: Remove the hardcoding, need to add another option OR put it in config?
    image_dir = os.path.join(os.getcwd(), "blog_images")
    generated_image_name = f"generated_image_{datetime.datetime.now():%Y-%m-%d-%H-%M-%S}.png"
    generated_image_filepath = os.path.join(image_dir, generated_image_name)

    if arxiv_url:
        try:
            generated_image_filepath = screenshot_api(arxiv_url, generated_image_filepath)
        except Exception as err:
            logger.error(f"Failed in taking url screenshot: {err}")

    return generated_image_filepath


def num_tokens_from_string(string, encoding_name):
    """Returns the number of tokens in a text string."""
    try:
        encoding = tiktoken.get_encoding(encoding_name)
        num_tokens = len(encoding.encode(string))
        return num_tokens
    except Exception as err:
        logger.error(f"Failed to count tokens: {err}")
        sys.exit(1)
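

# NOTE (illustrative sketch, not part of the original file): the FIXME above says that papers
# over ~30k tokens need to be chunked and summarized. A simple token-window splitter built on
# the same tiktoken encoding could look like this; chunk_size and overlap are assumptions.
def chunk_by_tokens(text, chunk_size=25000, overlap=500, encoding_name="cl100k_base"):
    """Split text into overlapping chunks of at most chunk_size tokens each."""
    encoding = tiktoken.get_encoding(encoding_name)
    tokens = encoding.encode(text)
    chunks = []
    start = 0
    while start < len(tokens):
        end = min(start + chunk_size, len(tokens))
        chunks.append(encoding.decode(tokens[start:end]))
        if end == len(tokens):
            break
        start = end - overlap
    return chunks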
@@ -0,0 +1,49 @@
import sys

from .gpt_providers.openai_chat_completion import openai_chatgpt
from .gpt_providers.gemini_pro_text import gemini_text_response

from loguru import logger
logger.remove()
logger.add(sys.stdout,
           colorize=True,
           format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}"
           )


def write_blog_from_paper(paper_content, gpt_providers="gemini"):
    """Write a detailed blog from the given research paper content."""
    prompt = f"""As an expert in NLP and AI, I will provide you with the content of a research paper.
        Your task is to write a highly detailed blog (at least 2000 words), breaking down complex concepts for beginners.
        Take your time and do not rush to respond.
        Do not provide explanations or suggestions in your response.

        Include the below sections in your blog:
        Highlights: Include a list of the 5 most important and unique claims of the given research paper.
        Abstract: Start by reading the abstract, which provides a concise summary of the research, including its purpose, methodology, and key findings.
        Introduction: This section will give you background information and set the context for the research. It often ends with a statement of the research question or hypothesis.
        Methodology: Include a description of how the authors conducted the research. This can include data sources, experimental setup, analytical techniques, etc.
        Results: This section presents the data or findings of the research. Pay attention to figures, tables, and any statistical analysis provided.
        Discussion/Analysis: In this section, explain how the research paper answers the research questions or how the findings fit with existing knowledge.
        Conclusion: This part summarizes the main findings and their implications. It might also suggest areas for further research.
        References: The cited works can provide additional context or background reading.
        Remember, please use MLA format and markdown syntax.
        Do not provide descriptions or explanations for your response.
        Take your time in crafting your blog content, do not rush to give the response.
        Using the blog structure above, please write a detailed and original blog on the given research paper: \n'{paper_content}'\n\n"""

    if 'gemini' in gpt_providers:
        try:
            response = gemini_text_response(prompt)
            return response
        except Exception as err:
            logger.error(f"Failed to get response from gemini: {err}")
            raise err
    elif 'openai' in gpt_providers:
        try:
            logger.info("Calling OpenAI LLM.")
            response = openai_chatgpt(prompt)
            return response
        except Exception as err:
            logger.error(f"Failed to get response from OpenAI: {err}")
            raise err
@@ -0,0 +1,89 @@
import sys

from .gpt_providers.openai_chat_completion import openai_chatgpt
from .gpt_providers.gemini_pro_text import gemini_text_response
from .gpt_providers.mistral_chat_completion import mistral_text_response

from loguru import logger
logger.remove()
logger.add(sys.stdout,
           colorize=True,
           format="<level>{level}</level>|<green>{file}:{line}:{function}</green>| {message}"
           )


def review_research_paper(research_blog, gpt_providers="gemini"):
    """Write a detailed review report for the given research paper content."""
    prompt = f"""As the world's top researcher and academician, I will provide you with a research paper.
        Your task is to write a highly detailed review report.
        Important: your report should be factual, original and demonstrate your expertise.

        Review guidelines:
        1). Read the Abstract and Introduction Carefully:
        Begin by thoroughly reading the abstract and introduction of the paper.
        Try to understand the research question, the objectives, and the background information.
        Identify the central argument or hypothesis that the study is examining.

        2). Examine the Methodology and Methods:
        Look closely at the research design, whether it is experimental, observational, qualitative, or a combination of methods.
        Check the sampling strategy and the size of the sample.
        Review the methods of data collection and the instruments used for this purpose.
        Think about any ethical issues and possible biases in the study.

        3). Analyze the Results and Discussion:
        Review how the results are presented, including any tables, graphs, and statistical analysis.
        Evaluate the findings' validity and reliability.
        Analyze whether the results support or contradict the research question and hypothesis.
        Read the discussion section where the authors interpret their findings and their significance.

        4). Consider the Limitations and Strengths:
        Spot any limitations or potential weaknesses in the study.
        Evaluate the strengths and contributions that the research makes.
        Think about how generalizable the findings are to other populations or situations.

        5). Assess the Writing and Organization:
        Judge the clarity and structure of the report.
        Consider the use of language, grammar, and the overall formatting.
        Assess how well the arguments are logically organized and how coherent the report is.

        6). Evaluate the Literature Review:
        Examine how comprehensive and relevant the literature review is.
        Consider how the study adds to or builds upon existing research.
        Evaluate the timeliness and quality of the sources cited in the research.

        7). Review the Conclusion and Implications:
        Look at the conclusions drawn from the study and how well they align with the findings.
        Think about the practical implications and potential applications of the research.
        Evaluate the suggestions for further research or policy actions.

        8). Overall Assessment:
        Formulate an overall opinion about the research report's quality and thoroughness.
        Consider the significance and impact of the findings.
        Evaluate how the study contributes to its field of research.

        9). Provide Constructive Feedback:
        Offer constructive criticism and suggestions for improvement, where necessary.
        Think about possible biases or alternative ways to interpret the findings.
        Suggest ideas for future research or for replicating the study.

        Do not provide descriptions or explanations for your response.
        Using the above review guidelines, write a detailed review report on the below research paper.
        Research Paper: '{research_blog}'
        """

    if 'gemini' in gpt_providers:
        try:
            response = gemini_text_response(prompt)
            return response
        except Exception as err:
            logger.error(f"Failed to get response from gemini: {err}")
            response = mistral_text_response(prompt)
            return response

    elif 'openai' in gpt_providers:
        try:
            logger.info("Calling OpenAI LLM.")
            response = openai_chatgpt(prompt)
            return response
        except Exception as err:
            logger.error(f"Failed to get response from OpenAI: {err}")
            raise err