GEO Agent – Generative Engine Optimization for the AI Era
This hackathon is about building the GEO Agent, an AI-powered system that measures and improves a brand's Share of Model (SoM) — the percentage of AI-generated answers in which a brand is mentioned.
This is a live-fire hackathon challenge with strict live data requirements.
1. The New Reality: From SEO to GEO
For 20+ years, brands have competed for Google's 10 blue links (traditional SEO).
Today, users increasingly ask AI models directly:
- "What are the best international business programs?"
- "What's the best CRM for startups?"
- "What consulting firm helps international students get U.S. jobs?"
LLMs like ChatGPT, Gemini, Perplexity, and Claude return one synthesized answer.
If your brand is not in that answer, you effectively do not exist in that user's consideration set.
This gives rise to a new metric:
- Share of Model (SoM): The percentage of AI-generated answers in which your brand is mentioned.
Your mission: Build the first version of a tool that measures and improves a brand's SoM.
2. Problem Statement
Today, brands generally:
- Don't know whether AI models mention them
- Don't know which competitors are being recommended
- Don't know why those competitors rank
- Don't know what content to create to fix it
You are building the GEO Agent — a live AI consultant that:
- Audits a brand's Share of Model
- Explains why the brand and competitors rank as they do
- Recommends concrete actions and content to improve SoM
3. Brand Scope – Choose ONE Real Brand
Pick one real company with an actual digital footprint:
- Option A: Global Brand (e.g., Nike, Coca-Cola, Airbnb, Stripe)
- Option B: Tech Startup (e.g., Linear, Notion, Retool, Webflow, Ramp)
- Option C: Education / Program Brand (e.g., Semester at Sea, General Assembly, NYU Florence)
No fictional companies.
4. GEO Agent MVP Requirements
The GEO Agent must convincingly answer two core questions:
- How well is my brand performing in AI searches?
- How can we improve our positioning?
4.1 How Well Is My Brand Performing in AI Searches?
The agent must:
- Generate at least 5 high-intent queries relevant to the brand
  - Example: "Best international business programs"
  - Example: "Top tools for product teams"
- Query at least 2 AI systems or search tools live (LLM APIs, Perplexity, web search + synthesis, etc.)
From these responses, the agent must extract:
- Whether the brand is mentioned
- Which competitors are mentioned
- Position / order in the answer
- Sentiment (positive, neutral, negative)
Compute a simple Share of Model score:
- % of queries where the brand appears
- Relative frequency vs competitors
And surface evidence, including:
- The actual raw responses
- URLs used
- Citation references (e.g., which sources supported which conclusions)
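As an illustration (not a required implementation), the mention check and the Share of Model score could be sketched in a few lines of Python. The function names are placeholders, and the canned responses below exist only to make the sketch runnable — an actual submission must score live API responses:

```python
import re

def brand_mentioned(response: str, brand: str) -> bool:
    """Case-insensitive whole-word check for a brand mention."""
    return re.search(rf"\b{re.escape(brand)}\b", response, re.IGNORECASE) is not None

def share_of_model(responses: list[str], brand: str) -> float:
    """% of responses (one per query) in which the brand appears."""
    if not responses:
        return 0.0
    hits = sum(brand_mentioned(r, brand) for r in responses)
    return 100.0 * hits / len(responses)

# Toy responses standing in for live LLM output:
responses = [
    "For product teams, Notion and Linear are popular choices.",
    "Top picks: Asana, Trello, and Jira.",
    "Linear is widely recommended for fast-moving startups.",
]
print(round(share_of_model(responses, "Linear"), 1))  # 66.7
```

A whole-word regex avoids false positives such as matching "Linear" inside "Nonlinear"; real extraction will also need to handle aliases and misspellings of the brand name.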
4.2 How Can We Improve Our Positioning?
The agent must:
- Analyze why competitors are ranking, e.g.:
  - Wikipedia presence?
  - Domain authority?
  - More backlinks?
  - Structured data / schema?
  - Press mentions?
- Identify content gaps, e.g.:
  - Missing comparison pages?
  - No listicle presence?
  - No third-party validation?
Then automatically generate:
- A draft Wikipedia-style summary (if appropriate)
- A comparison page outline
- A suggested SEO / GEO content strategy
- Structured data (schema) recommendations
Finally, prioritize actions, such as:
- High impact / Low effort
- Medium impact / High effort
This must be programmatic, not manually typed advice.
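One way to make the prioritization step programmatic is to score each recommended action on impact and effort and sort accordingly. This is only a sketch under assumed names — the action records and the 1–3 scales below are hypothetical, and in a real agent the scores would come from the analysis step, not be hard-coded:

```python
# Hypothetical action records; impact and effort on a 1-3 scale (3 = high).
actions = [
    {"action": "Add comparison page vs. top competitor", "impact": 3, "effort": 1},
    {"action": "Pursue Wikipedia notability",            "impact": 3, "effort": 3},
    {"action": "Add schema.org Organization markup",     "impact": 2, "effort": 1},
    {"action": "Launch backlink outreach campaign",      "impact": 2, "effort": 3},
]

def prioritize(actions):
    """Rank highest-impact actions first; break ties by lowest effort."""
    return sorted(actions, key=lambda a: (-a["impact"], a["effort"]))

for a in prioritize(actions):
    print(f'{a["action"]}  (impact {a["impact"]}, effort {a["effort"]})')
```

Sorting on the tuple `(-impact, effort)` puts "high impact / low effort" items at the top, which matches the ordering the brief asks for.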
5. What Makes This an AGENT (Not Just a Script)
The GEO Agent should:
- Dynamically generate queries
- Loop over tools (LLM APIs, search APIs, scraping, etc.)
- Decide next actions based on intermediate results
- Aggregate findings across queries and systems
- Produce a structured report
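The loop described above can be sketched as follows. Every callable here (`generate_queries`, `run_tool`, `needs_followup`) is a placeholder for a real implementation backed by LLM and search APIs; the sketch only shows the control flow that separates an agent from a one-shot script:

```python
def run_geo_agent(brand, tools, generate_queries, run_tool, needs_followup,
                  max_rounds=3):
    """Loop over tools, inspect intermediate results, decide next actions."""
    findings = []
    queries = generate_queries(brand)          # 1. dynamically generate queries
    for _ in range(max_rounds):
        new_queries = []
        for query in queries:
            for tool in tools:                 # 2. loop over tools
                result = run_tool(tool, query)
                findings.append(result)
                # 3. decide next actions based on intermediate results
                new_queries.extend(needs_followup(result))
        if not new_queries:                    # nothing left to investigate
            break
        queries = new_queries
    return findings                            # 4. aggregate for the report
```

The `max_rounds` cap is one simple way to keep an autonomous research loop from running forever; retry and fallback logic would wrap the `run_tool` call.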
Nice-to-have / bonus capabilities:
- Tool calling / tool orchestration
- Retry and fallback logic
- Model comparison (e.g., different LLMs / search engines)
- Autonomous research loops
- Monitoring / re-runnable pipeline over time
6. Expected Demo Format
At demo time, you must show:
Input:
- Brand name
- Target customer segment (optional)
- A live run of the agent
Output:
- Share of Model score
- Competitor comparison table
- Evidence citations (links, quoted passages, raw responses)
- Action plan (prioritized recommendations)
- Generated content example (e.g., wiki summary, comparison page outline)
- Architecture overview (2–3 minutes)
7. Judging Criteria
Reliability (30%)
- Uses live data and real APIs
- Provides proper citations
- Produces repeatable results
- Shows transparent reasoning (how conclusions were reached)
Insight Quality (25%)
- Findings are non-obvious and interesting
- Competitor analysis is real, not generic
- Strategy logically follows from evidence
Technical Robustness (25%)
- Quality of tool orchestration
- Level of automation
- Error handling strategy
- Model usage sophistication (e.g., multi-step reasoning, multiple providers)
Clarity to the CEO (20%)
- Non-technical execs can understand the output
- Report is structured and easy to scan
- Strategy is concrete and actionable
8. What Winning Teams Usually Do
Winning teams typically:
- Build a query generator → evaluator → synthesizer pipeline
- Compare multiple LLMs / search providers
- Extract structured entities (brands, competitors, rankings, sentiment) from responses
- Build a scoring framework for Share of Model
- Produce a polished, "consultant-style" report
Teams that struggle often:
- Just ask a single LLM once
- Manually copy/paste results
- Skip citations
- Over-focus on UI instead of reasoning and evidence
9. Optional Advanced Layer (If Time Allows)
If the core MVP is complete, consider:
- Tracking Share of Model over time
- Detecting misinformation (incorrect statements about the brand)
- Monitoring new competitor emergence
- Suggesting PR placements and outreach opportunities
- Building a lightweight dashboard for ongoing monitoring
Requirements
1. Use a Real Brand
Choose one real-world company, product, or program. No fictional brands, no mock data.
2. Fetch Live Data
Your agent must make real API calls to at least one of the following:
- Search engines
- LLM APIs (ChatGPT, Perplexity, Gemini, Claude, etc.)
- Web scraping tools
If the data could have been copy-pasted ahead of time, it doesn't count.
3. Deploy to a Public URL
Your solution must be accessible at a public URL. Localhost demos are not accepted. Judges must be able to use or demo your GEO Agent without running it on your machine.
4. Answer These Three Questions
Your agent must clearly produce:
- Audit — Where is this brand missing or misrepresented in AI answers?
- Explain — Why does this happen? (sources, structure, citations, gaps)
- Fix — What specific content or changes would improve the AI's answer?
5. Show Measurable Output
Your agent should compute and display:
- Share of Model score — % of queries where the brand appears
- Competitor comparison table — Brand vs competitors on mentions, position, sentiment
- Evidence citations — Raw responses, URLs, quoted passages
- Action plan — Prioritized recommendations (high impact / low effort first)
- Generated content — At least one artifact (e.g., wiki summary, FAQ draft, comparison page outline)
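A minimal sketch of how the competitor comparison table could be assembled from per-query extraction records. The record shape and the brand names here are invented for illustration; real records would be extracted from live AI responses:

```python
from collections import defaultdict

# Hypothetical per-query extraction records (brand, position in answer, sentiment).
records = [
    {"query": "best CRM for startups", "brand": "Acme",  "position": 2, "sentiment": "positive"},
    {"query": "best CRM for startups", "brand": "Rival", "position": 1, "sentiment": "positive"},
    {"query": "top CRM tools",         "brand": "Rival", "position": 1, "sentiment": "neutral"},
]

def comparison_table(records, n_queries):
    """Summarize mentions, mention rate, average position, and sentiment per brand."""
    stats = defaultdict(lambda: {"mentions": 0, "positions": [], "sentiments": []})
    for r in records:
        s = stats[r["brand"]]
        s["mentions"] += 1
        s["positions"].append(r["position"])
        s["sentiments"].append(r["sentiment"])
    return {
        brand: {
            "mentions": s["mentions"],
            "mention_rate_pct": round(100 * s["mentions"] / n_queries, 1),
            "avg_position": round(sum(s["positions"]) / len(s["positions"]), 2),
            "sentiments": s["sentiments"],
        }
        for brand, s in stats.items()
    }

table = comparison_table(records, n_queries=2)
print(table["Rival"]["mention_rate_pct"])  # 100.0
```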
6. Include a Live Demo
Your submission must include a live run of the agent during your presentation. Pre-recorded demos alone are not sufficient — judges need to see the agent working in real time.
7. Provide an Architecture Overview
Spend 2–3 minutes explaining how your system works: which APIs you call, how your agent decides what to do, and how the output is generated.
Prizes
Surprize
Surprise to be announced!
Judges
Mitchell Itkin
Founder / Pulse AI NYC
DJ Lee
Co-Founder / Pulse AI NYC
Judging Criteria
- Creativity: How original and insightful is the approach?
- Commercializability: Could a real company pay for this tomorrow?
- Technically Sound: Is the system technically sound and real?
- Presentation: Did you communicate the value clearly?


