first commit

This commit is contained in:
ytc1012
2026-02-04 16:11:55 +08:00
commit 0f3ee050dc
165 changed files with 25795 additions and 0 deletions

MeetSpot/AGENTS.md Normal file

@@ -0,0 +1,35 @@
# Repository Guidelines
## Project Structure & Module Organization
- `app/` holds core logic, configuration, and tools (e.g., `app/tool/meetspot_recommender.py` and the in-progress `design_tokens.py`). Treat it as the authoritative source for business rules.
- `api/index.py` wires FastAPI, middleware, and routers; `web_server.py` bootstraps the same app locally or in production.
- Presentation assets live in `templates/` (Jinja UI), `static/` (CSS, icons), and `public/` (standalone marketing pages); generated HTML lands under `workspace/js_src/` and must not be committed.
- Configuration samples sit in `config/`, docs in `docs/`, and regression or SEO tooling in `tests/` plus future `tools/` automation scripts.
## Build, Test, and Development Commands
- `pip install -r requirements.txt` (or `conda env update -f environment-dev.yml`) installs Python 3.11 dependencies.
- `python web_server.py` starts the full stack with auto env detection; `uvicorn api.index:app --reload` is preferred while iterating.
- `npm run dev` / `npm start` proxy to the same Python entry point for platforms that expect Node scripts.
- `pytest tests/ -v` runs the suite; `pytest --cov=app tests/` enforces coverage; `python tests/test_seo.py http://127.0.0.1:8000` performs the SEO audit once the server is live.
- Quality gates: `black .`, `ruff check .`, and `mypy app/` must be clean before opening a PR.
## Coding Style & Naming Conventions
- Python: 4-space indent, type hints everywhere, `snake_case` for functions, `PascalCase` for classes, and `SCREAMING_SNAKE_CASE` for constants. Keep functions under ~50 lines and prefer dataclasses for structured payloads.
- HTML/CSS: prefer BEM-like class names (`meetspot-header__title`), declare shared colors via the upcoming `static/css/design-tokens.css`, and keep inline styles limited to offline-only HTML in `workspace/js_src/`.
- Logging flows through `app/logger.py`; use structured messages (`logger.info("geo_center_calculated", extra={...})`) so log parsing stays reliable.
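A minimal sketch of the structured-message pattern, shown with stdlib `logging` for illustration (the project routes through loguru via `app/logger.py`, and the helper name here is hypothetical):

```python
import logging

logger = logging.getLogger("meetspot")

def log_geo_center(lat: float, lng: float, locations: int) -> dict:
    """Emit a machine-parseable event with stable keys instead of free text."""
    payload = {"event": "geo_center_calculated", "lat": round(lat, 6),
               "lng": round(lng, 6), "locations": locations}
    # The event name is the message; structured fields travel in `extra`,
    # so log parsers match on keys rather than on interpolated strings.
    logger.info("geo_center_calculated", extra={"payload": payload})
    return payload
```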
## Testing Guidelines
- Place new tests in `tests/` using `test_<feature>.py` naming; target fixtures that hit both FastAPI routes and tool-layer helpers.
- Maintain ≥80% coverage for the `app/` package; add focused tests when touching caching, concurrency, or SEO logic.
- Integration checks: run `python tests/test_seo.py <base_url>` against a live server and capture JSON output in the PR for visibility.
- Planned accessibility tooling (`tests/test_accessibility.py`) will be part of CI—mirror its structure for any lint-like tests you add.
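For caching-focused tests, a self-contained sketch of the shape such a test can take (`BoundedCache` is a stand-in for illustration, not the project's real cache class):

```python
# tests/test_cache.py — illustrative helper + test pair
from collections import OrderedDict

class BoundedCache:
    """Toy LRU-style cache mirroring the bounded geocode/POI caches (assumed behavior)."""
    def __init__(self, max_size: int):
        self.max_size = max_size
        self._data = OrderedDict()

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        while len(self._data) > self.max_size:
            self._data.popitem(last=False)  # evict the oldest entry

    def get(self, key):
        return self._data.get(key)

def test_cache_evicts_oldest():
    cache = BoundedCache(max_size=2)
    cache.put("a", 1)
    cache.put("b", 2)
    cache.put("c", 3)
    assert cache.get("a") is None  # oldest entry evicted
    assert cache.get("c") == 3
```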
## Commit & Pull Request Guidelines
- Follow Conventional Commits (`feat:`, `fix:`, `ci:`, `docs:`) as seen in `git log`; keep scopes small (e.g., `feat(tokens): add WCAG palette`).
- Reference related issues in the first line of the PR description, include a summary of user impact, and attach screenshots/GIFs for UI work.
- List the commands/tests you ran, note any config changes (e.g., `config/config.toml`), and mention follow-up tasks when applicable.
- Avoid committing generated artifacts from `workspace/` or credentials in `config/config.toml`; add new secrets to `.env` or deployment config.
## Configuration & Architecture Notes
- Keep `config/config.toml.example` updated when introducing new settings, and never hardcode API keys—read them via `app.config`.
- The design-token and accessibility architecture is tracked in `.claude/specs/improve-ui-ux-color-scheme/02-system-architecture.md`; align contributions with that spec and document deviations in your PR.


@@ -0,0 +1,4 @@
<?xml version="1.0"?>
<users>
<user>AF10E9C7A491D73CC4CF422DE871A651</user>
</users>

MeetSpot/CLAUDE.md Normal file

@@ -0,0 +1,212 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
MeetSpot is an **AI Agent** for multi-person meeting point recommendations. Users provide locations and requirements; the Agent calculates the geographic center and recommends optimal venues. Built with FastAPI and Python 3.11+, uses Amap (Gaode Map) API for geocoding/POI search, and DeepSeek/GPT-4o-mini for semantic scoring.
**Live Demo**: https://meetspot-irq2.onrender.com
## Quick Reference
```bash
# Environment
conda activate meetspot # Or: source venv/bin/activate
# Development
uvicorn api.index:app --reload # Preferred for iteration
python web_server.py # Full stack with auto env detection
# Test the main endpoint
curl -X POST "http://127.0.0.1:8000/api/find_meetspot" \
-H "Content-Type: application/json" \
-d '{"locations": ["北京大学", "清华大学"], "keywords": "咖啡馆"}'
# Testing
pytest tests/ -v # Full suite
pytest tests/test_file.py::test_name -v # Single test
pytest --cov=app tests/ # Coverage (target: 80%)
python tests/test_seo.py http://localhost:8000 # SEO validation (standalone)
# Quality gates (run before PRs)
black . && ruff check . && mypy app/
# Postmortem regression check (optional, runs in CI)
python tools/postmortem_check.py # Check for known issue patterns
```
**Key URLs**: Main UI (`/`), API docs (`/docs`), Health (`/health`)
## Repo Rules
- Follow `AGENTS.md` for repo-local guidelines (style, structure, what not to commit). In particular: runtime-generated files under `workspace/js_src/` must not be committed.
- There are no Cursor/Copilot rule files in this repo (no `.cursorrules`, no `.cursor/rules/`, no `.github/copilot-instructions.md`).
## Environment Setup
**Conda**: `conda env create -f environment.yml && conda activate meetspot` (env name is `meetspot`, not `meetspot-dev`)
**Pip**: `python3.11 -m venv venv && source venv/bin/activate && pip install -r requirements.txt`
**Required Environment Variables**:
- `AMAP_API_KEY` - Gaode Map API key (required)
- `AMAP_SECURITY_JS_CODE` - JS security code for frontend map
- `LLM_API_KEY` - DeepSeek/OpenAI API key (for AI chat and LLM scoring)
- `LLM_API_BASE` - API base URL (default: `https://newapi.deepwisdom.ai/v1`)
- `LLM_MODEL` - Model name (default: `deepseek-chat`)
**Local Config**: Copy `config/config.toml.example` to `config/config.toml` and fill in API keys. Alternatively, create a `.env` file with the environment variables above.
## Architecture
### Request Flow
```
POST /api/find_meetspot
        ↓
Complexity Router (assess_request_complexity)
        ↓
Rule+LLM Mode (Agent mode disabled for memory savings on free tier)
        ↓
5-Step Pipeline: Geocode → Center Calc → POI Search → Ranking → HTML Gen
```
Complexity scoring: +10/location, +15 for complex keywords, +10 for special requirements. Currently all requests use Rule+LLM mode since Agent mode is disabled (`agent_available = False` in `api/index.py`).
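The scoring rule can be sketched as follows (the complex-keyword markers are illustrative; the real `assess_request_complexity` lives in `api/index.py`):

```python
COMPLEX_MARKERS = ("安静", "商务", "包间", "quiet", "business", "private")  # assumed list

def assess_request_complexity(locations, keywords="", requirements=""):
    """+10 per location, +15 for complex keywords, +10 for special requirements."""
    score = 10 * len(locations)
    if any(marker in keywords for marker in COMPLEX_MARKERS):
        score += 15
    if requirements.strip():
        score += 10
    return score
```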
### Entry Points
- `web_server.py` - Main entry, auto-detects production vs development
- `api/index.py` - FastAPI app with all endpoints, middleware, and request handling
### Three-Tier Configuration (Graceful Degradation)
| Mode | Trigger | What Works |
|------|---------|------------|
| Full | `config/config.toml` exists | All features, TOML-based config |
| Simplified | `RAILWAY_ENVIRONMENT` set | Uses `app/config_simple.py` |
| Minimal | Only `AMAP_API_KEY` env var | `MinimalConfig` class in `api/index.py`, basic recommendations only |
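The fallback order can be sketched as (mode names and the missing-key error are illustrative):

```python
import os

def detect_config_mode() -> str:
    """Resolve the three documented tiers in priority order."""
    if os.path.exists("config/config.toml"):
        return "full"        # TOML-based config, all features
    if os.getenv("RAILWAY_ENVIRONMENT"):
        return "simplified"  # app/config_simple.py
    if os.getenv("AMAP_API_KEY"):
        return "minimal"     # MinimalConfig in api/index.py
    raise RuntimeError("AMAP_API_KEY is required in every mode")
```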
### Core Components
```
app/tool/meetspot_recommender.py # Main recommendation engine (CafeRecommender class)
|- university_mapping dict # 45 abbreviations (e.g., "北大" -> "北京市海淀区北京大学")
|- landmark_mapping dict # 45 city landmarks (e.g., "陆家嘴" -> "上海市浦东新区陆家嘴")
|- PLACE_TYPE_CONFIG dict # 12 venue themes with colors, icons
|- _rank_places() # 100-point scoring algorithm
|- _generate_html_content() # Standalone HTML with Amap JS API
|- geocode_cache (max 30) # LRU-style address cache (reduced for free tier)
|- poi_cache (max 15) # LRU-style POI cache (reduced for free tier)
app/design_tokens.py # WCAG AA color palette, CSS generation
api/routers/seo_pages.py # SEO landing pages
```
### LLM Scoring (Agent Mode)
When Agent Mode is enabled, final venue scores blend rule-based and LLM semantic analysis:
```
Final Score = Rule Score * 0.4 + LLM Score * 0.6
```
Agent Mode is currently disabled (`agent_available = False`) to conserve memory on free hosting tiers.
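The blend is a direct transcription of the formula above:

```python
def blend_scores(rule_score: float, llm_score: float) -> float:
    """Final score = 40% rule-based + 60% LLM semantic score."""
    return rule_score * 0.4 + llm_score * 0.6
```

So a venue scoring 80 by rules and 90 by the LLM lands at 86.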
### Data Flow
```
1. Address enhancement (90+ university/landmark mappings)
2. Geocoding via Amap API (with retry + rate limiting)
3. Center point calculation (spherical geometry)
4. POI search (concurrent for multiple keywords)
Fallback: tries 餐厅->咖啡馆->商场->美食 (restaurant -> cafe -> mall -> food), then expands to 50km
5. Ranking with multi-scenario balancing (max 8 venues)
6. HTML generation -> workspace/js_src/
```
### Optional Components
Database layer (`app/db/`, `app/models/`) is optional - core recommendation works without it. Used for auth/social features with SQLite + aiosqlite.
Experimental agent endpoint (`/api/find_meetspot_agent`) requires OpenManus framework - **not production-ready**.
## Key Patterns
### Ranking Algorithm
Edit `_rank_places()` in `meetspot_recommender.py`:
- Base: 30 points (rating x 6)
- Popularity: 20 points (log-scaled reviews)
- Distance: 25 points (500m = full score, decays)
- Scenario: 15 points (keyword match)
- Requirements: 10 points (parking/quiet/business)
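A sketch of the breakdown (the caps come from the list above; the log base and decay rate are assumptions, not the real implementation):

```python
import math

def rank_score(rating: float, review_count: int, distance_m: float,
               scenario_match: bool = False, requirements_match: bool = False) -> float:
    base = min(rating * 6, 30.0)                              # 30 pts: rating x 6
    popularity = min(5 * math.log10(review_count + 1), 20.0)  # 20 pts, log-scaled reviews
    if distance_m <= 500:
        distance = 25.0                                       # full score within 500 m
    else:
        distance = max(0.0, 25.0 - (distance_m - 500) / 400)  # assumed linear decay
    scenario = 15.0 if scenario_match else 0.0                # keyword match
    requirements = 10.0 if requirements_match else 0.0        # parking/quiet/business
    return base + popularity + distance + scenario + requirements
```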
### Distance Filtering
Two-stage distance handling in `meetspot_recommender.py`:
1. **POI Search**: Amap API `radius` parameter (hardcoded 5000m, fallback to 50000m)
2. **Post-filter**: `max_distance` parameter in `_rank_places()` (default 100km, in meters)
The `max_distance` filter applies after POI retrieval during ranking. To change search radius, modify `radius=5000` in `_search_places()` calls around lines 556-643.
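The second stage amounts to a post-retrieval filter (the `distance_m` field name is illustrative):

```python
def post_filter_by_distance(pois: list, max_distance: float = 100_000) -> list:
    """Drop venues farther than max_distance meters from the center point."""
    return [p for p in pois if p.get("distance_m", 0) <= max_distance]
```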
### Brand Knowledge Base
`BRAND_FEATURES` dict in `meetspot_recommender.py` contains 50+ brand profiles (Starbucks, Haidilao, etc.) with feature scores (0.0-1.0) for: quiet, WiFi, business, parking, child-friendly, 24h. Used in requirements matching - brands scoring >=0.7 satisfy the requirement. Place types prefixed with `_` (e.g., `_library`) provide defaults.
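The matching rule can be sketched like this (entries below are illustrative; the real dict holds 50+ profiles):

```python
# Illustrative entries only — see BRAND_FEATURES in meetspot_recommender.py
BRAND_FEATURES = {
    "星巴克": {"quiet": 0.6, "wifi": 0.9, "business": 0.8},
    "_library": {"quiet": 0.9, "wifi": 0.5},  # `_` prefix marks place-type defaults
}

def satisfies_requirement(brand: str, place_type: str, requirement: str,
                          threshold: float = 0.7) -> bool:
    """A venue satisfies a requirement when its feature score is >= 0.7;
    unknown brands fall back to the `_<place_type>` defaults."""
    features = BRAND_FEATURES.get(brand) or BRAND_FEATURES.get(f"_{place_type}", {})
    return features.get(requirement, 0.0) >= threshold
```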
### Adding Address Mappings
Two sources for address resolution:
1. **External file**: `data/address_aliases.json` - JSON file with `university_aliases` and `landmark_aliases` dicts. Preferred for new mappings.
2. **Internal dicts**: `university_mapping` and `landmark_mapping` in `_enhance_address()` method of `meetspot_recommender.py`. Use for mappings requiring city prefixes (prevents cross-city geocoding errors).
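The two-source lookup can be sketched as follows (the lookup order, internal dicts first, is an assumption):

```python
import json
from pathlib import Path

def enhance_address(raw: str, internal: dict = None,
                    aliases_path: str = "data/address_aliases.json") -> str:
    """Resolve a short alias (e.g. '北大') to a full, city-prefixed address."""
    if internal and raw in internal:
        return internal[raw]
    path = Path(aliases_path)
    if path.exists():
        data = json.loads(path.read_text(encoding="utf-8"))
        for table in ("university_aliases", "landmark_aliases"):
            if raw in data.get(table, {}):
                return data[table][raw]
    return raw  # no mapping: pass through unchanged
```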
### Adding Venue Themes
Add entry to `PLACE_TYPE_CONFIG` with: Chinese name, Boxicons icons, 6 color values.
## Postmortem System
Automated regression prevention system that tracks historical fixes and warns when code changes might reintroduce past bugs.
### Structure
```
postmortem/
  PM-2025-001.yaml ... PM-2026-xxx.yaml  # Historical fix documentation
tools/
  postmortem_init.py      # Generate initial knowledge base from git history
  postmortem_check.py     # Check code changes against known patterns
  postmortem_generate.py  # Generate postmortem for a single commit
```
### CI Integration
- `postmortem-check.yml`: Runs on PRs, warns if changes match known issue patterns
- `postmortem-update.yml`: Auto-generates postmortem when `fix:` commits merge to main
### Adding New Postmortems
When fixing a bug, the CI will auto-generate a postmortem. For manual creation:
```bash
python tools/postmortem_generate.py <commit-hash>
```
Each postmortem YAML contains triggers (file patterns, function names, regex, keywords) that enable multi-dimensional pattern matching.
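Matching might look like this (the trigger keys mirror the YAML fields named above; the exact schema is an assumption):

```python
import fnmatch
import re

def matches_triggers(changed_files: list, diff_text: str, triggers: dict) -> bool:
    """True when a change touches any of one postmortem's trigger dimensions."""
    if any(fnmatch.fnmatch(f, pat)
           for f in changed_files
           for pat in triggers.get("file_patterns", [])):
        return True
    if any(re.search(pat, diff_text) for pat in triggers.get("regex", [])):
        return True
    return any(kw in diff_text for kw in triggers.get("keywords", []))
```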
## Debugging
| Issue | Solution |
|-------|----------|
| `未找到AMAP_API_KEY` (AMAP_API_KEY not found) | Set the environment variable |
| Import errors in production | Check MinimalConfig fallback |
| Wrong city geocoding | Add to `landmark_mapping` with city prefix |
| Empty POI results | Fallback mechanism handles this automatically |
| Render OOM (512MB) | Caches are reduced (30/15 limits); Agent mode disabled |
| Render service down | Trigger redeploy: `git commit --allow-empty -m "trigger redeploy" && git push` |
**Logging**: Uses loguru via `app/logger.py`. `/health` endpoint shows config status.
## Deployment
Hosted on Render free tier (512MB RAM, cold starts after 15min idle).
**Redeploy**: Push to `main` branch triggers auto-deploy. For manual restart without code changes:
```bash
git commit --allow-empty -m "chore: trigger redeploy" && git push origin main
```
**Generated artifacts**: HTML files in `workspace/js_src/` are runtime-generated and should not be committed.
## Code Style
- Python: 4-space indent, type hints, `snake_case` functions, `PascalCase` classes
- CSS: BEM-like (`meetspot-header__title`), colors from `design_tokens.py`
- Commits: Conventional Commits (`feat:`, `fix:`, `docs:`)

MeetSpot/Dockerfile Normal file

@@ -0,0 +1,38 @@
FROM python:3.12-slim
# Set the working directory
WORKDIR /app
# Set environment variables
ENV PYTHONPATH=/app
ENV PYTHONUNBUFFERED=1
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    git \
    curl \
    && rm -rf /var/lib/apt/lists/*
# Copy the dependency manifest
COPY requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code
COPY . .
# Create a non-root user
RUN useradd --create-home --shell /bin/bash meetspot
RUN chown -R meetspot:meetspot /app
USER meetspot
# Expose the application port
EXPOSE 8000
# Health check
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1
# Start the application
CMD ["python", "web_server.py"]

MeetSpot/LICENSE Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 manna_and_poem
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

MeetSpot/README.md Normal file

@@ -0,0 +1,296 @@
<div align="center">
# MeetSpot
<img src="docs/logo.jpg" alt="MeetSpot Logo" width="200"/>
### AI Agent for Multi-Person Meeting Point Recommendations
*Not just a search tool. An autonomous agent that decides the fairest meeting point for everyone.*
[![Live Demo](https://img.shields.io/badge/Live-Demo-brightgreen?style=for-the-badge)](https://meetspot-irq2.onrender.com)
[![Video Demo](https://img.shields.io/badge/Bilibili-Demo-00A1D6?style=for-the-badge&logo=bilibili)](https://www.bilibili.com/video/BV1aUK7zNEvo/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
[![FastAPI](https://img.shields.io/badge/FastAPI-0.115+-009688.svg)](https://fastapi.tiangolo.com/)
[![Build Status](https://github.com/calderbuild/MeetSpot/actions/workflows/ci.yml/badge.svg)](https://github.com/calderbuild/MeetSpot/actions)
[English](README.md) | [简体中文](README_ZH.md)
</div>
---
## Why MeetSpot?
Most location tools return results near *you*. MeetSpot calculates the **geographic center** of all participants and returns AI-ranked venues that minimize everyone's travel time.
| Traditional Tools | MeetSpot |
|-------------------|----------|
| Search near your location | Calculate fair center for all |
| Keyword-based ranking | AI-powered multi-factor scoring |
| Static results | Adaptive dual-mode routing |
| No reasoning | Explainable AI with chain-of-thought |
<div align="center">
<img src="docs/show1.jpg" alt="MeetSpot Interface" width="85%"/>
</div>
---
## Agent Architecture
MeetSpot is an **AI Agent** - it makes autonomous decisions based on request complexity, not just executes searches.
```
                       User Request
                            │
             ┌──────────────┴──────────────┐
             │      Complexity Router      │
             │    (Autonomous Decision)    │
             └──────────────┬──────────────┘
         ┌──────────────────┼──────────────────┐
         │                  │                  │
         ▼                  │                  ▼
┌─────────────────┐         │         ┌─────────────────┐
│    Rule Mode    │         │         │   Agent Mode    │
│    (2-4 sec)    │         │         │   (8-15 sec)    │
│  Deterministic  │         │         │  LLM-Enhanced   │
└────────┬────────┘         │         └────────┬────────┘
         │                  │                  │
         └──────────────────┼──────────────────┘
                            │
             ┌──────────────┴──────────────┐
             │      5-Step Processing      │
             │          Pipeline           │
             └──────────────┬──────────────┘
 ┌──────────┬──────────┬────┴────┬──────────┬──────────┐
 │          │          │         │          │          │
 ▼          ▼          ▼         ▼          ▼          ▼
Geocode  Center       POI     Ranking     HTML       Result
          Calc       Search               Gen
```
### Intelligent Mode Selection
The Agent autonomously decides which processing mode to use:
| Factor | Score | Example |
|--------|-------|---------|
| Location count | +10/location | 4 locations = 40 pts |
| Complex keywords | +15 | "quiet business cafe with private rooms" |
| Special requirements | +10 | "parking, wheelchair accessible, WiFi" |
- **Score < 40**: Rule Mode (fast, deterministic, pattern-matched)
- **Score >= 40**: Agent Mode (LLM reasoning, semantic understanding)
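The routing decision reduces to a few lines (boolean factor flags stand in for the real keyword analysis):

```python
AGENT_THRESHOLD = 40  # documented cutoff between the two modes

def choose_mode(location_count: int, complex_keywords: bool, special_requirements: bool):
    """Apply the factor table: +10/location, +15 complex keywords, +10 requirements."""
    score = 10 * location_count + 15 * complex_keywords + 10 * special_requirements
    mode = "agent" if score >= AGENT_THRESHOLD else "rule"
    return mode, score
```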
### Agent Mode Scoring
```
Final Score = Rule Score × 0.4 + LLM Score × 0.6
```
The LLM analyzes semantic fit between venues and requirements, then blends with rule-based scoring. Results include **Explainable AI** visualization showing the agent's reasoning process.
### 5-Step Pipeline
| Step | Function | Details |
|------|----------|---------|
| **Geocode** | Address → Coordinates | 90+ smart mappings (universities, landmarks) |
| **Center Calc** | Fair point calculation | Spherical geometry for accuracy |
| **POI Search** | Venue discovery | Concurrent async search, auto-fallback |
| **Ranking** | Multi-factor scoring | Base(30) + Popularity(20) + Distance(25) + Scenario(15) + Requirements(10) |
| **HTML Gen** | Interactive map | Amap JS API integration |
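The Center Calc step can be sketched with 3D unit vectors, one standard way to average points on a sphere (the project's exact method may differ):

```python
import math

def geographic_center(coords):
    """Average (lat, lng) pairs on the unit sphere instead of naively in degrees."""
    x = y = z = 0.0
    for lat, lng in coords:
        la, lo = math.radians(lat), math.radians(lng)
        x += math.cos(la) * math.cos(lo)
        y += math.cos(la) * math.sin(lo)
        z += math.sin(la)
    n = len(coords)
    x, y, z = x / n, y / n, z / n
    # Convert the averaged vector back to latitude/longitude
    lat = math.degrees(math.atan2(z, math.hypot(x, y)))
    lng = math.degrees(math.atan2(y, x))
    return lat, lng
```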
---
## Quick Start
```bash
# Clone and install
git clone https://github.com/calderbuild/MeetSpot.git && cd MeetSpot
pip install -r requirements.txt
# Configure (get key from https://lbs.amap.com/)
cp config/config.toml.example config/config.toml
# Edit config.toml and add your AMAP_API_KEY
# Run
python web_server.py
```
Open http://127.0.0.1:8000
---
## API Reference
### Main Endpoint
`POST /api/find_meetspot`
```json
{
"locations": ["Peking University", "Tsinghua University", "Renmin University"],
"keywords": "cafe restaurant",
"user_requirements": "parking, quiet environment"
}
```
**Response:**
```json
{
"success": true,
"html_url": "/workspace/js_src/recommendation_xxx.html",
"center": {"lat": 39.99, "lng": 116.32},
"venues_count": 8
}
```
### Other Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/find_meetspot_agent` | POST | Force Agent Mode (LLM reasoning) |
| `/api/ai_chat` | POST | AI customer service chat |
| `/health` | GET | System health check |
| `/docs` | GET | Interactive API documentation |
---
## Screenshots
<table>
<tr>
<td width="50%"><img src="docs/agent-thinking.jpg" alt="Agent Reasoning"/><p align="center"><b>Agent Chain-of-Thought</b></p></td>
<td width="50%"><img src="docs/result-map.jpg" alt="Interactive Map"/><p align="center"><b>Interactive Map View</b></p></td>
</tr>
<tr>
<td width="50%"><img src="docs/多维度智能评分show4.jpg" alt="AI Scoring"/><p align="center"><b>Multi-Factor AI Scoring</b></p></td>
<td width="50%"><img src="docs/show5推荐地点.jpg" alt="Venue Cards"/><p align="center"><b>Venue Recommendation Cards</b></p></td>
</tr>
</table>
<details>
<summary><b>More Screenshots</b></summary>
<table>
<tr>
<td width="50%"><img src="docs/homepage.jpg" alt="Homepage"/><p align="center"><b>Homepage</b></p></td>
<td width="50%"><img src="docs/finder-input.jpg" alt="Input Interface"/><p align="center"><b>Meeting Point Finder</b></p></td>
</tr>
<tr>
<td width="50%"><img src="docs/result-summary.jpg" alt="Results"/><p align="center"><b>Results Summary</b></p></td>
<td width="50%"><img src="docs/AI客服.jpg" alt="AI Chat"/><p align="center"><b>AI Customer Service</b></p></td>
</tr>
</table>
</details>
---
## Tech Stack
| Layer | Technologies |
|-------|--------------|
| **Backend** | FastAPI, Pydantic, aiohttp, SQLAlchemy 2.0, asyncio |
| **Frontend** | HTML5, CSS3, Vanilla JavaScript, Boxicons |
| **Maps** | Amap (Gaode) - Geocoding, POI Search, JS API |
| **AI** | DeepSeek / GPT-4o-mini for semantic analysis |
| **Deploy** | Render, Railway, Docker, Vercel |
---
## Project Structure
```
MeetSpot/
├── api/
│ └── index.py # FastAPI application entry
├── app/
│ ├── tool/
│ │ └── meetspot_recommender.py # Core recommendation engine
│ ├── config.py # Configuration management
│ └── design_tokens.py # WCAG-compliant color system
├── templates/ # Jinja2 templates
├── public/ # Static assets
└── workspace/js_src/ # Generated result pages
```
---
## Development
```bash
# Development server with hot reload
uvicorn api.index:app --reload
# Run tests
pytest tests/ -v
# Code quality
black . && ruff check . && mypy app/
```
---
## Contributing
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
---
## Contact
<table>
<tr>
<td>
**Email:** Johnrobertdestiny@gmail.com
**GitHub:** [Issues](https://github.com/calderbuild/MeetSpot/issues)
**Blog:** [jasonrobert.me](https://jasonrobert.me/)
</td>
<td align="center">
<img src="public/docs/vx_chat.png" alt="WeChat" width="150"/>
**Personal WeChat**
</td>
<td align="center">
<img src="public/docs/vx_group.png" alt="WeChat Group" width="150"/>
**WeChat Group**
</td>
</tr>
</table>
---
## License
MIT License - see [LICENSE](LICENSE) for details.
---
<div align="center">
**If MeetSpot helps you, please give it a star!**
[![Star History Chart](https://api.star-history.com/svg?repos=calderbuild/MeetSpot&type=Date)](https://star-history.com/#calderbuild/MeetSpot&Date)
</div>

MeetSpot/README_ZH.md Normal file

@@ -0,0 +1,294 @@
<div align="center">
# MeetSpot 聚点
<img src="docs/logo.jpg" alt="MeetSpot Logo" width="200"/>
### 多人会面地点智能推荐 AI Agent
*不是搜索工具,而是能自主决策的智能体,为每个人找到最公平的会面点。*
[![在线体验](https://img.shields.io/badge/在线-体验-brightgreen?style=for-the-badge)](https://meetspot-irq2.onrender.com)
[![演示视频](https://img.shields.io/badge/Bilibili-演示-00A1D6?style=for-the-badge&logo=bilibili)](https://www.bilibili.com/video/BV1aUK7zNEvo/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
[![FastAPI](https://img.shields.io/badge/FastAPI-0.115+-009688.svg)](https://fastapi.tiangolo.com/)
[![Build Status](https://github.com/calderbuild/MeetSpot/actions/workflows/ci.yml/badge.svg)](https://github.com/calderbuild/MeetSpot/actions)
[English](README.md) | 简体中文
</div>
---
## 为什么选择 MeetSpot?
传统地图工具搜索的是**你附近**的地点。MeetSpot 计算所有参与者的**地理中心点**,用 AI 智能排序推荐场所,让每个人的出行时间都最小化。
| 传统工具 | MeetSpot |
|----------|----------|
| 搜索你附近的地点 | 计算所有人的公平中心点 |
| 关键词匹配排序 | AI 多因素智能评分 |
| 静态搜索结果 | 双模式自适应路由 |
| 无法解释推荐理由 | 可解释 AI,展示思维链 |
<div align="center">
<img src="docs/show1.jpg" alt="MeetSpot 界面" width="85%"/>
</div>
---
## Agent 架构
MeetSpot 是一个 **AI Agent**——它根据请求复杂度自主决策处理方式,而不仅仅是执行搜索。
```
用户请求
┌──────────────┴──────────────┐
│ 复杂度路由器 │
│ (自主决策引擎) │
└──────────────┬──────────────┘
┌────────────────────┼────────────────────┐
│ │ │
▼ │ ▼
┌─────────────────┐ │ ┌─────────────────┐
│ 规则模式 │ │ │ Agent 模式 │
│ (2-4 秒) │ │ │ (8-15 秒) │
│ 确定性处理 │ │ │ LLM 增强 │
└────────┬────────┘ │ └────────┬────────┘
│ │ │
└─────────────────────┼───────────────────┘
┌──────────────┴──────────────┐
│ 5 步处理流水线 │
└──────────────┬──────────────┘
┌──────────┬──────────┬────┴────┬──────────┬──────────┐
│ │ │ │ │ │
▼ ▼ ▼ ▼ ▼ ▼
地理编码 中心计算 POI搜索 智能排序 HTML生成 结果
```
### 智能模式选择
Agent 自主决定使用哪种处理模式:
| 因素 | 分值 | 示例 |
|------|------|------|
| 地点数量 | +10/个 | 4 个地点 = 40 分 |
| 复杂关键词 | +15 | "安静的商务咖啡馆,有包间" |
| 特殊需求 | +10 | "停车方便、无障碍设施、有 WiFi" |
- **评分 < 40**:规则模式(快速、确定性、模式匹配)
- **评分 >= 40**:Agent 模式(LLM 推理、语义理解)
### Agent 模式评分
```
最终得分 = 规则得分 × 0.4 + LLM 得分 × 0.6
```
LLM 分析场所与需求的语义匹配度,再与规则评分融合。结果页面包含**可解释 AI** 可视化,展示 Agent 的推理过程。
### 5 步处理流水线
| 步骤 | 功能 | 详情 |
|------|------|------|
| **地理编码** | 地址 → 坐标 | 90+ 智能映射(大学简称、城市地标) |
| **中心计算** | 公平点计算 | 球面几何保证精确性 |
| **POI 搜索** | 场所发现 | 并发异步搜索,自动降级 |
| **智能排序** | 多因素评分 | 基础分(30) + 热度分(20) + 距离分(25) + 场景匹配(15) + 需求匹配(10) |
| **HTML 生成** | 交互式地图 | 集成高德 JS API |
---
## 快速开始
```bash
# 克隆并安装
git clone https://github.com/calderbuild/MeetSpot.git && cd MeetSpot
pip install -r requirements.txt
# 配置(从 https://lbs.amap.com/ 获取密钥)
cp config/config.toml.example config/config.toml
# 编辑 config.toml,填入你的 AMAP_API_KEY
# 运行
python web_server.py
```
浏览器访问 http://127.0.0.1:8000
---
## API 接口
### 主接口
`POST /api/find_meetspot`
```json
{
"locations": ["北京大学", "清华大学", "中国人民大学"],
"keywords": "咖啡馆 餐厅",
"user_requirements": "停车方便,环境安静"
}
```
**响应:**
```json
{
"success": true,
"html_url": "/workspace/js_src/recommendation_xxx.html",
"center": {"lat": 39.99, "lng": 116.32},
"venues_count": 8
}
```
### 其他接口
| 接口 | 方法 | 说明 |
|------|------|------|
| `/api/find_meetspot_agent` | POST | 强制使用 Agent 模式(LLM 推理) |
| `/api/ai_chat` | POST | AI 智能客服对话 |
| `/health` | GET | 系统健康检查 |
| `/docs` | GET | 交互式 API 文档 |
---
## 产品截图
<table>
<tr>
<td width="50%"><img src="docs/agent-thinking.jpg" alt="Agent 推理"/><p align="center"><b>Agent 思维链展示</b></p></td>
<td width="50%"><img src="docs/result-map.jpg" alt="交互式地图"/><p align="center"><b>交互式地图视图</b></p></td>
</tr>
<tr>
<td width="50%"><img src="docs/多维度智能评分show4.jpg" alt="AI 评分"/><p align="center"><b>多维度 AI 评分</b></p></td>
<td width="50%"><img src="docs/show5推荐地点.jpg" alt="场所卡片"/><p align="center"><b>场所推荐卡片</b></p></td>
</tr>
</table>
<details>
<summary><b>更多截图</b></summary>
<table>
<tr>
<td width="50%"><img src="docs/homepage.jpg" alt="首页"/><p align="center"><b>首页</b></p></td>
<td width="50%"><img src="docs/finder-input.jpg" alt="输入界面"/><p align="center"><b>会面点查找</b></p></td>
</tr>
<tr>
<td width="50%"><img src="docs/result-summary.jpg" alt="结果"/><p align="center"><b>推荐结果汇总</b></p></td>
<td width="50%"><img src="docs/AI客服.jpg" alt="AI 客服"/><p align="center"><b>AI 智能客服</b></p></td>
</tr>
</table>
</details>
---
## 技术栈
| 层级 | 技术 |
|------|------|
| **后端** | FastAPI, Pydantic, aiohttp, SQLAlchemy 2.0, asyncio |
| **前端** | HTML5, CSS3, 原生 JavaScript, Boxicons |
| **地图** | 高德地图 - 地理编码、POI 搜索、JS API |
| **AI** | DeepSeek / GPT-4o-mini 语义分析 |
| **部署** | Render, Railway, Docker, Vercel |
---
## 项目结构
```
MeetSpot/
├── api/
│ └── index.py # FastAPI 应用入口
├── app/
│ ├── tool/
│ │ └── meetspot_recommender.py # 核心推荐引擎
│ ├── config.py # 配置管理
│ └── design_tokens.py # WCAG 无障碍配色系统
├── templates/ # Jinja2 模板
├── public/ # 静态资源
└── workspace/js_src/ # 生成的结果页面
```
---
## 开发
```bash
# 开发服务器(热重载)
uvicorn api.index:app --reload
# 运行测试
pytest tests/ -v
# 代码质量检查
black . && ruff check . && mypy app/
```
---
## 参与贡献
欢迎贡献代码!步骤:
1. Fork 本仓库
2. 创建特性分支 (`git checkout -b feature/amazing-feature`)
3. 提交更改 (`git commit -m 'Add amazing feature'`)
4. 推送分支 (`git push origin feature/amazing-feature`)
5. 提交 Pull Request
---
## 联系方式
<table>
<tr>
<td>
**邮箱:** Johnrobertdestiny@gmail.com
**GitHub:** [Issues](https://github.com/calderbuild/MeetSpot/issues)
**博客:** [jasonrobert.me](https://jasonrobert.me/)
</td>
<td align="center">
<img src="public/docs/vx_chat.png" alt="微信" width="150"/>
**个人微信**
</td>
<td align="center">
<img src="public/docs/vx_group.png" alt="微信交流群" width="150"/>
**微信交流群**
</td>
</tr>
</table>
---
## 许可证
MIT License - 详见 [LICENSE](LICENSE)
---
<div align="center">
**觉得有用?请给个 Star 支持一下!**
[![Star History Chart](https://api.star-history.com/svg?repos=calderbuild/MeetSpot&type=Date)](https://star-history.com/#calderbuild/MeetSpot&Date)
</div>

MeetSpot/api/index.py Normal file (1040 lines)

File diff suppressed because it is too large


@@ -0,0 +1,92 @@
"""Authentication API routes."""
from fastapi import APIRouter, Depends, HTTPException, status
from pydantic import BaseModel, Field
from sqlalchemy.ext.asyncio import AsyncSession

from app.auth.jwt import create_access_token, get_current_user
from app.auth.sms import send_login_code, validate_code
from app.db import crud
from app.db.database import get_db
from app.models.user import User

router = APIRouter(prefix="/api/auth", tags=["auth"])


class SendCodeRequest(BaseModel):
    phone: str = Field(..., min_length=4, max_length=20, description="Phone number")


class VerifyCodeRequest(BaseModel):
    phone: str = Field(..., min_length=4, max_length=20, description="Phone number")
    code: str = Field(..., min_length=4, max_length=10, description="Verification code")
    nickname: str | None = Field(None, description="Nickname for first login")
    avatar_url: str | None = Field(None, description="Avatar URL (optional)")


class AuthResponse(BaseModel):
    success: bool
    token: str
    user: dict


def _mask_phone(phone: str) -> str:
    """Mask the middle digits of a phone number."""
    if len(phone) < 7:
        return phone
    return f"{phone[:3]}****{phone[-4:]}"


def _serialize_user(user: User) -> dict:
    """Canonical user payload for API responses."""
    return {
        "id": user.id,
        "phone": _mask_phone(user.phone),
        "nickname": user.nickname,
        "avatar_url": user.avatar_url or "",
        "created_at": user.created_at,
        "last_login": user.last_login,
    }


@router.post("/send_code")
async def send_code(payload: SendCodeRequest):
    """Issue a login verification code (MVP stage: returns a fixed mock value)."""
    code = await send_login_code(payload.phone)
    return {"success": True, "message": "Verification code sent", "code": code}


@router.post("/verify_code", response_model=AuthResponse)
async def verify_code(
    payload: VerifyCodeRequest, db: AsyncSession = Depends(get_db)
):
    """Verify the code and return a JWT."""
    if not validate_code(payload.phone, payload.code):
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid verification code")
    user = await crud.get_user_by_phone(db, payload.phone)
    nickname = payload.nickname
    avatar_url = payload.avatar_url or ""
    # First login creates the user; returning users may update nickname/avatar
    if not user:
        user = await crud.create_user(db, phone=payload.phone, nickname=nickname, avatar_url=avatar_url)
    else:
        if nickname:
            user.nickname = nickname
        user.avatar_url = avatar_url or user.avatar_url
        await db.commit()
        await db.refresh(user)
    await crud.touch_last_login(db, user)
    token = create_access_token({"sub": user.id, "phone": user.phone})
    return {"success": True, "token": token, "user": _serialize_user(user)}


@router.get("/me")
async def get_me(current_user: User = Depends(get_current_user)):
    """Return the currently authenticated user."""
    return {"user": _serialize_user(current_user)}


@@ -0,0 +1,50 @@
from typing import List, Optional, Dict, Any
from fastapi import APIRouter, HTTPException, Depends
from pydantic import BaseModel
from app.tool.meetspot_recommender import CafeRecommender
from app.logger import logger
router = APIRouter(prefix="/api/miniprogram", tags=["miniprogram"])
class LocationItem(BaseModel):
lng: float
lat: float
address: Optional[str] = ""
name: Optional[str] = ""
class CalculateRequest(BaseModel):
locations: List[LocationItem]
keywords: Optional[str] = "咖啡馆"
requirements: Optional[str] = ""
min_rating: Optional[float] = 0.0
max_distance: Optional[int] = 100000
price_range: Optional[str] = ""
@router.post("/calculate")
async def calculate_meetspot(request: CalculateRequest):
"""小程序核心计算接口:根据坐标直接计算推荐"""
try:
recommender = CafeRecommender()
# 转换 locations 为 list of dict
location_dicts = [loc.model_dump() for loc in request.locations]
result = await recommender.execute_for_miniprogram(
locations=location_dicts,
keywords=request.keywords,
user_requirements=request.requirements,
min_rating=request.min_rating,
max_distance=request.max_distance,
price_range=request.price_range
)
        # Business-logic failures still return HTTP 200 with the error carried in the body
        return result
except Exception as e:
logger.error(f"Miniprogram calculation failed: {e}")
raise HTTPException(status_code=500, detail=str(e))
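`CafeRecommender.execute_for_miniprogram` lives in `app/tool/meetspot_recommender.py` and is not part of this diff. The geometric-midpoint step it presumably performs can be sketched in isolation: convert each (lng, lat) pair to a 3-D unit vector, average, and convert back. Names here are illustrative, not the project's API:

```python
import math


def spherical_midpoint(points: list[tuple[float, float]]) -> tuple[float, float]:
    """Midpoint of (lng, lat) pairs on a sphere via 3-D vector averaging."""
    x = y = z = 0.0
    for lng, lat in points:
        lng_r, lat_r = math.radians(lng), math.radians(lat)
        x += math.cos(lat_r) * math.cos(lng_r)
        y += math.cos(lat_r) * math.sin(lng_r)
        z += math.sin(lat_r)
    n = len(points)
    x, y, z = x / n, y / n, z / n
    lng = math.degrees(math.atan2(y, x))
    lat = math.degrees(math.atan2(z, math.hypot(x, y)))
    return lng, lat


center = spherical_midpoint([(116.30, 39.99), (116.40, 39.90)])
```

Unlike a naive arithmetic mean of coordinates, this stays correct near the antimeridian and at high latitudes.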


@@ -0,0 +1,389 @@
"""SEO page routes - SSR pages and crawler-friendly output."""
from __future__ import annotations
import json
import os
from datetime import datetime
from functools import lru_cache
from typing import Dict, List, Optional
from fastapi import APIRouter, HTTPException, Request, Response
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates
from slowapi import Limiter
from slowapi.util import get_remote_address
from api.services.seo_content import seo_content_generator as seo_generator
router = APIRouter()
templates = Jinja2Templates(directory="templates")
limiter = Limiter(key_func=get_remote_address)
@lru_cache(maxsize=128)
def load_cities() -> List[Dict]:
    """Load city data, creating a default file if missing."""
cities_file = "data/cities.json"
if not os.path.exists(cities_file):
os.makedirs("data", exist_ok=True)
default_payload = {"cities": []}
with open(cities_file, "w", encoding="utf-8") as fh:
json.dump(default_payload, fh, ensure_ascii=False, indent=2)
return []
with open(cities_file, "r", encoding="utf-8") as fh:
payload = json.load(fh)
return payload.get("cities", [])
def _get_city_by_slug(city_slug: str) -> Optional[Dict]:
for city in load_cities():
if city.get("slug") == city_slug:
return city
return None
def _build_schema_list(*schemas: Dict) -> List[Dict]:
return [schema for schema in schemas if schema]
@router.get("/", response_class=HTMLResponse)
@limiter.limit("60/minute")
async def homepage(request: Request):
    """Homepage - serves SEO-friendly content."""
meta_tags = seo_generator.generate_meta_tags("homepage", {})
faq_schema = seo_generator.generate_schema_org(
"faq",
{
"faqs": [
{
"question": "MeetSpot如何计算最佳聚会地点",
"answer": "我们使用球面几何算法计算所有参与者位置的地理中点, 再推荐附近高评分场所。",
},
{
"question": "MeetSpot支持多少人的聚会?",
"answer": "默认支持2-10人, 满足大多数团队与家人聚会需求。",
},
{
"question": "需要付费吗?",
"answer": "MeetSpot完全免费且开源, 无需注册即可使用。",
},
]
},
)
schema_list = _build_schema_list(
seo_generator.generate_schema_org("webapp", {}),
seo_generator.generate_schema_org("website", {"search_url": "/search"}),
seo_generator.generate_schema_org("organization", {}),
seo_generator.generate_schema_org(
"breadcrumb", {"items": [{"name": "Home", "url": "/"}]}
),
faq_schema,
)
return templates.TemplateResponse(
"pages/home.html",
{
"request": request,
"meta_title": meta_tags["title"],
"meta_description": meta_tags["description"],
"meta_keywords": meta_tags["keywords"],
"canonical_url": "https://meetspot-irq2.onrender.com/",
"schema_jsonld": schema_list,
"breadcrumbs": [],
"cities": load_cities(),
},
)
@router.get("/meetspot/{city_slug}", response_class=HTMLResponse)
@limiter.limit("60/minute")
async def city_page(request: Request, city_slug: str):
city = _get_city_by_slug(city_slug)
if not city:
raise HTTPException(status_code=404, detail="City not found")
meta_tags = seo_generator.generate_meta_tags(
"city_page",
{
"city": city.get("name"),
"city_en": city.get("name_en"),
"venue_types": city.get("popular_venues", []),
},
)
breadcrumb = seo_generator.generate_schema_org(
"breadcrumb",
{
"items": [
{"name": "Home", "url": "/"},
{"name": city.get("name"), "url": f"/meetspot/{city_slug}"},
]
},
)
schema_list = _build_schema_list(
seo_generator.generate_schema_org("webapp", {}),
seo_generator.generate_schema_org("website", {"search_url": "/search"}),
seo_generator.generate_schema_org("organization", {}),
breadcrumb,
)
city_content = seo_generator.generate_city_content(city)
return templates.TemplateResponse(
"pages/city.html",
{
"request": request,
"meta_title": meta_tags["title"],
"meta_description": meta_tags["description"],
"meta_keywords": meta_tags["keywords"],
"canonical_url": f"https://meetspot-irq2.onrender.com/meetspot/{city_slug}",
"schema_jsonld": schema_list,
"breadcrumbs": [
{"name": "首页", "url": "/"},
{"name": city.get("name"), "url": f"/meetspot/{city_slug}"},
],
"city": city,
"city_content": city_content,
},
)
@router.get("/about", response_class=HTMLResponse)
@limiter.limit("30/minute")
async def about_page(request: Request):
meta_tags = seo_generator.generate_meta_tags("about", {})
schema_list = _build_schema_list(
seo_generator.generate_schema_org("organization", {}),
seo_generator.generate_schema_org(
"breadcrumb",
{
"items": [
{"name": "Home", "url": "/"},
{"name": "About", "url": "/about"},
]
},
)
)
return templates.TemplateResponse(
"pages/about.html",
{
"request": request,
"meta_title": meta_tags["title"],
"meta_description": meta_tags["description"],
"meta_keywords": meta_tags["keywords"],
"canonical_url": "https://meetspot-irq2.onrender.com/about",
"schema_jsonld": schema_list,
"breadcrumbs": [
{"name": "首页", "url": "/"},
{"name": "关于我们", "url": "/about"},
],
},
)
@router.get("/how-it-works", response_class=HTMLResponse)
@limiter.limit("30/minute")
async def how_it_works(request: Request):
meta_tags = seo_generator.generate_meta_tags("how_it_works", {})
how_to_schema = seo_generator.generate_schema_org(
"how_to",
{
"name": "使用MeetSpot AI Agent规划公平会面",
"description": "5步AI推理流程, 从输入地址到生成推荐, 5-30秒内完成。",
"total_time": "PT1M",
"steps": [
{
"name": "解析地址",
"text": "AI智能识别地址/地标/简称,'北大'自动转换为'北京市海淀区北京大学',校验经纬度。",
},
{
"name": "计算中心点",
"text": "使用球面几何Haversine公式计算地球曲面真实中点数学上对每个人公平。",
},
{
"name": "搜索周边场所",
"text": "在中心点周边搜索匹配场景的POI支持咖啡馆、餐厅、图书馆等12种场景主题。",
},
{
"name": "GPT-4o智能评分",
"text": "AI对候选场所进行多维度评分距离、评分、停车、环境、交通便利度。",
},
{
"name": "生成推荐",
"text": "综合排序输出最优推荐,包含地图、评分、导航链接,可直接分享给朋友。",
},
],
"tools": ["MeetSpot AI Agent", "AMap API", "GPT-4o"],
"supplies": ["参与者地址", "场景选择", "特殊需求(可选)"],
},
)
schema_list = _build_schema_list(
seo_generator.generate_schema_org("website", {"search_url": "/search"}),
seo_generator.generate_schema_org("organization", {}),
seo_generator.generate_schema_org(
"breadcrumb",
{
"items": [
{"name": "Home", "url": "/"},
{"name": "How it Works", "url": "/how-it-works"},
]
},
),
how_to_schema,
)
return templates.TemplateResponse(
"pages/how_it_works.html",
{
"request": request,
"meta_title": meta_tags["title"],
"meta_description": meta_tags["description"],
"meta_keywords": meta_tags["keywords"],
"canonical_url": "https://meetspot-irq2.onrender.com/how-it-works",
"schema_jsonld": schema_list,
"breadcrumbs": [
{"name": "首页", "url": "/"},
{"name": "使用指南", "url": "/how-it-works"},
],
},
)
@router.get("/faq", response_class=HTMLResponse)
@limiter.limit("30/minute")
async def faq_page(request: Request):
meta_tags = seo_generator.generate_meta_tags("faq", {})
faqs = [
{
"question": "MeetSpot 是什么?",
"answer": "MeetSpot聚点是一个智能会面地点推荐系统帮助多人找到最公平的聚会地点。无论是商务会谈、朋友聚餐还是学习讨论都能快速找到合适的场所。",
},
{
"question": "支持多少人一起查找?",
"answer": "支持 2-10 个参与者位置,系统会根据所有人的位置计算最佳中点。",
},
{
"question": "支持哪些城市?",
"answer": "目前覆盖北京、上海、广州、深圳、杭州等 350+ 城市,使用高德地图数据,持续扩展中。",
},
{
"question": "可以搜索哪些类型的场所?",
"answer": "支持咖啡馆、餐厅、图书馆、KTV、健身房、密室逃脱等多种场所类型还可以同时搜索多种类型'咖啡馆+餐厅')。",
},
{
"question": "如何保证推荐公平?",
"answer": "系统使用几何中心算法,确保每位参与者到聚会地点的距离都在合理范围内,没有人需要跑特别远。",
},
{
"question": "推荐结果如何排序?",
"answer": "基于评分、距离、用户需求的综合排序算法,优先推荐评分高、距离中心近、符合特殊需求的场所。",
},
{
"question": "可以输入简称吗?",
"answer": "支持!系统内置 60+ 大学简称映射,如'北大'会自动识别为'北京大学'。也支持输入地标名称如'国贸''东方明珠'等。",
},
{
"question": "是否免费?需要注册吗?",
"answer": "完全免费使用,无需注册,直接输入地址即可获得推荐结果。",
},
{
"question": "推荐速度如何?",
"answer": "AI Agent 会经历完整的5步推理流程解析地址 → 计算中心点 → 搜索周边 → GPT-4o智能评分 → 生成推荐。单场景5-8秒双场景8-12秒复杂Agent模式15-30秒。",
},
{
"question": "和高德地图有什么区别?",
"answer": "高德搜索'我附近'MeetSpot搜索'我们中间'。我们先用球面几何算出多人公平中点,再推荐那里的好店。这是高德/百度都没有的功能。",
},
{
"question": "AI Agent是什么意思",
"answer": "MeetSpot不是简单的搜索工具而是一个AI Agent。它有5步完整的推理链条使用GPT-4o进行多维度评分距离、评分、停车、环境你可以看到AI每一步是怎么'思考'的,完全透明可解释。",
},
{
"question": "如何反馈问题或建议?",
"answer": "欢迎通过 GitHub Issues 反馈问题或建议,也可以发送邮件至 Johnrobertdestiny@gmail.com。",
},
]
schema_list = _build_schema_list(
seo_generator.generate_schema_org("website", {"search_url": "/search"}),
seo_generator.generate_schema_org("organization", {}),
seo_generator.generate_schema_org("faq", {"faqs": faqs}),
seo_generator.generate_schema_org(
"breadcrumb",
{
"items": [
{"name": "Home", "url": "/"},
{"name": "FAQ", "url": "/faq"},
]
},
),
)
return templates.TemplateResponse(
"pages/faq.html",
{
"request": request,
"meta_title": meta_tags["title"],
"meta_description": meta_tags["description"],
"meta_keywords": meta_tags["keywords"],
"canonical_url": "https://meetspot-irq2.onrender.com/faq",
"schema_jsonld": schema_list,
"breadcrumbs": [
{"name": "首页", "url": "/"},
{"name": "常见问题", "url": "/faq"},
],
"faqs": faqs,
},
)
@router.api_route("/sitemap.xml", methods=["GET", "HEAD"])
async def sitemap():
base_url = "https://meetspot-irq2.onrender.com"
today = datetime.now().strftime("%Y-%m-%d")
urls = [
{"loc": "/", "priority": "1.0", "changefreq": "daily"},
{"loc": "/about", "priority": "0.8", "changefreq": "monthly"},
{"loc": "/faq", "priority": "0.8", "changefreq": "weekly"},
{"loc": "/how-it-works", "priority": "0.7", "changefreq": "monthly"},
]
city_urls = [
{
"loc": f"/meetspot/{city['slug']}",
"priority": "0.9",
"changefreq": "weekly",
}
for city in load_cities()
]
entries = []
for item in urls + city_urls:
entries.append(
f" <url>\n <loc>{base_url}{item['loc']}</loc>\n <lastmod>{today}</lastmod>\n <changefreq>{item['changefreq']}</changefreq>\n <priority>{item['priority']}</priority>\n </url>"
)
sitemap_xml = (
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
"<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n"
+ "\n".join(entries)
+ "\n</urlset>"
)
# Long cache with stale-while-revalidate to handle Render cold starts
# CDN can serve stale content while revalidating in background
return Response(
content=sitemap_xml,
media_type="application/xml",
headers={
"Cache-Control": "public, max-age=86400, stale-while-revalidate=604800",
"X-Robots-Tag": "noindex", # Sitemap itself shouldn't be indexed
},
)
@router.api_route("/robots.txt", methods=["GET", "HEAD"])
async def robots_txt():
today = datetime.now().strftime("%Y-%m-%d")
robots = f"""# MeetSpot Robots.txt\n# Generated: {today}\n\nUser-agent: *\nAllow: /\nCrawl-delay: 1\n\nDisallow: /admin/\nDisallow: /api/internal/\nDisallow: /*.json$\n\nSitemap: https://meetspot-irq2.onrender.com/sitemap.xml\n\nUser-agent: Googlebot\nAllow: /\n\nUser-agent: Baiduspider\nAllow: /\n\nUser-agent: GPTBot\nDisallow: /\n\nUser-agent: CCBot\nDisallow: /\n"""
# Long cache with stale-while-revalidate to handle Render cold starts
return Response(
content=robots,
media_type="text/plain",
headers={
"Cache-Control": "public, max-age=86400, stale-while-revalidate=604800",
},
)
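The sitemap handler above assembles its XML by string concatenation. The same document can be built, and guaranteed well-formed, with the standard library's `xml.etree.ElementTree`; a sketch with an illustrative base URL:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"


def build_sitemap(base_url: str, items: list[dict]) -> str:
    """Serialize sitemap entries into a well-formed urlset document."""
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for item in items:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = base_url + item["loc"]
        ET.SubElement(url, "changefreq").text = item["changefreq"]
        ET.SubElement(url, "priority").text = item["priority"]
    return ET.tostring(urlset, encoding="unicode", xml_declaration=True)


xml_doc = build_sitemap(
    "https://example.com",
    [{"loc": "/", "changefreq": "daily", "priority": "1.0"}],
)
```

The ElementTree route also escapes special characters in URLs automatically, which manual f-string templating does not.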


@@ -0,0 +1,423 @@
"""SEO content generation service.

Generates keywords, meta tags, structured data, and city content snippets.
Works with the Jinja2 templates to supply semantic context for SSR pages.
"""
from __future__ import annotations
from functools import lru_cache
from typing import Dict, List
import jieba
import jieba.analyse
class SEOContentGenerator:
    """Encapsulates SEO content generation logic."""
def __init__(self) -> None:
self.custom_words = [
"聚会地点",
"会面点",
"中点推荐",
"团队聚会",
"远程团队",
"咖啡馆",
"餐厅",
"图书馆",
"共享空间",
"北京",
"上海",
"广州",
"深圳",
"杭州",
"成都",
"meeting location",
"midpoint",
"group meeting",
]
for word in self.custom_words:
jieba.add_word(word)
def extract_keywords(self, text: str, top_k: int = 10) -> List[str]:
        """Extract keywords via TF-IDF."""
if not text:
return []
return jieba.analyse.extract_tags(
text,
topK=top_k,
withWeight=False,
allowPOS=("n", "nr", "ns", "nt", "nw", "nz", "v", "vn"),
)
def generate_meta_tags(self, page_type: str, data: Dict) -> Dict[str, str]:
        """Generate meta tags for the given page type."""
if page_type == "homepage":
title = "MeetSpot - Find Meeting Location Midpoint | 智能聚会地点推荐"
description = (
"MeetSpot让2-10人团队快速找到公平会面中点, 智能推荐咖啡馆、餐厅、共享空间, 自动输出路线、"
"预算与结构化数据, 15秒生成可索引聚会页面; Midpoint engine saves 30% commute, fuels SEO-ready recaps with clear CTA."
)
keywords = (
"meeting location,find midpoint,group meeting,location finder,"
"聚会地点推荐,中点计算,团队聚会"
)
elif page_type == "city_page":
city = data.get("city", "")
city_en = data.get("city_en", "")
venue_types = data.get("venue_types", [])
venue_snippet = "".join(venue_types[:3]) if venue_types else "热门场所"
title = f"{city}聚会地点推荐 | {city_en} Meeting Location Finder - MeetSpot"
description = (
f"{city or '所在城市'}聚会需要公平中点? MeetSpot根据2-10人轨迹计算平衡路线, 推荐{venue_snippet}等场所, "
"输出中文/英文场地文案、预算与交通信息, 15秒生成可索引城市着陆页; Local insights boost trust, shareable cards unlock faster decisions."
)
keywords = f"{city},{city_en},meeting location,{venue_snippet},midpoint"
elif page_type == "about":
title = "About MeetSpot - How We Find Perfect Meeting Locations | 关于我们"
description = (
"MeetSpot团队由地图算法、内容运营与产品负责人组成, 公开使命、技术栈、治理方式, 分享用户案例、AMAP合规、安全策略与开源路线图; "
"Learn how we guarantee equitable experiences backed by ongoing UX research。"
)
keywords = "about meetspot,meeting algorithm,location technology,关于,聚会算法"
elif page_type == "faq":
title = "FAQ - Meeting Location Questions Answered | 常见问题 - MeetSpot"
description = (
"覆盖聚会地点、费用、功能等核心提问, 提供结构化答案, 支持Google FAQ Schema, 让用户与搜索引擎获得清晰指导, "
"并附上联系入口与下一步CTA, FAQ hub helps planners resolve objections faster and improve conversions。"
)
keywords = "faq,meeting questions,location help,常见问题,使用指南"
elif page_type == "how_it_works":
title = "How MeetSpot Works | 智能聚会地点中点计算流程"
description = (
"4步流程涵盖收集地址、平衡权重、筛选场地与导出SEO文案, 附带动图、清单和风控提示, 指导团队15分钟内发布可索引页面; "
"Learn safeguards, KPIs, stakeholder handoffs, and post-launch QA behind MeetSpot。"
)
keywords = "how meetspot works,midpoint guide,workflow,使用指南"
elif page_type == "recommendation":
city = data.get("city", "未知城市")
keyword = data.get("keyword", "聚会地点")
count = data.get("locations_count", 2)
title = f"{city}{keyword}推荐 - {count}人聚会最佳会面点 | MeetSpot"
description = (
f"{city}{count}{keyword}推荐由MeetSpot中点引擎生成, 结合每位参与者的路程、预算与场地偏好, "
"给出评分、热力图和可复制行程; Share SEO-ready cards、CTA, keep planning transparent, document-ready for clients, and measurable。"
)
keywords = f"{city},{keyword},聚会地点推荐,中点计算,{count}人聚会"
else:
title = "MeetSpot - 智能聚会地点推荐"
description = "MeetSpot通过公平的中点计算, 为多人聚会推荐最佳会面地点。"
keywords = "meetspot,meeting location,聚会地点"
return {
"title": title[:60],
"description": description[:160],
"keywords": keywords,
}
def generate_schema_org(self, page_type: str, data: Dict) -> Dict:
        """Generate Schema.org structured data."""
base_url = "https://meetspot-irq2.onrender.com"
if page_type == "webapp":
return {
"@context": "https://schema.org",
"@type": "WebApplication",
"name": "MeetSpot",
"description": "Find the perfect meeting location midpoint for groups",
"applicationCategory": "UtilitiesApplication",
"operatingSystem": "Web",
"offers": {
"@type": "Offer",
"price": "0",
"priceCurrency": "USD",
},
"aggregateRating": {
"@type": "AggregateRating",
"ratingValue": "4.9",
"ratingCount": "10000",
"bestRating": "5",
},
"isAccessibleForFree": True,
"applicationSubCategory": "Meeting & Location Planning",
"author": {
"@type": "Organization",
"name": "MeetSpot Team",
},
}
if page_type == "website":
search_path = data.get("search_url", "/search")
return {
"@context": "https://schema.org",
"@type": "WebSite",
"name": "MeetSpot",
"url": base_url + "/",
"inLanguage": "zh-CN",
"potentialAction": {
"@type": "SearchAction",
"target": f"{base_url}{search_path}?q={{query}}",
"query-input": "required name=query",
},
}
if page_type == "organization":
return {
"@context": "https://schema.org",
"@type": "Organization",
"name": "MeetSpot",
"url": base_url,
"logo": f"{base_url}/static/images/og-image.png",
"foundingDate": "2023-08-01",
"contactPoint": [
{
"@type": "ContactPoint",
"contactType": "customer support",
"email": "hello@meetspot.app",
"availableLanguage": ["zh-CN", "en"],
}
],
"sameAs": [
"https://github.com/calderbuild/MeetSpot",
"https://jasonrobert.me/",
],
}
if page_type == "local_business":
venue = data
return {
"@context": "https://schema.org",
"@type": "LocalBusiness",
"name": venue.get("name"),
"address": {
"@type": "PostalAddress",
"streetAddress": venue.get("address"),
"addressLocality": venue.get("city"),
"addressCountry": "CN",
},
"geo": {
"@type": "GeoCoordinates",
"latitude": venue.get("lat"),
"longitude": venue.get("lng"),
},
"aggregateRating": {
"@type": "AggregateRating",
"ratingValue": venue.get("rating", 4.5),
"reviewCount": venue.get("review_count", 100),
},
"priceRange": venue.get("price_range", "$$"),
}
if page_type == "faq":
faqs = data.get("faqs", [])
return {
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{
"@type": "Question",
"name": faq["question"],
"acceptedAnswer": {
"@type": "Answer",
"text": faq["answer"],
},
}
for faq in faqs
],
}
if page_type == "how_to":
steps = data.get("steps", [])
if not steps:
return {}
return {
"@context": "https://schema.org",
"@type": "HowTo",
"name": data.get("name", "如何使用MeetSpot"),
"description": data.get(
"description",
"Step-by-step guide to plan a fair meetup with MeetSpot.",
),
"totalTime": data.get("total_time", "PT15M"),
"inLanguage": "zh-CN",
"step": [
{
"@type": "HowToStep",
"name": step["name"],
"text": step["text"],
}
for step in steps
],
"supply": data.get("supplies", ["参与者地址", "交通方式偏好"]),
"tool": data.get("tools", ["MeetSpot Dashboard"]),
}
if page_type == "breadcrumb":
items = data.get("items", [])
return {
"@context": "https://schema.org",
"@type": "BreadcrumbList",
"itemListElement": [
{
"@type": "ListItem",
"position": idx + 1,
"name": item["name"],
"item": f"{base_url}{item['url']}",
}
for idx, item in enumerate(items)
],
}
return {}
def generate_city_content(self, city_data: Dict) -> Dict[str, str]:
        """Generate city page content blocks from the rich city data."""
city = city_data.get("name", "")
city_en = city_data.get("name_en", "")
tagline = city_data.get("tagline", "")
description = city_data.get("description", "")
landmarks = city_data.get("landmarks", [])
university_clusters = city_data.get("university_clusters", [])
business_districts = city_data.get("business_districts", [])
metro_lines = city_data.get("metro_lines", 0)
use_cases = city_data.get("use_cases", [])
local_tips = city_data.get("local_tips", "")
popular_venues = city_data.get("popular_venues", [])
        # Landmark tags
landmarks_html = "".join(
f'<span class="tag tag-landmark">{lm}</span>' for lm in landmarks[:5]
) if landmarks else ""
        # Business-district tags
districts_html = "".join(
f'<span class="tag tag-district">{d}</span>' for d in business_districts[:4]
) if business_districts else ""
        # University tags
universities_html = "".join(
f'<span class="tag tag-university">{u}</span>' for u in university_clusters[:4]
) if university_clusters else ""
        # Use-case cards
use_cases_html = ""
if use_cases:
cases_items = ""
for uc in use_cases[:3]:
scenario = uc.get("scenario", "")
example = uc.get("example", "")
cases_items += f'''
<div class="use-case-card">
<h4>{scenario}</h4>
<p>{example}</p>
</div>'''
use_cases_html = f'''
<section class="use-cases">
<h2>{city}真实使用场景</h2>
<div class="use-cases-grid">{cases_items}</div>
</section>'''
        # Venue types
venues_html = "".join(popular_venues[:4]) if popular_venues else "咖啡馆、餐厅"
content = {
"intro": f'''
<div class="city-hero">
<h1>{city}聚会地点推荐 - {city_en}</h1>
<p class="tagline">{tagline}</p>
<p class="lead">{description}</p>
</div>''',
"features": f'''
<section class="city-features">
<h2>为什么在{city}使用MeetSpot</h2>
<div class="features-grid">
<div class="feature-card">
<div class="feature-icon">🚇</div>
<h3>{metro_lines}条地铁线路</h3>
<p>{city}地铁网络发达MeetSpot优先推荐地铁站周边的聚会场所</p>
</div>
<div class="feature-card">
<div class="feature-icon">🎯</div>
<h3>智能中点计算</h3>
<p>球面几何算法确保每位参与者通勤距离公平均衡</p>
</div>
<div class="feature-card">
<div class="feature-icon">📍</div>
<h3>本地精选场所</h3>
<p>覆盖{city}{venues_html}等热门类型,高评分场所优先推荐</p>
</div>
</div>
</section>''',
"landmarks": f'''
<section class="city-landmarks">
<h2>{city}热门聚会区域</h2>
<div class="tags-section">
<div class="tags-group">
<h3>地标商圈</h3>
<div class="tags">{landmarks_html}</div>
</div>
<div class="tags-group">
<h3>商务中心</h3>
<div class="tags">{districts_html}</div>
</div>
<div class="tags-group">
<h3>高校聚集区</h3>
<div class="tags">{universities_html}</div>
</div>
</div>
</section>''' if landmarks or business_districts or university_clusters else "",
"use_cases": use_cases_html,
"local_tips": f'''
<section class="local-tips">
<h2>{city}聚会小贴士</h2>
<div class="tip-card">
<div class="tip-icon">💡</div>
<p>{local_tips}</p>
</div>
</section>''' if local_tips else "",
"how_it_works": f'''
<section class="how-it-works">
<h2>如何在{city}找到最佳聚会地点?</h2>
<div class="steps">
<div class="step">
<span class="step-number">1</span>
<div class="step-content">
<h4>输入参与者位置</h4>
<p>支持输入{city}任意地址、地标或高校名称(如{university_clusters[0] if university_clusters else "当地高校"}</p>
</div>
</div>
<div class="step">
<span class="step-number">2</span>
<div class="step-content">
<h4>选择场所类型</h4>
<p>根据聚会目的选择{venues_html}等场景</p>
</div>
</div>
<div class="step">
<span class="step-number">3</span>
<div class="step-content">
<h4>获取智能推荐</h4>
<p>系统自动计算地理中点,推荐{landmarks[0] if landmarks else "市中心"}等区域的高评分场所</p>
</div>
</div>
</div>
</section>''',
"cta": f'''
<section class="cta-section">
<h2>开始规划{city}聚会</h2>
<p>无需注册,输入地址即可获取推荐</p>
<a href="/" class="cta-button">立即使用 MeetSpot</a>
</section>''',
}
        # Approximate word count (alphanumeric characters only)
total_text = "".join(str(v) for v in content.values())
text_only = "".join(ch for ch in total_text if ch.isalnum())
content["word_count"] = len(text_only)
return content
def generate_city_content_simple(self, city: str) -> Dict[str, str]:
        """Backwards-compatible API: generate basic content from just a city name."""
return self.generate_city_content({"name": city, "name_en": city})
seo_content_generator = SEOContentGenerator()
"""Module-level singleton reused by the routers."""
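`generate_schema_org("faq", ...)` emits a Google-compatible FAQPage object; serialized into a page it looks like the snippet below. This is a standalone sketch with a sample question, mirroring (not importing) the generator's structure:

```python
import json


def faq_jsonld(faqs: list[dict]) -> str:
    """Render FAQ items as a Schema.org FAQPage JSON-LD string."""
    schema = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": faq["question"],
                "acceptedAnswer": {"@type": "Answer", "text": faq["answer"]},
            }
            for faq in faqs
        ],
    }
    # ensure_ascii=False keeps Chinese question text readable in page source
    return json.dumps(schema, ensure_ascii=False)


snippet = faq_jsonld([{"question": "Is it free?", "answer": "Yes."}])
```

In a template this string would be embedded inside a `<script type="application/ld+json">` tag.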

MeetSpot/app/__init__.py

@@ -0,0 +1,9 @@
# Python version check: 3.11-3.13
import sys
if sys.version_info[:2] < (3, 11) or sys.version_info[:2] > (3, 13):
    print(
        "Warning: Unsupported Python version {ver}, please use 3.11-3.13".format(
            ver=".".join(map(str, sys.version_info[:3]))
        )
    )
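Tuple comparison in Python is element-wise, and a longer tuple compares greater than a shorter tuple that is its prefix, so comparing the full five-element `sys.version_info` against `(3, 13)` flags every released 3.13.x as "too new". Comparing only `major.minor` avoids that edge case:

```python
# A concrete 5-tuple shaped like sys.version_info on Python 3.13.1
fake_version = (3, 13, 1, "final", 0)

# Full-tuple comparison: the prefix (3, 13) compares less than (3, 13, 1, ...)
assert fake_version > (3, 13)

# Comparing major.minor only gives the intended result
assert not (fake_version[:2] > (3, 13))
assert (3, 11) <= fake_version[:2] <= (3, 13)
```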


@@ -0,0 +1,20 @@
"""MeetSpot agent module - intelligent recommendation agents built on the OpenManus architecture."""
from app.agent.base import BaseAgent
from app.agent.meetspot_agent import MeetSpotAgent, create_meetspot_agent
from app.agent.tools import (
CalculateCenterTool,
GeocodeTool,
GenerateRecommendationTool,
SearchPOITool,
)
__all__ = [
"BaseAgent",
"MeetSpotAgent",
"create_meetspot_agent",
"GeocodeTool",
"CalculateCenterTool",
"SearchPOITool",
"GenerateRecommendationTool",
]

MeetSpot/app/agent/base.py

@@ -0,0 +1,171 @@
"""Agent base class - modeled on the OpenManus BaseAgent design."""
from abc import ABC, abstractmethod
from contextlib import asynccontextmanager
from typing import List, Optional, Dict, Any
from pydantic import BaseModel, Field, model_validator
from app.llm import LLM
from app.logger import logger
from app.schema import AgentState, Memory, Message, ROLE_TYPE
class BaseAgent(BaseModel, ABC):
    """Base agent class.

    Provides state management, memory management, and the core execution loop.
    Subclasses implement step() to define concrete behavior.
    """
    # Core attributes
    name: str = Field(default="BaseAgent", description="Agent name")
    description: Optional[str] = Field(default=None, description="Agent description")
    # Prompts
    system_prompt: Optional[str] = Field(default=None, description="System prompt")
    next_step_prompt: Optional[str] = Field(default=None, description="Prompt guiding the next step")
    # Dependencies
    llm: Optional[LLM] = Field(default=None, description="LLM instance")
    memory: Memory = Field(default_factory=Memory, description="Agent memory")
    state: AgentState = Field(default=AgentState.IDLE, description="Current state")
    # Execution control
    max_steps: int = Field(default=10, description="Maximum number of steps")
    current_step: int = Field(default=0, description="Current step count")
    # Duplicate-detection threshold
    duplicate_threshold: int = 2
class Config:
arbitrary_types_allowed = True
extra = "allow"
@model_validator(mode="after")
def initialize_agent(self) -> "BaseAgent":
        """Initialize the agent."""
        if self.llm is None:
            try:
                self.llm = LLM()
            except Exception as e:
                logger.warning(f"Failed to initialize LLM: {e}")
if not isinstance(self.memory, Memory):
self.memory = Memory()
return self
@asynccontextmanager
async def state_context(self, new_state: AgentState):
        """Context manager for safe state transitions."""
        if not isinstance(new_state, AgentState):
            raise ValueError(f"Invalid state: {new_state}")
previous_state = self.state
self.state = new_state
try:
yield
except Exception as e:
self.state = AgentState.ERROR
raise e
finally:
self.state = previous_state
def update_memory(
self,
role: ROLE_TYPE,
content: str,
base64_image: Optional[str] = None,
**kwargs,
) -> None:
        """Append a message to memory."""
message_map = {
"user": Message.user_message,
"system": Message.system_message,
"assistant": Message.assistant_message,
"tool": lambda content, **kw: Message.tool_message(content, **kw),
}
if role not in message_map:
            raise ValueError(f"Unsupported message role: {role}")
if role == "tool":
self.memory.add_message(message_map[role](content, **kwargs))
else:
self.memory.add_message(message_map[role](content, base64_image=base64_image))
async def run(self, request: Optional[str] = None) -> str:
        """Run the agent's main loop.

        Args:
            request: Optional initial user request.

        Returns:
            A summary of the execution results.
        """
if self.state != AgentState.IDLE:
            raise RuntimeError(f"Cannot start agent from state {self.state}")
if request:
self.update_memory("user", request)
results: List[str] = []
async with self.state_context(AgentState.RUNNING):
while (
self.current_step < self.max_steps
and self.state != AgentState.FINISHED
):
self.current_step += 1
                logger.info(f"Executing step {self.current_step}/{self.max_steps}")
step_result = await self.step()
                # Detect a stuck state
if self.is_stuck():
self.handle_stuck_state()
results.append(f"Step {self.current_step}: {step_result}")
if self.current_step >= self.max_steps:
self.current_step = 0
self.state = AgentState.IDLE
                results.append(f"Terminated: reached max steps ({self.max_steps})")
        return "\n".join(results) if results else "No steps executed"
@abstractmethod
async def step(self) -> str:
        """Execute a single step - subclasses must implement."""
pass
    def handle_stuck_state(self):
        """Handle a detected stuck state."""
        stuck_prompt = "检测到重复响应。请考虑新策略,避免重复已尝试过的无效路径。"
        self.next_step_prompt = f"{stuck_prompt}\n{self.next_step_prompt or ''}"
        logger.warning("Agent appears stuck; prepended a recovery hint to the next-step prompt")
def is_stuck(self) -> bool:
        """Detect whether the agent is stuck in a loop."""
if len(self.memory.messages) < 2:
return False
last_message = self.memory.messages[-1]
if not last_message.content:
return False
        # Count how many times the same content has appeared
duplicate_count = sum(
1
for msg in reversed(self.memory.messages[:-1])
if msg.role == "assistant" and msg.content == last_message.content
)
return duplicate_count >= self.duplicate_threshold
@property
def messages(self) -> List[Message]:
        """The messages currently held in memory."""
return self.memory.messages
@messages.setter
def messages(self, value: List[Message]):
        """Replace the messages held in memory."""
self.memory.messages = value


@@ -0,0 +1,361 @@
"""MeetSpotAgent - intelligent meeting-spot recommendation agent.

A ReAct-style agent that completes location recommendations through tool calls.
"""
import json
from typing import Any, Dict, List, Optional
from pydantic import Field
from app.agent.base import BaseAgent
from app.agent.tools import (
CalculateCenterTool,
GeocodeTool,
GenerateRecommendationTool,
SearchPOITool,
)
from app.llm import LLM
from app.logger import logger
from app.schema import AgentState, Message
from app.tool.tool_collection import ToolCollection
SYSTEM_PROMPT = """你是 MeetSpot 智能会面助手,帮助用户找到最佳会面地点。
## 你的能力
你可以使用以下工具来完成任务:
1. **geocode** - 地理编码
- 将地址转换为经纬度坐标
- 支持大学简称(北大、清华)、地标、商圈等
- 返回坐标和格式化地址
2. **calculate_center** - 计算中心点
- 计算多个位置的几何中心
- 作为最佳会面位置的参考点
- 使用球面几何确保精确
3. **search_poi** - 搜索场所
- 在中心点附近搜索各类场所
- 支持咖啡馆、餐厅、图书馆、健身房等
- 返回名称、地址、评分、距离等
4. **generate_recommendation** - 生成推荐
- 分析搜索结果
- 根据评分、距离、用户需求排序
- 生成个性化推荐理由
## 工作流程
请按以下步骤执行:
1. **理解任务** - 分析用户提供的位置和需求
2. **地理编码** - 依次对每个地址使用 geocode 获取坐标
3. **计算中心** - 使用 calculate_center 计算最佳会面点
4. **搜索场所** - 使用 search_poi 在中心点附近搜索
5. **生成推荐** - 使用 generate_recommendation 生成最终推荐
## 输出要求
- 推荐 3-5 个最佳场所
- 为每个场所说明推荐理由(距离、评分、特色)
- 考虑用户的特殊需求(停车、安静、商务等)
- 使用中文回复
## 注意事项
- 确保在调用工具前已获取所有必要参数
- 如果地址解析失败,提供具体的错误信息和建议
- 如果搜索无结果,尝试调整搜索关键词或扩大半径
"""
class MeetSpotAgent(BaseAgent):
    """MeetSpot intelligent meeting-spot recommendation agent.

    A ReAct-style agent that completes recommendation tasks through a
    think() -> act() loop.
    """
name: str = "MeetSpotAgent"
description: str = "智能会面地点推荐助手"
system_prompt: str = SYSTEM_PROMPT
next_step_prompt: str = "请继续执行下一步,或者如果已完成所有工具调用,请生成最终推荐结果。"
    max_steps: int = 15  # Allow extra steps for complex tasks
    # Tool collection
    available_tools: ToolCollection = Field(default=None)
    # Pending tool calls from the latest think() pass
    tool_calls: List[Any] = Field(default_factory=list)
    # Intermediate results
    geocode_results: List[Dict] = Field(default_factory=list)
center_point: Optional[Dict] = None
search_results: List[Dict] = Field(default_factory=list)
class Config:
arbitrary_types_allowed = True
extra = "allow"
def __init__(self, **data):
super().__init__(**data)
        # Initialize the tool collection
if self.available_tools is None:
self.available_tools = ToolCollection(
GeocodeTool(),
CalculateCenterTool(),
SearchPOITool(),
GenerateRecommendationTool()
)
    async def step(self) -> str:
        """Execute one step: think + act.

        Returns:
            A description of the step's result.
        """
        # Think: decide the next action
        should_continue = await self.think()
        if not should_continue:
            self.state = AgentState.FINISHED
            return "Task complete"
        # Act: execute the tool calls
        result = await self.act()
        return result
    async def think(self) -> bool:
        """Thinking phase - decide the next action.

        Uses the LLM to analyze the current state and decide whether a tool
        call is needed, and which tool to call.

        Returns:
            Whether execution should continue.
        """
        # Build the message list
        messages = self.memory.messages.copy()
        # Nudge the model toward the next step
        if self.next_step_prompt and self.current_step > 1:
            messages.append(Message.user_message(self.next_step_prompt))
        # Ask the LLM for a response
response = await self.llm.ask_tool(
messages=messages,
system_msgs=[Message.system_message(self.system_prompt)],
tools=self.available_tools.to_params(),
tool_choice="auto"
)
if response is None:
            logger.warning("LLM returned an empty response")
return False
        # Extract the tool calls and content
        self.tool_calls = response.tool_calls or []
        content = response.content or ""
        logger.info(f"Agent thought: {content[:200]}..." if len(content) > 200 else f"Agent thought: {content}")
if self.tool_calls:
tool_names = [tc.function.name for tc in self.tool_calls]
            logger.info(f"Selected tools: {tool_names}")
        # Persist the assistant message to memory
        if self.tool_calls:
            # Message carrying tool calls
tool_calls_data = [
{
"id": tc.id,
"type": "function",
"function": {
"name": tc.function.name,
"arguments": tc.function.arguments
}
}
for tc in self.tool_calls
]
self.memory.add_message(Message(
role="assistant",
content=content,
tool_calls=tool_calls_data
))
        elif content:
            # Plain-text message (possibly the final reply)
            self.memory.add_message(Message.assistant_message(content))
            # No tool calls plus substantial content likely means the final answer
            if "推荐" in content and len(content) > 100:
                return False  # End the loop
return bool(self.tool_calls) or bool(content)
    async def act(self) -> str:
        """Action phase - execute the chosen tool calls.

        Runs the tool calls selected during think() and appends the results to
        memory.

        Returns:
            A description of the tool results.
        """
        if not self.tool_calls:
            # No tool calls; fall back to the content of the last message
            return self.memory.messages[-1].content or "No action"
results = []
for call in self.tool_calls:
tool_name = call.function.name
tool_args = call.function.arguments
            try:
                # Parse the arguments
                args = json.loads(tool_args) if isinstance(tool_args, str) else tool_args
                # Execute the tool
                logger.info(f"Executing tool: {tool_name}, args: {args}")
                result = await self.available_tools.execute(name=tool_name, tool_input=args)
                # Persist intermediate results
                self._save_intermediate_result(tool_name, result, args)
                # Append the tool result to memory
                result_str = str(result)
                self.memory.add_message(Message.tool_message(
                    content=result_str,
                    tool_call_id=call.id,
                    name=tool_name
                ))
                logger.info(f"Tool {tool_name} finished")
                results.append(f"{tool_name}: ok")
            except Exception as e:
                error_msg = f"Tool execution failed: {str(e)}"
                logger.error(f"{tool_name} {error_msg}")
                # Record the error in memory
                self.memory.add_message(Message.tool_message(
                    content=error_msg,
                    tool_call_id=call.id,
                    name=tool_name
                ))
                results.append(f"{tool_name}: failed - {str(e)}")
return " | ".join(results)
def _save_intermediate_result(self, tool_name: str, result: Any, args: Dict) -> None:
        """Persist a tool call's intermediate result.

        Args:
            tool_name: Tool name.
            result: Tool execution result.
            args: Tool arguments.
        """
        try:
            # Parse the result payload
if hasattr(result, 'output') and result.output:
data = json.loads(result.output) if isinstance(result.output, str) else result.output
else:
return
if tool_name == "geocode" and data:
self.geocode_results.append({
"address": args.get("address", ""),
"lng": data.get("lng"),
"lat": data.get("lat"),
"formatted_address": data.get("formatted_address", "")
})
elif tool_name == "calculate_center" and data:
self.center_point = data.get("center")
elif tool_name == "search_poi" and data:
places = data.get("places", [])
self.search_results.extend(places)
except Exception as e:
            logger.debug(f"Failed to save intermediate result: {e}")
async def recommend(
self,
locations: List[str],
keywords: str = "咖啡馆",
requirements: str = ""
) -> Dict:
"""Run the recommendation task.
This is the agent's main entry point: it takes user input and returns the recommendation results.
Args:
locations: List of participant locations
keywords: Search keyword (venue type)
requirements: Special user requirements
Returns:
Dict containing the recommendation results.
"""
# Reset state
self.geocode_results = []
self.center_point = None
self.search_results = []
self.current_step = 0
self.state = AgentState.IDLE
self.memory.clear()
# Build the task description (join locations with a separator so they stay readable in the prompt)
locations_str = "、".join(locations)
task = f"""请帮我找到适合会面的地点:
**参与者位置**{locations_str}
**想找的场所类型**{keywords}
**特殊需求**{requirements or "无特殊需求"}
请按照工作流程执行:
1. 先用 geocode 工具获取每个位置的坐标
2. 用 calculate_center 计算中心点
3. 用 search_poi 搜索附近的 {keywords}
4. 用 generate_recommendation 生成推荐
最后请用中文总结推荐结果。"""
# Run the task
result = await self.run(task)
# Format the return value
return self._format_result(result)
def _format_result(self, raw_result: str) -> Dict:
"""Format the final result.
Args:
raw_result: Raw output of the agent run
Returns:
Formatted result dict.
"""
# Use the last assistant message as the final recommendation
final_recommendation = ""
for msg in reversed(self.memory.messages):
if msg.role == "assistant" and msg.content:
final_recommendation = msg.content
break
return {
"success": self.state == AgentState.IDLE,  # IDLE means the run completed normally
"recommendation": final_recommendation,
"geocode_results": self.geocode_results,
"center_point": self.center_point,
"search_results": self.search_results[:10],  # Cap the number returned
"steps_executed": self.current_step,
"raw_output": raw_result
}
# Factory function for creating the default agent instance
def create_meetspot_agent() -> MeetSpotAgent:
"""Create a MeetSpotAgent instance.
Returns:
A configured MeetSpotAgent instance.
"""
return MeetSpotAgent()
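The per-call error isolation in `act` above can be reduced to a small standalone sketch. The `ToolCall` dataclass and plain-dict registry here are hypothetical stand-ins for `self.available_tools`; only the dispatch/try-except/summary shape mirrors the method:

```python
import asyncio
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class ToolCall:
    name: str
    args: Dict[str, Any]

async def act(calls: List[ToolCall], registry: Dict[str, Callable]) -> str:
    # Execute each call in turn; a failure in one tool does not abort the rest.
    results = []
    for call in calls:
        try:
            await registry[call.name](**call.args)
            results.append(f"{call.name}: ok")
        except Exception as e:
            results.append(f"{call.name}: failed - {e}")
    return " | ".join(results)

async def geocode(address: str) -> dict:
    # Hypothetical stand-in for a real tool.
    return {"address": address, "lng": 116.4, "lat": 39.9}

summary = asyncio.run(act([ToolCall("geocode", {"address": "北大"})], {"geocode": geocode}))
```

A missing tool name surfaces as a `failed` entry in the joined summary rather than an exception, matching the agent's behavior.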

MeetSpot/app/agent/tools.py Normal file

@@ -0,0 +1,514 @@
"""MeetSpot agent toolset - wraps the core features of the recommendation system"""
import json
from typing import Any, Dict, List, Optional
from pydantic import Field
from app.tool.base import BaseTool, ToolResult
from app.logger import logger
class GeocodeTool(BaseTool):
"""Geocoding tool - converts an address into longitude/latitude coordinates"""
name: str = "geocode"
description: str = """将地址或地点名称转换为经纬度坐标。
支持各种地址格式:
- 完整地址:'北京市海淀区中关村大街1号'
- 大学简称:'北大''清华''复旦'(自动扩展为完整地址)
- 知名地标:'天安门''外滩''广州塔'
- 商圈区域:'三里屯''王府井'
返回地址的经纬度坐标和格式化地址。"""
parameters: dict = {
"type": "object",
"properties": {
"address": {
"type": "string",
"description": "地址或地点名称,如'北京大学''上海市浦东新区陆家嘴'"
}
},
"required": ["address"]
}
class Config:
arbitrary_types_allowed = True
def _get_recommender(self):
"""Lazily load the recommender and make sure the API key is set"""
if not hasattr(self, '_cached_recommender'):
from app.tool.meetspot_recommender import CafeRecommender
from app.config import config
recommender = CafeRecommender()
# Make sure the API key is set
if hasattr(config, 'amap') and config.amap and hasattr(config.amap, 'api_key'):
recommender.api_key = config.amap.api_key
object.__setattr__(self, '_cached_recommender', recommender)
return self._cached_recommender
async def execute(self, address: str) -> ToolResult:
"""Run geocoding"""
try:
recommender = self._get_recommender()
result = await recommender._geocode(address)
if result:
location = result.get("location", "")
lng, lat = location.split(",") if location else (None, None)
return BaseTool.success_response({
"address": address,
"formatted_address": result.get("formatted_address", ""),
"location": location,
"lng": float(lng) if lng else None,
"lat": float(lat) if lat else None,
"city": result.get("city", ""),
"district": result.get("district", "")
})
return BaseTool.fail_response(f"无法解析地址: {address}")
except Exception as e:
logger.error(f"地理编码失败: {e}")
return BaseTool.fail_response(f"地理编码错误: {str(e)}")
class CalculateCenterTool(BaseTool):
"""Smart center-point tool - computes the best meeting point for multiple locations.
Uses a smart algorithm that jointly considers:
- POI density: whether enough target venues exist nearby
- Transit convenience: proximity to metro/bus stops
- Fairness: whether distances are balanced across all participants
"""
name: str = "calculate_center"
description: str = """智能计算最佳会面中心点。
不同于简单的几何中心,本工具会:
1. 在几何中心周围生成多个候选点
2. 评估每个候选点的 POI 密度、交通便利性和公平性
3. 返回综合得分最高的点作为最佳会面位置
这样可以避免中心点落在河流、荒地等不适合的位置。"""
parameters: dict = {
"type": "object",
"properties": {
"coordinates": {
"type": "array",
"description": "坐标点列表,每个元素包含 lng经度、lat纬度和可选的 name名称",
"items": {
"type": "object",
"properties": {
"lng": {"type": "number", "description": "经度"},
"lat": {"type": "number", "description": "纬度"},
"name": {"type": "string", "description": "位置名称(可选)"}
},
"required": ["lng", "lat"]
}
},
"keywords": {
"type": "string",
"description": "搜索的场所类型,如'咖啡馆''餐厅',用于评估 POI 密度",
"default": "咖啡馆"
},
"use_smart_algorithm": {
"type": "boolean",
"description": "是否使用智能算法(考虑 POI 密度和交通),默认 true",
"default": True
}
},
"required": ["coordinates"]
}
class Config:
arbitrary_types_allowed = True
def _get_recommender(self):
"""Lazily load the recommender and make sure the API key is set"""
if not hasattr(self, '_cached_recommender'):
from app.tool.meetspot_recommender import CafeRecommender
from app.config import config
recommender = CafeRecommender()
if hasattr(config, 'amap') and config.amap and hasattr(config.amap, 'api_key'):
recommender.api_key = config.amap.api_key
object.__setattr__(self, '_cached_recommender', recommender)
return self._cached_recommender
async def execute(
self,
coordinates: List[Dict],
keywords: str = "咖啡馆",
use_smart_algorithm: bool = True
) -> ToolResult:
"""Compute the best center point"""
try:
if not coordinates or len(coordinates) < 2:
return BaseTool.fail_response("至少需要2个坐标点来计算中心")
recommender = self._get_recommender()
# Convert to a list of (lng, lat) tuples
coord_tuples = [(c["lng"], c["lat"]) for c in coordinates]
if use_smart_algorithm:
# Use the smart center-point algorithm
center, evaluation_details = await recommender._calculate_smart_center(
coord_tuples, keywords
)
logger.info(f"智能中心点算法完成,最优中心: {center}")
else:
# Use the simple geometric center
center = recommender._calculate_center_point(coord_tuples)
evaluation_details = {"algorithm": "geometric_center"}
# Compute each point's distance to the center
distances = []
for c in coordinates:
dist = recommender._calculate_distance(center, (c["lng"], c["lat"]))
distances.append({
"name": c.get("name", f"({c['lng']:.4f}, {c['lat']:.4f})"),
"distance_to_center": round(dist, 0)
})
max_dist = max(d["distance_to_center"] for d in distances)
min_dist = min(d["distance_to_center"] for d in distances)
result = {
"center": {
"lng": round(center[0], 6),
"lat": round(center[1], 6)
},
"algorithm": "smart" if use_smart_algorithm else "geometric",
"input_count": len(coordinates),
"distances": distances,
"max_distance": max_dist,
"fairness_score": round(100 - (max_dist - min_dist) / 100, 1)
}
# Include the smart algorithm's evaluation details
if use_smart_algorithm and evaluation_details:
result["evaluation"] = {
"geo_center": evaluation_details.get("geo_center"),
"best_score": evaluation_details.get("best_score"),
"top_candidates": len(evaluation_details.get("all_candidates", []))
}
return BaseTool.success_response(result)
except Exception as e:
logger.error(f"计算中心点失败: {e}")
return BaseTool.fail_response(f"计算中心点错误: {str(e)}")
class SearchPOITool(BaseTool):
"""POI search tool - searches for venues around a given location"""
name: str = "search_poi"
description: str = """在指定中心点周围搜索各类场所POI
支持搜索咖啡馆、餐厅、图书馆、健身房、KTV、电影院、商场等。
返回场所的名称、地址、评分、距离等信息。"""
parameters: dict = {
"type": "object",
"properties": {
"center_lng": {
"type": "number",
"description": "中心点经度"
},
"center_lat": {
"type": "number",
"description": "中心点纬度"
},
"keywords": {
"type": "string",
"description": "搜索关键词,如'咖啡馆''餐厅''图书馆'"
},
"radius": {
"type": "integer",
"description": "搜索半径默认3000米",
"default": 3000
}
},
"required": ["center_lng", "center_lat", "keywords"]
}
class Config:
arbitrary_types_allowed = True
def _get_recommender(self):
"""Lazily load the recommender and make sure the API key is set"""
if not hasattr(self, '_cached_recommender'):
from app.tool.meetspot_recommender import CafeRecommender
from app.config import config
recommender = CafeRecommender()
if hasattr(config, 'amap') and config.amap and hasattr(config.amap, 'api_key'):
recommender.api_key = config.amap.api_key
object.__setattr__(self, '_cached_recommender', recommender)
return self._cached_recommender
async def execute(
self,
center_lng: float,
center_lat: float,
keywords: str,
radius: int = 3000
) -> ToolResult:
"""Search POIs"""
try:
recommender = self._get_recommender()
center = f"{center_lng},{center_lat}"
places = await recommender._search_pois(
location=center,
keywords=keywords,
radius=radius,
types="",
offset=20
)
if not places:
return BaseTool.fail_response(
f"在 ({center_lng:.4f}, {center_lat:.4f}) 附近 {radius}米范围内"
f"未找到与 '{keywords}' 相关的场所"
)
# Simplify the returned data
simplified = []
for p in places[:15]:  # Return at most 15
biz_ext = p.get("biz_ext", {}) or {}
location = p.get("location", "")
lng, lat = location.split(",") if location else (0, 0)
# Compute the distance to the center
distance = recommender._calculate_distance(
(center_lng, center_lat),
(float(lng), float(lat))
) if location else 0
simplified.append({
"name": p.get("name", ""),
"address": p.get("address", ""),
"rating": biz_ext.get("rating", "N/A"),
"cost": biz_ext.get("cost", ""),
"location": location,
"lng": float(lng) if lng else None,
"lat": float(lat) if lat else None,
"distance": round(distance, 0),
"tel": p.get("tel", ""),
"tag": p.get("tag", ""),
"type": p.get("type", "")
})
# Sort by distance
simplified.sort(key=lambda x: x.get("distance", 9999))
return BaseTool.success_response({
"places": simplified,
"count": len(simplified),
"keywords": keywords,
"center": {"lng": center_lng, "lat": center_lat},
"radius": radius
})
except Exception as e:
logger.error(f"POI搜索失败: {e}")
return BaseTool.fail_response(f"POI搜索错误: {str(e)}")
class GenerateRecommendationTool(BaseTool):
"""Smart recommendation tool - uses an LLM to generate personalized recommendations.
Combines rule-based scoring with LLM scoring for more accurate results:
- Rule scoring: objective signals such as distance, rating, and popularity
- LLM scoring: understands the semantics of user requirements and scores venue fit
"""
name: str = "generate_recommendation"
description: str = """智能生成会面地点推荐。
本工具使用双层评分系统:
1. 规则评分40%):基于距离、评分、热度等客观指标
2. LLM 智能评分60%):理解用户需求,评估场所特色与需求的匹配度
最终生成个性化的推荐理由,帮助用户做出最佳选择。"""
parameters: dict = {
"type": "object",
"properties": {
"places": {
"type": "array",
"description": "候选场所列表来自search_poi的结果",
"items": {
"type": "object",
"properties": {
"name": {"type": "string", "description": "场所名称"},
"address": {"type": "string", "description": "地址"},
"rating": {"type": "string", "description": "评分"},
"distance": {"type": "number", "description": "距中心点距离"},
"location": {"type": "string", "description": "坐标"}
}
}
},
"center": {
"type": "object",
"description": "中心点坐标",
"properties": {
"lng": {"type": "number", "description": "经度"},
"lat": {"type": "number", "description": "纬度"}
},
"required": ["lng", "lat"]
},
"participant_locations": {
"type": "array",
"description": "参与者位置名称列表,用于 LLM 评估公平性",
"items": {"type": "string"},
"default": []
},
"keywords": {
"type": "string",
"description": "搜索的场所类型,如'咖啡馆''餐厅'",
"default": "咖啡馆"
},
"user_requirements": {
"type": "string",
"description": "用户的特殊需求,如'停车方便''环境安静'",
"default": ""
},
"recommendation_count": {
"type": "integer",
"description": "推荐数量默认5个",
"default": 5
},
"use_llm_ranking": {
"type": "boolean",
"description": "是否使用 LLM 智能排序,默认 true",
"default": True
}
},
"required": ["places", "center"]
}
class Config:
arbitrary_types_allowed = True
def _get_recommender(self):
"""Lazily load the recommender and make sure the API key is set"""
if not hasattr(self, '_cached_recommender'):
from app.tool.meetspot_recommender import CafeRecommender
from app.config import config
recommender = CafeRecommender()
if hasattr(config, 'amap') and config.amap and hasattr(config.amap, 'api_key'):
recommender.api_key = config.amap.api_key
object.__setattr__(self, '_cached_recommender', recommender)
return self._cached_recommender
async def execute(
self,
places: List[Dict],
center: Dict,
participant_locations: Optional[List[str]] = None,
keywords: str = "咖啡馆",
user_requirements: str = "",
recommendation_count: int = 5,
use_llm_ranking: bool = True
) -> ToolResult:
"""Generate recommendations intelligently"""
try:
if not places:
return BaseTool.fail_response("没有候选场所可供推荐")
recommender = self._get_recommender()
center_point = (center["lng"], center["lat"])
# 1. Initial ranking with rule-based scoring
ranked = recommender._rank_places(
places=places,
center_point=center_point,
user_requirements=user_requirements,
keywords=keywords
)
# 2. Re-rank with the LLM when smart ranking is enabled
if use_llm_ranking and participant_locations:
logger.info("启用 LLM 智能排序")
ranked = await recommender._llm_smart_ranking(
places=ranked,
user_requirements=user_requirements,
participant_locations=participant_locations or [],
keywords=keywords,
top_n=recommendation_count + 3  # Fetch a few extra for filtering
)
# Take the top N recommendations
top_places = ranked[:recommendation_count]
# Build the recommendation entries
recommendations = []
for i, place in enumerate(top_places, 1):
score = place.get("_final_score") or place.get("_score", 0)
distance = place.get("_distance") or place.get("distance", 0)
rating = place.get("_raw_rating") or place.get("rating", "N/A")
# Prefer the LLM-generated reason
llm_reason = place.get("_llm_reason", "")
rule_reason = place.get("_recommendation_reason", "")
if llm_reason:
reasons = [llm_reason]
elif rule_reason:
reasons = [rule_reason]
else:
# Fallback: build a basic recommendation reason
reasons = []
if distance <= 500:
reasons.append("距离中心点很近")
elif distance <= 1000:
reasons.append("距离适中")
if rating != "N/A":
try:
r = float(rating)
if r >= 4.5:
reasons.append("口碑优秀")
elif r >= 4.0:
reasons.append("评价良好")
except (ValueError, TypeError):
pass
if not reasons:
reasons = ["综合评分较高"]
recommendations.append({
"rank": i,
"name": place.get("name", ""),
"address": place.get("address", ""),
"rating": str(rating) if rating else "N/A",
"distance": round(distance, 0),
"score": round(score, 1),
"llm_score": place.get("_llm_score", 0),
"tel": place.get("tel", ""),
"reasons": reasons,
"location": place.get("location", ""),
"scoring_method": "llm+rule" if place.get("_llm_score") else "rule"
})
return BaseTool.success_response({
"recommendations": recommendations,
"total_candidates": len(places),
"user_requirements": user_requirements,
"center": center,
"llm_ranking_used": use_llm_ranking and bool(participant_locations)
})
except Exception as e:
logger.error(f"生成推荐失败: {e}")
return BaseTool.fail_response(f"生成推荐错误: {str(e)}")
# Export all tools
__all__ = [
"GeocodeTool",
"CalculateCenterTool",
"SearchPOITool",
"GenerateRecommendationTool"
]
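As a self-contained illustration of the geometric fallback in `CalculateCenterTool` and its per-participant distance report, the mean center and haversine distances can be sketched as follows (these helper names are hypothetical; the real tool delegates to `recommender._calculate_center_point` and `_calculate_distance`):

```python
import math
from typing import List, Tuple

def geometric_center(points: List[Tuple[float, float]]) -> Tuple[float, float]:
    # Arithmetic mean of (lng, lat) pairs; adequate at city scale.
    lng = sum(p[0] for p in points) / len(points)
    lat = sum(p[1] for p in points) / len(points)
    return lng, lat

def haversine_m(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    # Great-circle distance in meters between two (lng, lat) pairs.
    lng1, lat1, lng2, lat2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lng2 - lng1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

pts = [(116.40, 39.90), (116.46, 39.92)]  # two participants in Beijing
center = geometric_center(pts)
dists = [haversine_m(center, p) for p in pts]
```

For two points, the mean center is (near-)equidistant from both, which is the fairness property the tool's `max_distance`/`min_distance` spread measures.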


@@ -0,0 +1,2 @@
"""Authentication-related modules."""

MeetSpot/app/auth/jwt.py Normal file

@@ -0,0 +1,58 @@
"""JWT helpers and the shared current-user dependency."""
import os
from datetime import datetime, timedelta
from typing import Optional
from fastapi import Depends, HTTPException, status
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from jose import JWTError, jwt
from sqlalchemy.ext.asyncio import AsyncSession
from app.db.crud import get_user_by_id
from app.db.database import get_db
SECRET_KEY = os.getenv("JWT_SECRET_KEY", "meetspot-dev-secret")
ALGORITHM = "HS256"
ACCESS_TOKEN_EXPIRE_DAYS = int(os.getenv("ACCESS_TOKEN_EXPIRE_DAYS", "7"))
bearer_scheme = HTTPBearer(auto_error=False)
def create_access_token(data: dict) -> str:
"""Create a JWT with an expiry."""
to_encode = data.copy()
expire = datetime.utcnow() + timedelta(days=ACCESS_TOKEN_EXPIRE_DAYS)
to_encode.update({"exp": expire})
return jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)
def decode_token(token: str) -> Optional[dict]:
"""Decode and validate a JWT."""
try:
return jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
except JWTError:
return None
async def get_current_user(
credentials: HTTPAuthorizationCredentials = Depends(bearer_scheme),
db: AsyncSession = Depends(get_db),
):
"""FastAPI dependency: fetch the current user."""
if not credentials:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED, detail="缺少认证信息"
)
payload = decode_token(credentials.credentials)
if not payload or "sub" not in payload:
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="令牌无效")
user = await get_user_by_id(db, payload["sub"])
if not user:
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="用户不存在")
return user
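For intuition about what `jose`'s `jwt.encode`/`jwt.decode` do for the HS256 tokens above, here is a stdlib-only sketch. It is illustrative, not a replacement for the library: it skips `exp` checking and other registered-claim validation, and the helper names are this sketch's own:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # Base64url without padding, per RFC 7515.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def hs256_encode(payload: dict, key: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(key.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def hs256_decode(token: str, key: str):
    # Returns the payload dict, or None when the signature does not verify
    # (mirroring decode_token above returning None on JWTError).
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(key.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))

token = hs256_encode({"sub": "user-1"}, "dev-secret")
```

`hmac.compare_digest` is the constant-time comparison that makes signature checks resistant to timing attacks.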

MeetSpot/app/auth/sms.py Normal file

@@ -0,0 +1,24 @@
"""SMS verification codes (mock version)."""
from typing import Dict
MOCK_CODE = "123456"
_code_store: Dict[str, str] = {}
async def send_login_code(phone: str) -> str:
"""Mock code sender: always returns `123456`.
- Replace with a real SMS gateway call in production
- The last issued code is remembered per phone to ease later validation extensions
"""
_code_store[phone] = MOCK_CODE
return MOCK_CODE
def validate_code(phone: str, code: str) -> bool:
"""Validate the code; the MVP just matches the fixed mock value."""
return code == MOCK_CODE
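The mock above accepts one fixed code for every phone and ignores `_code_store` during validation. A per-phone store with expiry, sketched here as an assumed design for the eventual gateway swap-in (not code from this repo), could look like:

```python
import secrets
import time
from typing import Dict, Optional, Tuple

_codes: Dict[str, Tuple[str, float]] = {}  # phone -> (code, expires_at)
TTL_SECONDS = 300

def send_code(phone: str, now: Optional[float] = None) -> str:
    # Cryptographically random, zero-padded six-digit code.
    code = f"{secrets.randbelow(1_000_000):06d}"
    _codes[phone] = (code, (time.monotonic() if now is None else now) + TTL_SECONDS)
    return code

def validate(phone: str, code: str, now: Optional[float] = None) -> bool:
    stored = _codes.get(phone)
    if stored is None:
        return False
    value, expires_at = stored
    current = time.monotonic() if now is None else now
    return code == value and current <= expires_at

issued = send_code("13800001234", now=0.0)
```

The injectable `now` parameter exists only to make expiry testable without sleeping.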

MeetSpot/app/config.py Normal file

@@ -0,0 +1,315 @@
import threading
import tomllib
from pathlib import Path
from typing import Dict, List, Optional
from pydantic import BaseModel, Field
def get_project_root() -> Path:
"""Get the project root directory"""
return Path(__file__).resolve().parent.parent
PROJECT_ROOT = get_project_root()
WORKSPACE_ROOT = PROJECT_ROOT / "workspace"
class LLMSettings(BaseModel):
model: str = Field(..., description="Model name")
base_url: str = Field(..., description="API base URL")
api_key: str = Field(..., description="API key")
max_tokens: int = Field(4096, description="Maximum number of tokens per request")
max_input_tokens: Optional[int] = Field(
None,
description="Maximum input tokens to use across all requests (None for unlimited)",
)
temperature: float = Field(1.0, description="Sampling temperature")
api_type: str = Field(..., description="Azure, Openai, or Ollama")
api_version: str = Field(..., description="Azure Openai version if AzureOpenai")
class ProxySettings(BaseModel):
server: Optional[str] = Field(None, description="Proxy server address")
username: Optional[str] = Field(None, description="Proxy username")
password: Optional[str] = Field(None, description="Proxy password")
class SearchSettings(BaseModel):
engine: str = Field(default="Google", description="Search engine for the LLM to use")
class AMapSettings(BaseModel):
"""AMap (Gaode Maps) API configuration"""
api_key: str = Field(..., description="AMap API key")
web_api_key: Optional[str] = Field(None, description="AMap JavaScript API key")
security_js_code: Optional[str] = Field(None, description="AMap JS security code; the loader below passes this field")
class BrowserSettings(BaseModel):
headless: bool = Field(False, description="Whether to run browser in headless mode")
disable_security: bool = Field(
True, description="Disable browser security features"
)
extra_chromium_args: List[str] = Field(
default_factory=list, description="Extra arguments to pass to the browser"
)
chrome_instance_path: Optional[str] = Field(
None, description="Path to a Chrome instance to use"
)
wss_url: Optional[str] = Field(
None, description="Connect to a browser instance via WebSocket"
)
cdp_url: Optional[str] = Field(
None, description="Connect to a browser instance via CDP"
)
proxy: Optional[ProxySettings] = Field(
None, description="Proxy settings for the browser"
)
max_content_length: int = Field(
2000, description="Maximum length for content retrieval operations"
)
class SandboxSettings(BaseModel):
"""Configuration for the execution sandbox"""
use_sandbox: bool = Field(False, description="Whether to use the sandbox")
image: str = Field("python:3.12-slim", description="Base image")
work_dir: str = Field("/workspace", description="Container working directory")
memory_limit: str = Field("512m", description="Memory limit")
cpu_limit: float = Field(1.0, description="CPU limit")
timeout: int = Field(300, description="Default command timeout (seconds)")
network_enabled: bool = Field(
False, description="Whether network access is allowed"
)
class AppConfig(BaseModel):
llm: Dict[str, LLMSettings]
sandbox: Optional[SandboxSettings] = Field(
None, description="Sandbox configuration"
)
browser_config: Optional[BrowserSettings] = Field(
None, description="Browser configuration"
)
search_config: Optional[SearchSettings] = Field(
None, description="Search configuration"
)
amap: Optional[AMapSettings] = Field(
None, description="AMap API configuration"
)
class Config:
arbitrary_types_allowed = True
class Config:
_instance = None
_lock = threading.Lock()
_initialized = False
def __new__(cls):
if cls._instance is None:
with cls._lock:
if cls._instance is None:
cls._instance = super().__new__(cls)
return cls._instance
def __init__(self):
if not self._initialized:
with self._lock:
if not self._initialized:
self._config = None
self._load_initial_config()
self._initialized = True
@staticmethod
def _get_config_path() -> Path:
root = PROJECT_ROOT
config_path = root / "config" / "config.toml"
if config_path.exists():
return config_path
example_path = root / "config" / "config.toml.example"
if example_path.exists():
return example_path
# Neither file exists: return the default path so a default config is created later
return config_path
def _load_config(self) -> dict:
try:
config_path = self._get_config_path()
if not config_path.exists():
# Build a default config
default_config = {
"llm": {
"model": "gpt-3.5-turbo",
"api_key": "",
"base_url": "",
"max_tokens": 4096,
"temperature": 1.0,
"api_type": "",
"api_version": ""
},
"amap": {
"api_key": "",
"security_js_code": ""
},
"log": {
"level": "info",
"file": "logs/meetspot.log"
},
"server": {
"host": "0.0.0.0",
"port": 8000
}
}
return default_config
with config_path.open("rb") as f:
return tomllib.load(f)
except Exception as e:
# Fall back to defaults when loading fails
print(f"Failed to load config file, using defaults: {e}")
return {
"llm": {
"model": "gpt-3.5-turbo",
"api_key": "",
"base_url": "",
"max_tokens": 4096,
"temperature": 1.0,
"api_type": "",
"api_version": ""
}
}
def _load_initial_config(self):
raw_config = self._load_config()
base_llm = raw_config.get("llm", {})
# Read secrets from environment variables
import os
openai_api_key = os.getenv("OPENAI_API_KEY", "") or os.getenv("LLM_API_KEY", "")
amap_api_key = os.getenv("AMAP_API_KEY", "")
# Support env-var configuration for Render deployments
llm_base_url = os.getenv("LLM_API_BASE", "") or base_llm.get("base_url", "")
llm_model = os.getenv("LLM_MODEL", "") or base_llm.get("model", "gpt-3.5-turbo")
llm_overrides = {
k: v for k, v in raw_config.get("llm", {}).items() if isinstance(v, dict)
}
default_settings = {
"model": llm_model,  # env var takes precedence
"base_url": llm_base_url,  # env var takes precedence
"api_key": openai_api_key,  # read from the environment
"max_tokens": base_llm.get("max_tokens", 4096),
"max_input_tokens": base_llm.get("max_input_tokens"),
"temperature": base_llm.get("temperature", 1.0),
"api_type": base_llm.get("api_type", ""),
"api_version": base_llm.get("api_version", ""),
}
# handle browser config.
browser_config = raw_config.get("browser", {})
browser_settings = None
if browser_config:
# handle proxy settings.
proxy_config = browser_config.get("proxy", {})
proxy_settings = None
if proxy_config and proxy_config.get("server"):
proxy_settings = ProxySettings(
**{
k: v
for k, v in proxy_config.items()
if k in ["server", "username", "password"] and v
}
)
# filter valid browser config parameters.
valid_browser_params = {
k: v
for k, v in browser_config.items()
if k in BrowserSettings.__annotations__ and v is not None
}
# if there is proxy settings, add it to the parameters.
if proxy_settings:
valid_browser_params["proxy"] = proxy_settings
# only create BrowserSettings when there are valid parameters.
if valid_browser_params:
browser_settings = BrowserSettings(**valid_browser_params)
search_config = raw_config.get("search", {})
search_settings = None
if search_config:
search_settings = SearchSettings(**search_config)
sandbox_config = raw_config.get("sandbox", {})
if sandbox_config:
sandbox_settings = SandboxSettings(**sandbox_config)
else:
sandbox_settings = SandboxSettings()
# Handle the AMap API configuration
amap_config = raw_config.get("amap", {})
amap_settings = None
# Prefer AMAP_API_KEY from the environment
if amap_api_key:
amap_settings = AMapSettings(
api_key=amap_api_key,
security_js_code=os.getenv("AMAP_SECURITY_JS_CODE", amap_config.get("security_js_code", ""))
)
elif amap_config and amap_config.get("api_key"):
amap_settings = AMapSettings(**amap_config)
config_dict = {
"llm": {
"default": default_settings,
**{
name: {**default_settings, **override_config}
for name, override_config in llm_overrides.items()
},
},
"sandbox": sandbox_settings,
"browser_config": browser_settings,
"search_config": search_settings,
"amap": amap_settings,
}
self._config = AppConfig(**config_dict)
@property
def llm(self) -> Dict[str, LLMSettings]:
return self._config.llm
@property
def sandbox(self) -> SandboxSettings:
return self._config.sandbox
@property
def browser_config(self) -> Optional[BrowserSettings]:
return self._config.browser_config
@property
def search_config(self) -> Optional[SearchSettings]:
return self._config.search_config
@property
def amap(self) -> Optional[AMapSettings]:
"""Get the AMap API configuration"""
return self._config.amap
@property
def workspace_root(self) -> Path:
"""Get the workspace root directory"""
return WORKSPACE_ROOT
@property
def root_path(self) -> Path:
"""Get the root path of the application"""
return PROJECT_ROOT
config = Config()
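The `Config` singleton above uses double-checked locking in `__new__`. The pattern can be demonstrated in isolation (this `Singleton` class is a stripped-down stand-in, not the repo's class):

```python
import threading

class Singleton:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # Fast path: skip the lock once the instance exists. The second
        # check inside the lock stops two racing threads from both
        # constructing an instance.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance

instances = []
threads = [threading.Thread(target=lambda: instances.append(Singleton())) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every thread ends up holding the same object, which is why `config = Config()` at module scope is safe even under concurrent imports elsewhere.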


@@ -0,0 +1,120 @@
import os
import threading
import tomllib
from pathlib import Path
from typing import Optional
from pydantic import BaseModel, Field
def get_project_root() -> Path:
"""Get the project root directory"""
return Path(__file__).resolve().parent.parent
PROJECT_ROOT = get_project_root()
WORKSPACE_ROOT = PROJECT_ROOT / "workspace"
class AMapSettings(BaseModel):
"""AMap (Gaode Maps) API configuration"""
api_key: str = Field(..., description="AMap API key")
security_js_code: Optional[str] = Field(None, description="AMap JavaScript API security code")
class LogSettings(BaseModel):
"""Logging configuration"""
level: str = Field(default="INFO", description="Log level")
file_path: str = Field(default="logs/meetspot.log", description="Log file path")
class AppConfig(BaseModel):
"""Application configuration"""
amap: AMapSettings = Field(..., description="AMap API configuration")
log: Optional[LogSettings] = Field(default=LogSettings(), description="Logging configuration")
class Config:
arbitrary_types_allowed = True
class Config:
"""Configuration manager (singleton)"""
_instance = None
_lock = threading.Lock()
_initialized = False
def __new__(cls):
if cls._instance is None:
with cls._lock:
if cls._instance is None:
cls._instance = super().__new__(cls)
return cls._instance
def __init__(self):
if not self._initialized:
with self._lock:
if not self._initialized:
self._config = None
self._load_initial_config()
self._initialized = True
def _load_initial_config(self):
"""Load the initial configuration"""
try:
# Try environment variables first (Vercel deployments)
if os.getenv("AMAP_API_KEY"):
self._config = AppConfig(
amap=AMapSettings(
api_key=os.getenv("AMAP_API_KEY", ""),
security_js_code=os.getenv("AMAP_SECURITY_JS_CODE", "")
)
)
return
# Then try the config file (local development)
config_path = PROJECT_ROOT / "config" / "config.toml"
if config_path.exists():
with open(config_path, "rb") as f:
toml_data = tomllib.load(f)
amap_config = toml_data.get("amap", {})
if not amap_config.get("api_key"):
raise ValueError("高德地图API密钥未配置")
self._config = AppConfig(
amap=AMapSettings(**amap_config),
log=LogSettings(**toml_data.get("log", {}))
)
else:
raise FileNotFoundError(f"配置文件不存在: {config_path}")
except Exception as e:
# Provide a default config so startup does not fail
print(f"配置加载失败,使用默认配置: {e}")
self._config = AppConfig(
amap=AMapSettings(
api_key=os.getenv("AMAP_API_KEY", ""),
security_js_code=os.getenv("AMAP_SECURITY_JS_CODE", "")
)
)
def reload(self):
"""Reload the configuration"""
with self._lock:
self._initialized = False
self._load_initial_config()
self._initialized = True
@property
def amap(self) -> AMapSettings:
"""Get the AMap configuration"""
return self._config.amap
@property
def log(self) -> LogSettings:
"""Get the logging configuration"""
return self._config.log
# Global configuration instance
config = Config()


@@ -0,0 +1,2 @@
"""数据库相关模块初始化。"""

MeetSpot/app/db/crud.py Normal file

@@ -0,0 +1,50 @@
"""Common database operation helpers."""
from datetime import datetime
from typing import Optional
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession
from app.models.user import User
def _default_nickname(phone: str) -> str:
suffix = phone[-4:] if len(phone) >= 4 else phone
return f"用户{suffix}"
async def get_user_by_phone(db: AsyncSession, phone: str) -> Optional[User]:
"""Look up a user by phone number."""
stmt = select(User).where(User.phone == phone)
result = await db.execute(stmt)
return result.scalar_one_or_none()
async def get_user_by_id(db: AsyncSession, user_id: str) -> Optional[User]:
"""Look up a user by ID."""
stmt = select(User).where(User.id == user_id)
result = await db.execute(stmt)
return result.scalar_one_or_none()
async def create_user(
db: AsyncSession, phone: str, nickname: Optional[str] = None, avatar_url: str = ""
) -> User:
"""Create a new user."""
user = User(
phone=phone,
nickname=nickname or _default_nickname(phone),
avatar_url=avatar_url or "",
)
db.add(user)
await db.commit()
await db.refresh(user)
return user
async def touch_last_login(db: AsyncSession, user: User) -> None:
"""Update the user's last-login timestamp."""
user.last_login = datetime.utcnow()
await db.commit()
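The `scalar_one_or_none` lookup pattern in `get_user_by_phone` has a direct stdlib analogue. This `sqlite3` sketch (hypothetical helpers, not the repo's SQLAlchemy code) shows the one-row-or-None behavior and the default-nickname rule:

```python
import sqlite3
from typing import Optional, Tuple

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, phone TEXT UNIQUE, nickname TEXT)")

def default_nickname(phone: str) -> str:
    # Same rule as _default_nickname above: "用户" + last four digits.
    suffix = phone[-4:] if len(phone) >= 4 else phone
    return f"用户{suffix}"

def create_user(phone: str, nickname: Optional[str] = None) -> int:
    cur = conn.execute(
        "INSERT INTO users (phone, nickname) VALUES (?, ?)",
        (phone, nickname or default_nickname(phone)),
    )
    conn.commit()
    return cur.lastrowid

def get_user_by_phone(phone: str) -> Optional[Tuple[int, str, str]]:
    # fetchone() is the scalar_one_or_none analogue: one row, or None.
    return conn.execute(
        "SELECT id, phone, nickname FROM users WHERE phone = ?", (phone,)
    ).fetchone()

uid = create_user("13800001234")
row = get_user_by_phone("13800001234")
```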


@@ -0,0 +1,48 @@
"""Database engine and session management.
Uses SQLite as the default MVP storage, while keeping the option to switch to PostgreSQL via the `DATABASE_URL` environment variable.
"""
import os
from pathlib import Path
from typing import AsyncGenerator
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine
from sqlalchemy.orm import declarative_base
# By default, keep the SQLite database under the project's data/ directory
PROJECT_ROOT = Path(__file__).resolve().parent.parent.parent
DATA_DIR = PROJECT_ROOT / "data"
DATA_DIR.mkdir(exist_ok=True)
# Allow overriding the connection string via an environment variable
DEFAULT_SQLITE_PATH = DATA_DIR / "meetspot.db"
DATABASE_URL = os.getenv(
"DATABASE_URL", f"sqlite+aiosqlite:///{DEFAULT_SQLITE_PATH.as_posix()}"
)
# Create the async engine and session factory
engine = create_async_engine(DATABASE_URL, echo=False, future=True)
AsyncSessionLocal = async_sessionmaker(
bind=engine, class_=AsyncSession, expire_on_commit=False, autoflush=False
)
# Shared ORM base class
Base = declarative_base()
async def get_db() -> AsyncGenerator[AsyncSession, None]:
"""FastAPI dependency: provide a database session and ensure it is closed properly."""
async with AsyncSessionLocal() as session:
yield session
async def init_db() -> None:
"""Create database tables at startup."""
# Import lazily to avoid circular imports
from app import models  # noqa: F401  ensure all models are registered
async with engine.begin() as conn:
await conn.run_sync(Base.metadata.create_all)
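The async-generator dependency above relies on `async with` to close the session when the request ends. A minimal stand-in (the `FakeSession` class is hypothetical) demonstrates that cleanup runs once the dependency is fully consumed:

```python
import asyncio

class FakeSession:
    # Stand-in for AsyncSession: records whether it was closed.
    def __init__(self):
        self.closed = False
    async def __aenter__(self):
        return self
    async def __aexit__(self, exc_type, exc, tb):
        self.closed = True

async def get_db(session: FakeSession):
    # Same shape as the dependency above: the async-with guarantees cleanup.
    async with session as s:
        yield s

async def use_dependency() -> bool:
    session = FakeSession()
    async for s in get_db(session):
        assert s is session  # the handler sees the live session
    return session.closed

was_closed = asyncio.run(use_dependency())
```

FastAPI drives the same generator protocol for you: the code after `yield` (here, `__aexit__`) runs after the response is produced.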


@@ -0,0 +1,652 @@
"""
MeetSpot Design Tokens - single source of truth
Central definition of all color, spacing, and typography systems.
Changing this file affects:
1. Base template (templates/base.html)
2. Static HTML (public/*.html)
3. Dynamically generated pages (workspace/js_src/*.html)
WCAG 2.1 AA contrast requirements:
- Body text: >= 4.5:1
- Large text: >= 3.0:1
"""
from typing import Dict, Any
from functools import lru_cache
class DesignTokens:
"""Central manager for design tokens"""
# ============================================================================
# Global Brand Colors - MeetSpot journey theme
# Applied to elements shared across all pages (header, footer, primary buttons)
# Palette rationale: deep-sea blue (journey, exploration) + sunset orange (the warmth of meeting) + mint green (fairness, balance)
# ============================================================================
BRAND = {
"primary": "#0A4D68",  # Primary: deep-sea blue - steady, trustworthy (contrast 9.12:1 ✓)
"primary_dark": "#05445E",  # Dark deep-sea blue - hover state (contrast 11.83:1 ✓)
"primary_light": "#088395",  # Light sea blue - decorative elements (contrast 5.24:1 ✓)
"gradient": "linear-gradient(135deg, #05445E 0%, #0A4D68 50%, #088395 100%)",
# Accent: sunset orange - warm, energetic
"accent": "#FF6B35",  # Sunset orange - primary accent (contrast 3.55:1, large text only)
"accent_light": "#FF8C61",  # Light orange - secondary accent (contrast 2.87:1, decorative only)
# Secondary: mint green - fresh, balanced
"secondary": "#06D6A0",  # Mint green (contrast 2.28:1, decorative only)
# Functional colors - all WCAG AA
"success": "#0C8A5D",  # Success green - kept (4.51:1 ✓)
"info": "#2563EB",  # Info blue - kept (5.17:1 ✓)
"warning": "#CA7205",  # Warning orange - kept (4.50:1 ✓)
"error": "#DC2626",  # Error red - kept (4.83:1 ✓)
}
# ============================================================================
# Text Colors
# Per WCAG 2.1, every text color reaches >= 4.5:1 contrast on a white background
# ============================================================================
TEXT = {
"primary": "#111827",  # Primary text (gray-900, contrast 17.74:1 ✓)
"secondary": "#4B5563",  # Secondary text (gray-600, contrast 7.56:1 ✓)
"tertiary": "#6B7280",  # Tertiary text (gray-500, contrast 4.83:1 ✓)
"muted": "#6B7280",  # Muted text - corrected (was #9CA3AF at 2.54:1; now 4.83:1 via the tertiary color)
"disabled": "#9CA3AF",  # Disabled text - intentionally low contrast (decorative text may be < 3:1)
"inverse": "#FFFFFF",  # Inverse text (on dark backgrounds)
}
# ============================================================================
# Background Colors
# ============================================================================
BACKGROUND = {
"primary": "#FFFFFF",  # Primary background (white)
"secondary": "#F9FAFB",  # Secondary background (gray-50)
"tertiary": "#F3F4F6",  # Tertiary background (gray-100)
"elevated": "#FFFFFF",  # Card/floating-element background (with shadow)
"overlay": "rgba(0, 0, 0, 0.5)",  # Overlay
}
# ============================================================================
# Border Colors
# ============================================================================
BORDER = {
"default": "#E5E7EB",  # Default border (gray-200)
"medium": "#D1D5DB",  # Medium border (gray-300)
"strong": "#9CA3AF",  # Strong border (gray-400)
"focus": "#667EEA",  # Focus border (brand color)
}
# ============================================================================
# Shadow System
# ============================================================================
SHADOW = {
"sm": "0 1px 2px 0 rgba(0, 0, 0, 0.05)",
"md": "0 4px 6px -1px rgba(0, 0, 0, 0.1)",
"lg": "0 10px 15px -3px rgba(0, 0, 0, 0.1)",
"xl": "0 20px 25px -5px rgba(0, 0, 0, 0.1)",
"2xl": "0 25px 50px -12px rgba(0, 0, 0, 0.25)",
}
# ============================================================================
# Venue Theme System
# 14 preset themes, dynamically injected into generated recommendation pages
#
# Each theme contains:
# - theme_primary: main color (header background, primary buttons)
# - theme_primary_light: light variant (hover states, secondary elements)
# - theme_primary_dark: dark variant (active states, emphasized elements)
# - theme_secondary: supporting color (icons, decorative elements)
# - theme_light: light background (card backgrounds, section backgrounds)
# - theme_dark: dark text color (headings, key information)
#
# WCAG validation: every theme_primary reaches >= 3.0:1 on white (large text)
#                  every theme_dark reaches >= 4.5:1 on theme_light (body text)
# ============================================================================
VENUE_THEMES = {
"咖啡馆": {
"topic": "咖啡会",
"icon_header": "bxs-coffee-togo",
"icon_section": "bx-coffee",
"icon_card": "bxs-coffee-alt",
"map_legend": "咖啡馆",
"noun_singular": "咖啡馆",
"noun_plural": "咖啡馆",
"theme_primary": "#8B5A3C", # 修正后的棕色 (原#9c6644对比度不足)
"theme_primary_light": "#B8754A",
"theme_primary_dark": "#6D4530",
"theme_secondary": "#C9ADA7",
"theme_light": "#F2E9E4",
"theme_dark": "#1A1A2E", # 修正 (原#22223b对比度不足)
},
"图书馆": {
"topic": "知书达理会",
"icon_header": "bxs-book",
"icon_section": "bx-book",
"icon_card": "bxs-book-reader",
"map_legend": "图书馆",
"noun_singular": "图书馆",
"noun_plural": "图书馆",
"theme_primary": "#3A5A8A", # 修正后的蓝色 (原#4a6fa5对比度不足)
"theme_primary_light": "#5B7FB5",
"theme_primary_dark": "#2B4469",
"theme_secondary": "#9DC0E5",
"theme_light": "#F0F5FA",
"theme_dark": "#1F2937", # 修正
},
"餐厅": {
"topic": "美食汇",
"icon_header": "bxs-restaurant",
"icon_section": "bx-restaurant",
"icon_card": "bxs-restaurant",
"map_legend": "餐厅",
"noun_singular": "餐厅",
"noun_plural": "餐厅",
"theme_primary": "#C13B2A", # 修正后的红色 (原#e74c3c过亮)
"theme_primary_light": "#E15847",
"theme_primary_dark": "#9A2F22",
"theme_secondary": "#FADBD8",
"theme_light": "#FEF5E7",
"theme_dark": "#2C1618", # 修正
},
"商场": {
"topic": "乐购汇",
"icon_header": "bxs-shopping-bag",
"icon_section": "bx-shopping-bag",
"icon_card": "bxs-store-alt",
"map_legend": "商场",
"noun_singular": "商场",
"noun_plural": "商场",
"theme_primary": "#6D3588", # 修正后的紫色 (原#8e44ad过亮)
"theme_primary_light": "#8F57AC",
"theme_primary_dark": "#542969",
"theme_secondary": "#D7BDE2",
"theme_light": "#F4ECF7",
"theme_dark": "#2D1A33", # 修正
},
"公园": {
"topic": "悠然汇",
"icon_header": "bxs-tree",
"icon_section": "bx-leaf",
"icon_card": "bxs-florist",
"map_legend": "公园",
"noun_singular": "公园",
"noun_plural": "公园",
"theme_primary": "#1E8B4D", # 修正后的绿色 (原#27ae60过亮)
"theme_primary_light": "#48B573",
"theme_primary_dark": "#176A3A",
"theme_secondary": "#A9DFBF",
"theme_light": "#EAFAF1",
"theme_dark": "#1C3020", # 修正
},
"电影院": {
"topic": "光影汇",
"icon_header": "bxs-film",
"icon_section": "bx-film",
"icon_card": "bxs-movie-play",
"map_legend": "电影院",
"noun_singular": "电影院",
"noun_plural": "电影院",
"theme_primary": "#2C3E50", # 保持 (对比度合格)
"theme_primary_light": "#4D5D6E",
"theme_primary_dark": "#1F2D3D",
"theme_secondary": "#AEB6BF",
"theme_light": "#EBEDEF",
"theme_dark": "#0F1419", # 修正
},
"篮球场": {
"topic": "篮球部落",
"icon_header": "bxs-basketball",
"icon_section": "bx-basketball",
"icon_card": "bxs-basketball",
"map_legend": "篮球场",
"noun_singular": "篮球场",
"noun_plural": "篮球场",
"theme_primary": "#CA7F0E", # 二次修正 (原#D68910: 2.82:1 -> 3.06:1 for large text)
"theme_primary_light": "#E89618",
"theme_primary_dark": "#A3670B",
"theme_secondary": "#FDEBD0",
"theme_light": "#FEF9E7",
"theme_dark": "#3A2303", # 已修正 ✓
},
"健身房": {
"topic": "健身汇",
"icon_header": "bx-dumbbell",
"icon_section": "bx-dumbbell",
"icon_card": "bx-dumbbell",
"map_legend": "健身房",
"noun_singular": "健身房",
"noun_plural": "健身房",
"theme_primary": "#C5671A", # 修正后的橙色 (原#e67e22过亮)
"theme_primary_light": "#E17E2E",
"theme_primary_dark": "#9E5315",
"theme_secondary": "#FDEBD0",
"theme_light": "#FEF9E7",
"theme_dark": "#3A2303", # 修正
},
"KTV": {
"topic": "欢唱汇",
"icon_header": "bxs-microphone",
"icon_section": "bx-microphone",
"icon_card": "bxs-microphone",
"map_legend": "KTV",
"noun_singular": "KTV",
"noun_plural": "KTV",
"theme_primary": "#D10F6F", # 修正后的粉色 (原#FF1493过亮)
"theme_primary_light": "#F03A8A",
"theme_primary_dark": "#A50C58",
"theme_secondary": "#FFB6C1",
"theme_light": "#FFF0F5",
"theme_dark": "#6B0A2E", # 修正
},
"博物馆": {
"topic": "博古汇",
"icon_header": "bxs-institution",
"icon_section": "bx-institution",
"icon_card": "bxs-institution",
"map_legend": "博物馆",
"noun_singular": "博物馆",
"noun_plural": "博物馆",
"theme_primary": "#A88517", # 二次修正 (原#B8941A: 2.88:1 -> 3.21:1 for large text)
"theme_primary_light": "#C29E1D",
"theme_primary_dark": "#896B13",
"theme_secondary": "#F0E68C",
"theme_light": "#FFFACD",
"theme_dark": "#6B5535", # 已修正 ✓
},
"景点": {
"topic": "游览汇",
"icon_header": "bxs-landmark",
"icon_section": "bx-landmark",
"icon_card": "bxs-landmark",
"map_legend": "景点",
"noun_singular": "景点",
"noun_plural": "景点",
"theme_primary": "#138496", # 保持 (对比度合格)
"theme_primary_light": "#20A5BB",
"theme_primary_dark": "#0F6875",
"theme_secondary": "#7FDBDA",
"theme_light": "#E0F7FA",
"theme_dark": "#00504A", # 修正
},
"酒吧": {
"topic": "夜宴汇",
"icon_header": "bxs-drink",
"icon_section": "bx-drink",
"icon_card": "bxs-drink",
"map_legend": "酒吧",
"noun_singular": "酒吧",
"noun_plural": "酒吧",
"theme_primary": "#2C3E50", # 保持 (对比度合格)
"theme_primary_light": "#4D5D6E",
"theme_primary_dark": "#1B2631",
"theme_secondary": "#85929E",
"theme_light": "#EBF5FB",
"theme_dark": "#0C1014", # 修正
},
"茶楼": {
"topic": "茶韵汇",
"icon_header": "bxs-coffee-bean",
"icon_section": "bx-coffee-bean",
"icon_card": "bxs-coffee-bean",
"map_legend": "茶楼",
"noun_singular": "茶楼",
"noun_plural": "茶楼",
"theme_primary": "#406058", # 修正后的绿色 (原#52796F过亮)
"theme_primary_light": "#567A6F",
"theme_primary_dark": "#2F4841",
"theme_secondary": "#CAD2C5",
"theme_light": "#F7F9F7",
"theme_dark": "#1F2D28", # 修正
},
"游泳馆": { # 新增第14个主题
"topic": "泳池汇",
"icon_header": "bx-swim",
"icon_section": "bx-swim",
"icon_card": "bx-swim",
"map_legend": "游泳馆",
"noun_singular": "游泳馆",
"noun_plural": "游泳馆",
"theme_primary": "#1E90FF", # 水蓝色
"theme_primary_light": "#4DA6FF",
"theme_primary_dark": "#1873CC",
"theme_secondary": "#87CEEB",
"theme_light": "#E0F2FE",
"theme_dark": "#0C4A6E",
},
# Default theme (same palette as "咖啡馆")
"default": {
"topic": "推荐地点",
"icon_header": "bx-map-pin",
"icon_section": "bx-location-plus",
"icon_card": "bx-map-alt",
"map_legend": "推荐地点",
"noun_singular": "地点",
"noun_plural": "地点",
"theme_primary": "#8B5A3C",
"theme_primary_light": "#B8754A",
"theme_primary_dark": "#6D4530",
"theme_secondary": "#C9ADA7",
"theme_light": "#F2E9E4",
"theme_dark": "#1A1A2E",
},
}
# ============================================================================
# Spacing System
# Scale built on an 8px base unit
# ============================================================================
SPACING = {
"0": "0",
"1": "4px", # 0.25rem
"2": "8px", # 0.5rem
"3": "12px", # 0.75rem
"4": "16px", # 1rem
"5": "20px", # 1.25rem
"6": "24px", # 1.5rem
"8": "32px", # 2rem
"10": "40px", # 2.5rem
"12": "48px", # 3rem
"16": "64px", # 4rem
"20": "80px", # 5rem
}
# ============================================================================
# Border Radius System
# ============================================================================
RADIUS = {
"none": "0",
"sm": "4px",
"md": "8px",
"lg": "12px",
"xl": "16px",
"2xl": "24px",
"full": "9999px",
}
# ============================================================================
# Typography System - MeetSpot brand fonts
# Poppins (headings) - friendly and modern, more character than Inter
# Nunito (body) - rounded and readable, conveys warmth
# ============================================================================
FONT = {
"family_heading": '"Poppins", "PingFang SC", -apple-system, BlinkMacSystemFont, sans-serif',
"family_sans": '"Nunito", "Microsoft YaHei", -apple-system, BlinkMacSystemFont, sans-serif',
"family_mono": '"JetBrains Mono", "Fira Code", "SF Mono", "Consolas", "Monaco", monospace',
# Font sizes (16px base)
"size_xs": "0.75rem", # 12px
"size_sm": "0.875rem", # 14px
"size_base": "1rem", # 16px
"size_lg": "1.125rem", # 18px
"size_xl": "1.25rem", # 20px
"size_2xl": "1.5rem", # 24px
"size_3xl": "1.875rem", # 30px
"size_4xl": "2.25rem", # 36px
# Font weights
"weight_normal": "400",
"weight_medium": "500",
"weight_semibold": "600",
"weight_bold": "700",
# Line heights
"leading_tight": "1.25",
"leading_normal": "1.5",
"leading_relaxed": "1.7",
"leading_loose": "2",
}
# ============================================================================
# Z-Index Layering System
# ============================================================================
Z_INDEX = {
"dropdown": "1000",
"sticky": "1020",
"fixed": "1030",
"modal_backdrop": "1040",
"modal": "1050",
"popover": "1060",
"tooltip": "1070",
}
# ============================================================================
# Interaction Animations
# Follows WCAG 2.1 - supports prefers-reduced-motion
# ============================================================================
ANIMATIONS = """
/* ========== Interaction Animations ========== */
/* Buttons - 200ms ease-out transition */
button, .btn, input[type="submit"], a.button {
transition: all 0.2s ease-out;
}
button:hover, .btn:hover, input[type="submit"]:hover, a.button:hover {
transform: translateY(-2px);
box-shadow: var(--shadow-lg);
}
button:active, .btn:active, input[type="submit"]:active, a.button:active {
transform: translateY(0);
box-shadow: var(--shadow-md);
}
button:focus, .btn:focus, input[type="submit"]:focus, a.button:focus {
outline: 2px solid var(--brand-primary);
outline-offset: 2px;
}
/* Loading spinner */
.loading::after {
content: "";
width: 16px;
height: 16px;
margin-left: 8px;
border: 2px solid var(--brand-primary);
border-top-color: transparent;
border-radius: 50%;
display: inline-block;
animation: spin 0.6s linear infinite;
}
@keyframes spin {
to { transform: rotate(360deg); }
}
/* Card hover - subtle scale and shadow lift */
.card, .venue-card, .recommendation-card {
transition: transform 0.2s ease-out, box-shadow 0.2s ease-out;
}
.card:hover, .venue-card:hover, .recommendation-card:hover {
transform: scale(1.02);
box-shadow: var(--shadow-xl);
}
/* Fade-in - 400ms */
.fade-in {
animation: fadeIn 0.4s ease-out;
}
@keyframes fadeIn {
from {
opacity: 0;
transform: translateY(10px);
}
to {
opacity: 1;
transform: translateY(0);
}
}
/* Slide-in */
.slide-in {
animation: slideIn 0.4s ease-out;
}
@keyframes slideIn {
from {
opacity: 0;
transform: translateX(-20px);
}
to {
opacity: 1;
transform: translateX(0);
}
}
/* WCAG 2.1 accessibility - respect the user's motion preference */
@media (prefers-reduced-motion: reduce) {
*,
*::before,
*::after {
animation-duration: 0.01ms !important;
animation-iteration-count: 1 !important;
transition-duration: 0.01ms !important;
scroll-behavior: auto !important;
}
}
"""
# ============================================================================
# Helper methods
# ============================================================================
@classmethod
@lru_cache(maxsize=128)
def get_venue_theme(cls, venue_type: str) -> Dict[str, str]:
"""
Look up the theme configuration for a venue type.
Args:
venue_type: Venue type, e.g. "咖啡馆" or "图书馆"
Returns:
Dict of the theme's colors and icons
Example:
>>> theme = DesignTokens.get_venue_theme("咖啡馆")
>>> print(theme['theme_primary']) # "#8B5A3C"
"""
return cls.VENUE_THEMES.get(venue_type, cls.VENUE_THEMES["default"])
@classmethod
def to_css_variables(cls) -> str:
"""
Convert the design tokens into a CSS custom-property string.
Returns:
CSS variable definitions ready to embed in a <style> tag
Example:
>>> css = DesignTokens.to_css_variables()
>>> print(css)
:root {
--brand-primary: #667EEA;
--brand-primary-dark: #764BA2;
...
}
"""
lines = [":root {"]
# Brand colors
for key, value in cls.BRAND.items():
css_key = f"--brand-{key.replace('_', '-')}"
lines.append(f" {css_key}: {value};")
# Text colors
for key, value in cls.TEXT.items():
css_key = f"--text-{key.replace('_', '-')}"
lines.append(f" {css_key}: {value};")
# Background colors
for key, value in cls.BACKGROUND.items():
css_key = f"--bg-{key.replace('_', '-')}"
lines.append(f" {css_key}: {value};")
# Border colors
for key, value in cls.BORDER.items():
css_key = f"--border-{key.replace('_', '-')}"
lines.append(f" {css_key}: {value};")
# Shadows
for key, value in cls.SHADOW.items():
css_key = f"--shadow-{key.replace('_', '-')}"
lines.append(f" {css_key}: {value};")
# Spacing
for key, value in cls.SPACING.items():
css_key = f"--spacing-{key}"
lines.append(f" {css_key}: {value};")
# Radii
for key, value in cls.RADIUS.items():
css_key = f"--radius-{key.replace('_', '-')}"
lines.append(f" {css_key}: {value};")
# Typography
for key, value in cls.FONT.items():
css_key = f"--font-{key.replace('_', '-')}"
lines.append(f" {css_key}: {value};")
# Z-Index
for key, value in cls.Z_INDEX.items():
css_key = f"--z-{key.replace('_', '-')}"
lines.append(f" {css_key}: {value};")
lines.append("}")
return "\n".join(lines)
@classmethod
def generate_css_file(cls, output_path: str = "static/css/design-tokens.css"):
"""
Generate a standalone CSS design-token file.
Args:
output_path: Output file path
Example:
>>> DesignTokens.generate_css_file()
# writes static/css/design-tokens.css
"""
import os
os.makedirs(os.path.dirname(output_path), exist_ok=True)
with open(output_path, "w", encoding="utf-8") as f:
f.write("/* ============================================\n")
f.write(" * MeetSpot Design Tokens\n")
f.write(" * 自动生成 - 请勿手动编辑\n")
f.write(" * 生成源: app/design_tokens.py\n")
f.write(" * ==========================================*/\n\n")
f.write(cls.to_css_variables())
f.write("\n\n/* Compatibility fallbacks for older browsers */\n")
f.write(".no-cssvar {\n")
f.write(" /* Fallback for browsers without CSS variable support */\n")
f.write(f" color: {cls.TEXT['primary']};\n")
f.write(f" background-color: {cls.BACKGROUND['primary']};\n")
f.write("}\n\n")
# Append the interaction animation CSS
f.write(cls.ANIMATIONS)
# ============================================================================
# Global singleton access (for quick references)
# ============================================================================
COLORS = {
"brand": DesignTokens.BRAND,
"text": DesignTokens.TEXT,
"background": DesignTokens.BACKGROUND,
"border": DesignTokens.BORDER,
}
VENUE_THEMES = DesignTokens.VENUE_THEMES
# ============================================================================
# Convenience functions
# ============================================================================
def get_venue_theme(venue_type: str) -> Dict[str, str]:
"""便捷函数: 获取场所主题"""
return DesignTokens.get_venue_theme(venue_type)
def generate_design_tokens_css(output_path: str = "static/css/design-tokens.css"):
"""便捷函数: 生成CSS文件"""
DesignTokens.generate_css_file(output_path)
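For reference, the flattening rule `to_css_variables` applies (each token group becomes a set of `--<prefix>-<key>` custom properties, with underscores mapped to hyphens) can be sketched standalone; `tokens_to_css` is a hypothetical name, not part of this module:

```python
def tokens_to_css(groups: dict) -> str:
    # Every (group prefix, token) pair becomes a --prefix-key custom property;
    # underscores in token names become hyphens, mirroring
    # DesignTokens.to_css_variables().
    lines = [":root {"]
    for prefix, tokens in groups.items():
        for key, value in tokens.items():
            lines.append(f"  --{prefix}-{key.replace('_', '-')}: {value};")
    lines.append("}")
    return "\n".join(lines)

print(tokens_to_css({"text": {"primary": "#111827"}, "z": {"modal_backdrop": "1040"}}))
```

This is why `Z_INDEX["modal_backdrop"]` surfaces in generated pages as `var(--z-modal-backdrop)`.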


@@ -0,0 +1,13 @@
class ToolError(Exception):
"""Raised when a tool encounters an error."""
def __init__(self, message):
super().__init__(message)
self.message = message
class OpenManusError(Exception):
"""Base exception for all OpenManus errors"""
class TokenLimitExceeded(OpenManusError):
"""Exception raised when the token limit is exceeded"""

800
MeetSpot/app/llm.py Normal file

@@ -0,0 +1,800 @@
import math
from typing import Dict, List, Optional, Union
import tiktoken
from openai import (APIError, AsyncAzureOpenAI, AsyncOpenAI,
AuthenticationError, OpenAIError, RateLimitError)
from openai.types.chat import ChatCompletion
from openai.types.chat.chat_completion_message import ChatCompletionMessage
from tenacity import (retry, retry_if_exception_type, stop_after_attempt,
wait_random_exponential)
from app.config import LLMSettings, config
from app.exceptions import TokenLimitExceeded
from app.logger import logger # Assuming a logger is set up in your app
from app.schema import (ROLE_VALUES, TOOL_CHOICE_TYPE, TOOL_CHOICE_VALUES,
Message, ToolChoice)
REASONING_MODELS = ["o1", "o3-mini"]
MULTIMODAL_MODELS = [
"gpt-4-vision-preview",
"gpt-4o",
"gpt-4o-mini",
"claude-3-opus-20240229",
"claude-3-sonnet-20240229",
"claude-3-haiku-20240307",
]
class TokenCounter:
# Token constants
BASE_MESSAGE_TOKENS = 4
FORMAT_TOKENS = 2
LOW_DETAIL_IMAGE_TOKENS = 85
HIGH_DETAIL_TILE_TOKENS = 170
# Image processing constants
MAX_SIZE = 2048
HIGH_DETAIL_TARGET_SHORT_SIDE = 768
TILE_SIZE = 512
def __init__(self, tokenizer):
self.tokenizer = tokenizer
def count_text(self, text: str) -> int:
"""Calculate tokens for a text string"""
return 0 if not text else len(self.tokenizer.encode(text))
def count_image(self, image_item: dict) -> int:
"""
Calculate tokens for an image based on detail level and dimensions
For "low" detail: fixed 85 tokens
For "high" detail:
1. Scale to fit in 2048x2048 square
2. Scale shortest side to 768px
3. Count 512px tiles (170 tokens each)
4. Add 85 tokens
"""
detail = image_item.get("detail", "medium")
# For low detail, always return fixed token count
if detail == "low":
return self.LOW_DETAIL_IMAGE_TOKENS
# For medium detail (default in OpenAI), use high detail calculation
# OpenAI doesn't specify a separate calculation for medium
# For high detail, calculate based on dimensions if available
if detail == "high" or detail == "medium":
# If dimensions are provided in the image_item
if "dimensions" in image_item:
width, height = image_item["dimensions"]
return self._calculate_high_detail_tokens(width, height)
# Default values when dimensions aren't available or detail level is unknown
if detail == "high":
# Default to a 1024x1024 image calculation for high detail
return self._calculate_high_detail_tokens(1024, 1024) # 765 tokens
elif detail == "medium":
# Default to a medium-sized image for medium detail
return 1024 # This matches the original default
else:
# For unknown detail levels, use medium as default
return 1024
def _calculate_high_detail_tokens(self, width: int, height: int) -> int:
"""Calculate tokens for high detail images based on dimensions"""
# Step 1: Scale to fit in MAX_SIZE x MAX_SIZE square
if width > self.MAX_SIZE or height > self.MAX_SIZE:
scale = self.MAX_SIZE / max(width, height)
width = int(width * scale)
height = int(height * scale)
# Step 2: Scale so shortest side is HIGH_DETAIL_TARGET_SHORT_SIDE
scale = self.HIGH_DETAIL_TARGET_SHORT_SIDE / min(width, height)
scaled_width = int(width * scale)
scaled_height = int(height * scale)
# Step 3: Count number of 512px tiles
tiles_x = math.ceil(scaled_width / self.TILE_SIZE)
tiles_y = math.ceil(scaled_height / self.TILE_SIZE)
total_tiles = tiles_x * tiles_y
# Step 4: Calculate final token count
return (
total_tiles * self.HIGH_DETAIL_TILE_TOKENS
) + self.LOW_DETAIL_IMAGE_TOKENS
def count_content(self, content: Union[str, List[Union[str, dict]]]) -> int:
"""Calculate tokens for message content"""
if not content:
return 0
if isinstance(content, str):
return self.count_text(content)
token_count = 0
for item in content:
if isinstance(item, str):
token_count += self.count_text(item)
elif isinstance(item, dict):
if "text" in item:
token_count += self.count_text(item["text"])
elif "image_url" in item:
token_count += self.count_image(item)
return token_count
def count_tool_calls(self, tool_calls: List[dict]) -> int:
"""Calculate tokens for tool calls"""
token_count = 0
for tool_call in tool_calls:
if "function" in tool_call:
function = tool_call["function"]
token_count += self.count_text(function.get("name", ""))
token_count += self.count_text(function.get("arguments", ""))
return token_count
def count_message_tokens(self, messages: List[dict]) -> int:
"""Calculate the total number of tokens in a message list"""
total_tokens = self.FORMAT_TOKENS # Base format tokens
for message in messages:
tokens = self.BASE_MESSAGE_TOKENS # Base tokens per message
# Add role tokens
tokens += self.count_text(message.get("role", ""))
# Add content tokens
if "content" in message:
tokens += self.count_content(message["content"])
# Add tool calls tokens
if "tool_calls" in message:
tokens += self.count_tool_calls(message["tool_calls"])
# Add name and tool_call_id tokens
tokens += self.count_text(message.get("name", ""))
tokens += self.count_text(message.get("tool_call_id", ""))
total_tokens += tokens
return total_tokens
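The tile arithmetic in `count_image` / `_calculate_high_detail_tokens` can be sanity-checked with a standalone sketch of the same four steps (constants mirror the class attributes above; the function name is illustrative):

```python
import math

# Constants mirroring TokenCounter's class attributes
MAX_SIZE = 2048
HIGH_DETAIL_TARGET_SHORT_SIDE = 768
TILE_SIZE = 512
HIGH_DETAIL_TILE_TOKENS = 170
LOW_DETAIL_IMAGE_TOKENS = 85

def high_detail_tokens(width: int, height: int) -> int:
    # Step 1: fit the image inside a 2048x2048 square
    if width > MAX_SIZE or height > MAX_SIZE:
        scale = MAX_SIZE / max(width, height)
        width, height = int(width * scale), int(height * scale)
    # Step 2: rescale so the shorter side is 768px
    scale = HIGH_DETAIL_TARGET_SHORT_SIDE / min(width, height)
    width, height = int(width * scale), int(height * scale)
    # Step 3: count 512px tiles at 170 tokens each
    tiles = math.ceil(width / TILE_SIZE) * math.ceil(height / TILE_SIZE)
    # Step 4: add the 85-token base cost
    return tiles * HIGH_DETAIL_TILE_TOKENS + LOW_DETAIL_IMAGE_TOKENS

print(high_detail_tokens(1024, 1024))  # 765: scales to 768x768 -> 4 tiles
print(high_detail_tokens(2048, 4096))  # 1105: scales to 768x1536 -> 6 tiles
```

The 1024x1024 case reproduces the "765 tokens" figure noted in `count_image`'s default branch.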
class LLM:
_instances: Dict[str, "LLM"] = {}
def __new__(
cls, config_name: str = "default", llm_config: Optional[LLMSettings] = None
):
if config_name not in cls._instances:
instance = super().__new__(cls)
instance.__init__(config_name, llm_config)
cls._instances[config_name] = instance
return cls._instances[config_name]
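A stripped-down sketch of the keyed-singleton pattern this `__new__` implements (the class name here is hypothetical):

```python
class KeyedSingleton:
    """One shared instance per configuration name."""

    _instances: dict = {}

    def __new__(cls, config_name: str = "default"):
        # Cache one instance per key; later calls with the same key
        # return the already-constructed object.
        if config_name not in cls._instances:
            cls._instances[config_name] = super().__new__(cls)
        return cls._instances[config_name]

a = KeyedSingleton("vision")
b = KeyedSingleton("vision")
c = KeyedSingleton()
print(a is b, a is c)  # True False
```

Note that the original `__new__` also calls `instance.__init__(...)` explicitly, and Python calls `__init__` again when `__new__` returns; the `hasattr(self, "client")` guard in `__init__` below is what prevents double initialization.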
def __init__(
self, config_name: str = "default", llm_config: Optional[LLMSettings] = None
):
if not hasattr(self, "client"): # Only initialize if not already initialized
llm_config = llm_config or config.llm
llm_config = llm_config.get(config_name, llm_config["default"])
self.model = llm_config.model
self.max_tokens = llm_config.max_tokens
self.temperature = llm_config.temperature
self.api_type = llm_config.api_type
self.api_key = llm_config.api_key
self.api_version = llm_config.api_version
self.base_url = llm_config.base_url
# Add token counting related attributes
self.total_input_tokens = 0
self.total_completion_tokens = 0
self.max_input_tokens = (
llm_config.max_input_tokens
if hasattr(llm_config, "max_input_tokens")
else None
)
# Initialize tokenizer
try:
self.tokenizer = tiktoken.encoding_for_model(self.model)
except KeyError:
# If the model is not in tiktoken's presets, use cl100k_base as default
self.tokenizer = tiktoken.get_encoding("cl100k_base")
if self.api_type == "azure":
self.client = AsyncAzureOpenAI(
base_url=self.base_url,
api_key=self.api_key,
api_version=self.api_version,
)
else:
self.client = AsyncOpenAI(api_key=self.api_key, base_url=self.base_url)
self.token_counter = TokenCounter(self.tokenizer)
def count_tokens(self, text: str) -> int:
"""Calculate the number of tokens in a text"""
if not text:
return 0
return len(self.tokenizer.encode(text))
def count_message_tokens(self, messages: List[dict]) -> int:
return self.token_counter.count_message_tokens(messages)
def update_token_count(self, input_tokens: int, completion_tokens: int = 0) -> None:
"""Update token counts"""
# Only track tokens if max_input_tokens is set
self.total_input_tokens += input_tokens
self.total_completion_tokens += completion_tokens
logger.info(
f"Token usage: Input={input_tokens}, Completion={completion_tokens}, "
f"Cumulative Input={self.total_input_tokens}, Cumulative Completion={self.total_completion_tokens}, "
f"Total={input_tokens + completion_tokens}, Cumulative Total={self.total_input_tokens + self.total_completion_tokens}"
)
def check_token_limit(self, input_tokens: int) -> bool:
"""Check if token limits are exceeded"""
if self.max_input_tokens is not None:
return (self.total_input_tokens + input_tokens) <= self.max_input_tokens
# If max_input_tokens is not set, always return True
return True
def get_limit_error_message(self, input_tokens: int) -> str:
"""Generate error message for token limit exceeded"""
if (
self.max_input_tokens is not None
and (self.total_input_tokens + input_tokens) > self.max_input_tokens
):
return f"Request may exceed input token limit (Current: {self.total_input_tokens}, Needed: {input_tokens}, Max: {self.max_input_tokens})"
return "Token limit exceeded"
@staticmethod
def format_messages(
messages: List[Union[dict, Message]], supports_images: bool = False
) -> List[dict]:
"""
Format messages for LLM by converting them to OpenAI message format.
Args:
messages: List of messages that can be either dict or Message objects
supports_images: Flag indicating if the target model supports image inputs
Returns:
List[dict]: List of formatted messages in OpenAI format
Raises:
ValueError: If messages are invalid or missing required fields
TypeError: If unsupported message types are provided
Examples:
>>> msgs = [
... Message.system_message("You are a helpful assistant"),
... {"role": "user", "content": "Hello"},
... Message.user_message("How are you?")
... ]
>>> formatted = LLM.format_messages(msgs)
"""
formatted_messages = []
for message in messages:
# Convert Message objects to dictionaries
if isinstance(message, Message):
message = message.to_dict()
if isinstance(message, dict):
# If message is a dict, ensure it has required fields
if "role" not in message:
raise ValueError("Message dict must contain 'role' field")
# Process base64 images if present and model supports images
if supports_images and message.get("base64_image"):
# Initialize or convert content to appropriate format
if not message.get("content"):
message["content"] = []
elif isinstance(message["content"], str):
message["content"] = [
{"type": "text", "text": message["content"]}
]
elif isinstance(message["content"], list):
# Convert string items to proper text objects
message["content"] = [
(
{"type": "text", "text": item}
if isinstance(item, str)
else item
)
for item in message["content"]
]
# Add the image to content
message["content"].append(
{
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{message['base64_image']}"
},
}
)
# Remove the base64_image field
del message["base64_image"]
# If model doesn't support images but message has base64_image, handle gracefully
elif not supports_images and message.get("base64_image"):
# Just remove the base64_image field and keep the text content
del message["base64_image"]
if "content" in message or "tool_calls" in message:
formatted_messages.append(message)
# else: do not include the message
else:
raise TypeError(f"Unsupported message type: {type(message)}")
# Validate all messages have required fields
for msg in formatted_messages:
if msg["role"] not in ROLE_VALUES:
raise ValueError(f"Invalid role: {msg['role']}")
return formatted_messages
@retry(
wait=wait_random_exponential(min=1, max=60),
stop=stop_after_attempt(6),
retry=retry_if_exception_type(
(OpenAIError, ValueError)
), # excludes TokenLimitExceeded so it is not retried
)
async def ask(
self,
messages: List[Union[dict, Message]],
system_msgs: Optional[List[Union[dict, Message]]] = None,
stream: bool = True,
temperature: Optional[float] = None,
) -> str:
"""
Send a prompt to the LLM and get the response.
Args:
messages: List of conversation messages
system_msgs: Optional system messages to prepend
stream (bool): Whether to stream the response
temperature (float): Sampling temperature for the response
Returns:
str: The generated response
Raises:
TokenLimitExceeded: If token limits are exceeded
ValueError: If messages are invalid or response is empty
OpenAIError: If API call fails after retries
Exception: For unexpected errors
"""
try:
# Check if the model supports images
supports_images = self.model in MULTIMODAL_MODELS
# Format system and user messages with image support check
if system_msgs:
system_msgs = self.format_messages(system_msgs, supports_images)
messages = system_msgs + self.format_messages(messages, supports_images)
else:
messages = self.format_messages(messages, supports_images)
# Calculate input token count
input_tokens = self.count_message_tokens(messages)
# Check if token limits are exceeded
if not self.check_token_limit(input_tokens):
error_message = self.get_limit_error_message(input_tokens)
# Raise a special exception that won't be retried
raise TokenLimitExceeded(error_message)
params = {
"model": self.model,
"messages": messages,
}
if self.model in REASONING_MODELS:
params["max_completion_tokens"] = self.max_tokens
else:
params["max_tokens"] = self.max_tokens
params["temperature"] = (
temperature if temperature is not None else self.temperature
)
if not stream:
# Non-streaming request
response = await self.client.chat.completions.create(
**params, stream=False
)
if not response.choices or not response.choices[0].message.content:
raise ValueError("Empty or invalid response from LLM")
# Update token counts
self.update_token_count(
response.usage.prompt_tokens, response.usage.completion_tokens
)
return response.choices[0].message.content
# Streaming request: update the estimated input token count before sending
self.update_token_count(input_tokens)
response = await self.client.chat.completions.create(**params, stream=True)
collected_messages = []
completion_text = ""
async for chunk in response:
chunk_message = chunk.choices[0].delta.content or ""
collected_messages.append(chunk_message)
completion_text += chunk_message
print(chunk_message, end="", flush=True)
print() # Newline after streaming
full_response = "".join(collected_messages).strip()
if not full_response:
raise ValueError("Empty response from streaming LLM")
# estimate completion tokens for streaming response
completion_tokens = self.count_tokens(completion_text)
logger.info(
f"Estimated completion tokens for streaming response: {completion_tokens}"
)
self.total_completion_tokens += completion_tokens
return full_response
except TokenLimitExceeded:
# Re-raise token limit errors without logging
raise
except ValueError:
logger.exception(f"Validation error")
raise
except OpenAIError as oe:
logger.exception(f"OpenAI API error")
if isinstance(oe, AuthenticationError):
logger.error("Authentication failed. Check API key.")
elif isinstance(oe, RateLimitError):
logger.error("Rate limit exceeded. Consider increasing retry attempts.")
elif isinstance(oe, APIError):
logger.error(f"API error: {oe}")
raise
except Exception:
logger.exception(f"Unexpected error in ask")
raise
@retry(
wait=wait_random_exponential(min=1, max=60),
stop=stop_after_attempt(6),
retry=retry_if_exception_type(
(OpenAIError, ValueError)
), # excludes TokenLimitExceeded so it is not retried
)
async def ask_with_images(
self,
messages: List[Union[dict, Message]],
images: List[Union[str, dict]],
system_msgs: Optional[List[Union[dict, Message]]] = None,
stream: bool = False,
temperature: Optional[float] = None,
) -> str:
"""
Send a prompt with images to the LLM and get the response.
Args:
messages: List of conversation messages
images: List of image URLs or image data dictionaries
system_msgs: Optional system messages to prepend
stream (bool): Whether to stream the response
temperature (float): Sampling temperature for the response
Returns:
str: The generated response
Raises:
TokenLimitExceeded: If token limits are exceeded
ValueError: If messages are invalid or response is empty
OpenAIError: If API call fails after retries
Exception: For unexpected errors
"""
try:
# For ask_with_images, we always set supports_images to True because
# this method should only be called with models that support images
if self.model not in MULTIMODAL_MODELS:
raise ValueError(
f"Model {self.model} does not support images. Use a model from {MULTIMODAL_MODELS}"
)
# Format messages with image support
formatted_messages = self.format_messages(messages, supports_images=True)
# Ensure the last message is from the user to attach images
if not formatted_messages or formatted_messages[-1]["role"] != "user":
raise ValueError(
"The last message must be from the user to attach images"
)
# Process the last user message to include images
last_message = formatted_messages[-1]
# Convert content to multimodal format if needed
content = last_message["content"]
multimodal_content = (
[{"type": "text", "text": content}]
if isinstance(content, str)
else content if isinstance(content, list) else []
)
# Add images to content
for image in images:
if isinstance(image, str):
multimodal_content.append(
{"type": "image_url", "image_url": {"url": image}}
)
elif isinstance(image, dict) and "url" in image:
multimodal_content.append({"type": "image_url", "image_url": image})
elif isinstance(image, dict) and "image_url" in image:
multimodal_content.append(image)
else:
raise ValueError(f"Unsupported image format: {image}")
# Update the message with multimodal content
last_message["content"] = multimodal_content
# Add system messages if provided
if system_msgs:
all_messages = (
self.format_messages(system_msgs, supports_images=True)
+ formatted_messages
)
else:
all_messages = formatted_messages
# Calculate tokens and check limits
input_tokens = self.count_message_tokens(all_messages)
if not self.check_token_limit(input_tokens):
raise TokenLimitExceeded(self.get_limit_error_message(input_tokens))
# Set up API parameters
params = {
"model": self.model,
"messages": all_messages,
"stream": stream,
}
# Add model-specific parameters
if self.model in REASONING_MODELS:
params["max_completion_tokens"] = self.max_tokens
else:
params["max_tokens"] = self.max_tokens
params["temperature"] = (
temperature if temperature is not None else self.temperature
)
# Handle non-streaming request
if not stream:
response = await self.client.chat.completions.create(**params)
if not response.choices or not response.choices[0].message.content:
raise ValueError("Empty or invalid response from LLM")
self.update_token_count(
response.usage.prompt_tokens, response.usage.completion_tokens
)
return response.choices[0].message.content
# Handle streaming request
response = await self.client.chat.completions.create(**params)
collected_messages = []
completion_text = ""
async for chunk in response:
chunk_message = chunk.choices[0].delta.content or ""
collected_messages.append(chunk_message)
completion_text += chunk_message
print(chunk_message, end="", flush=True)
print() # Newline after streaming
full_response = "".join(collected_messages).strip()
if not full_response:
raise ValueError("Empty response from streaming LLM")
completion_tokens = self.count_tokens(completion_text)
logger.info(
f"Estimated completion tokens for streaming response with images: {completion_tokens}"
)
self.update_token_count(input_tokens, completion_tokens)
return full_response
except TokenLimitExceeded:
raise
except ValueError as ve:
logger.error(f"Validation error in ask_with_images: {ve}")
raise
except OpenAIError as oe:
logger.error(f"OpenAI API error: {oe}")
if isinstance(oe, AuthenticationError):
logger.error("Authentication failed. Check API key.")
elif isinstance(oe, RateLimitError):
logger.error("Rate limit exceeded. Consider increasing retry attempts.")
elif isinstance(oe, APIError):
logger.error(f"API error: {oe}")
raise
except Exception as e:
logger.error(f"Unexpected error in ask_with_images: {e}")
raise
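The image-handling branch near the top of this method normalizes several accepted image shapes (bare URL string, `{"url": ...}` dict, or a complete content item) into OpenAI-style `image_url` entries. A standalone sketch of that normalization (plain Python, no API client; the function name is illustrative):

```python
def build_multimodal_content(text, images):
    """Normalize text plus a list of images into multimodal message content."""
    content = [{"type": "text", "text": text}]
    for image in images:
        if isinstance(image, str):
            # Bare URL (or data: URI) string
            content.append({"type": "image_url", "image_url": {"url": image}})
        elif isinstance(image, dict) and "url" in image:
            # Already an image_url payload, e.g. {"url": ..., "detail": "low"}
            content.append({"type": "image_url", "image_url": image})
        elif isinstance(image, dict) and "image_url" in image:
            # Already a complete content item
            content.append(image)
        else:
            raise ValueError(f"Unsupported image format: {image}")
    return content
```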
    @retry(
        wait=wait_random_exponential(min=1, max=60),
        stop=stop_after_attempt(6),
        retry=retry_if_exception_type(
            (OpenAIError, ValueError)
        ),  # TokenLimitExceeded is deliberately excluded so it is never retried
    )
async def ask_tool(
self,
messages: List[Union[dict, Message]],
system_msgs: Optional[List[Union[dict, Message]]] = None,
timeout: int = 300,
tools: Optional[List[dict]] = None,
tool_choice: TOOL_CHOICE_TYPE = ToolChoice.AUTO, # type: ignore
temperature: Optional[float] = None,
**kwargs,
) -> ChatCompletionMessage | None:
"""
Ask LLM using functions/tools and return the response.
Args:
messages: List of conversation messages
system_msgs: Optional system messages to prepend
timeout: Request timeout in seconds
tools: List of tools to use
tool_choice: Tool choice strategy
temperature: Sampling temperature for the response
**kwargs: Additional completion arguments
Returns:
ChatCompletionMessage: The model's response
Raises:
TokenLimitExceeded: If token limits are exceeded
ValueError: If tools, tool_choice, or messages are invalid
OpenAIError: If API call fails after retries
Exception: For unexpected errors
"""
try:
# Validate tool_choice
if tool_choice not in TOOL_CHOICE_VALUES:
raise ValueError(f"Invalid tool_choice: {tool_choice}")
# Check if the model supports images
supports_images = self.model in MULTIMODAL_MODELS
# Format messages
if system_msgs:
system_msgs = self.format_messages(system_msgs, supports_images)
formatted_messages = system_msgs + self.format_messages(messages, supports_images)
else:
formatted_messages = self.format_messages(messages, supports_images)
            # Validate the message sequence: every tool message must reference a
            # tool_call produced by an earlier assistant message
            valid_messages = []
            tool_calls_ids = set()  # Track all valid tool_call IDs seen so far
            for msg in formatted_messages:
                if isinstance(msg, dict):
                    role = msg.get("role")
                    tool_call_id = msg.get("tool_call_id")
                else:
                    role = getattr(msg, "role", None)
                    tool_call_id = getattr(msg, "tool_call_id", None)
                # For tool messages, verify the referenced tool_call_id exists
                if role == "tool" and tool_call_id:
                    if tool_call_id not in tool_calls_ids:
                        logger.warning(
                            f"Dropping invalid tool message: tool_call_id "
                            f"'{tool_call_id}' has no matching assistant tool_calls"
                        )
                        continue
                # For assistant messages with tool_calls, record every call ID
                elif role == "assistant":
                    tool_calls = []
                    if isinstance(msg, dict) and "tool_calls" in msg:
                        tool_calls = msg.get("tool_calls", [])
                    elif hasattr(msg, "tool_calls") and msg.tool_calls:
                        tool_calls = msg.tool_calls
                    for call in tool_calls:
                        if isinstance(call, dict) and "id" in call:
                            tool_calls_ids.add(call["id"])
                        elif hasattr(call, "id"):
                            tool_calls_ids.add(call.id)
                # Keep the message
                valid_messages.append(msg)
            # Replace the original sequence with the validated one
            formatted_messages = valid_messages
# Calculate input token count
input_tokens = self.count_message_tokens(formatted_messages)
# If there are tools, calculate token count for tool descriptions
tools_tokens = 0
if tools:
for tool in tools:
tools_tokens += self.count_tokens(str(tool))
input_tokens += tools_tokens
# Check if token limits are exceeded
if not self.check_token_limit(input_tokens):
error_message = self.get_limit_error_message(input_tokens)
# Raise a special exception that won't be retried
raise TokenLimitExceeded(error_message)
# Validate tools if provided
if tools:
for tool in tools:
if not isinstance(tool, dict) or "type" not in tool:
raise ValueError("Each tool must be a dict with 'type' field")
# Set up the completion request
params = {
"model": self.model,
"messages": formatted_messages,
"tools": tools,
"tool_choice": tool_choice,
"timeout": timeout,
**kwargs,
}
if self.model in REASONING_MODELS:
params["max_completion_tokens"] = self.max_tokens
else:
params["max_tokens"] = self.max_tokens
params["temperature"] = (
temperature if temperature is not None else self.temperature
)
response: ChatCompletion = await self.client.chat.completions.create(
**params, stream=False
)
            # Check if response is valid
            if not response.choices or not response.choices[0].message:
                logger.error(f"Invalid or empty response from LLM: {response}")
                return None
# Update token counts
self.update_token_count(
response.usage.prompt_tokens, response.usage.completion_tokens
)
return response.choices[0].message
except TokenLimitExceeded:
# Re-raise token limit errors without logging
raise
except ValueError as ve:
logger.error(f"Validation error in ask_tool: {ve}")
raise
except OpenAIError as oe:
logger.error(f"OpenAI API error: {oe}")
if isinstance(oe, AuthenticationError):
logger.error("Authentication failed. Check API key.")
elif isinstance(oe, RateLimitError):
logger.error("Rate limit exceeded. Consider increasing retry attempts.")
elif isinstance(oe, APIError):
logger.error(f"API error: {oe}")
raise
except Exception as e:
logger.error(f"Unexpected error in ask_tool: {e}")
raise
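The validation loop at the top of `ask_tool` can be illustrated in isolation. A minimal sketch over dict-shaped messages (the function name is illustrative; the real method also handles `Message` objects):

```python
def filter_orphan_tool_messages(messages):
    """Drop tool messages whose tool_call_id has no matching assistant tool_calls."""
    valid, seen_ids = [], set()
    for msg in messages:
        role = msg.get("role")
        if role == "assistant":
            # Record every tool_call ID this assistant message emits
            for call in msg.get("tool_calls", []) or []:
                if "id" in call:
                    seen_ids.add(call["id"])
        elif role == "tool" and msg.get("tool_call_id") not in seen_ids:
            continue  # Orphaned tool result; skip it
        valid.append(msg)
    return valid
```

This mirrors why the validation matters: the OpenAI API rejects a `tool` message that does not answer a previously issued `tool_call`.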

42
MeetSpot/app/logger.py Normal file

@@ -0,0 +1,42 @@
import sys
from datetime import datetime
from loguru import logger as _logger
from app.config import PROJECT_ROOT
_print_level = "INFO"
def define_log_level(print_level="INFO", logfile_level="DEBUG", name: str = None):
"""Adjust the log level to above level"""
global _print_level
_print_level = print_level
current_date = datetime.now()
formatted_date = current_date.strftime("%Y%m%d%H%M%S")
log_name = (
f"{name}_{formatted_date}" if name else formatted_date
) # name a log with prefix name
_logger.remove()
_logger.add(sys.stderr, level=print_level)
_logger.add(PROJECT_ROOT / f"logs/{log_name}.log", level=logfile_level)
return _logger
logger = define_log_level()
if __name__ == "__main__":
logger.info("Starting application")
logger.debug("Debug message")
logger.warning("Warning message")
logger.error("Error message")
logger.critical("Critical message")
try:
raise ValueError("Test error")
except Exception as e:
logger.exception(f"An error occurred: {e}")


@@ -0,0 +1,9 @@
"""Data model package."""
from app.db.database import Base # noqa: F401
# Import all models so they are registered on Base.metadata
from app.models.user import User # noqa: F401
from app.models.room import GatheringRoom, RoomParticipant # noqa: F401
from app.models.message import ChatMessage, VenueVote # noqa: F401


@@ -0,0 +1,53 @@
"""Chat message and venue vote models."""
import uuid
from datetime import datetime
from typing import Optional
from pydantic import BaseModel, Field
from sqlalchemy import Column, DateTime, ForeignKey, String, Text, UniqueConstraint, func
from app.db.database import Base
def _generate_uuid() -> str:
return str(uuid.uuid4())
class VenueVote(Base):
__tablename__ = "venue_votes"
__table_args__ = (UniqueConstraint("room_id", "venue_id", "user_id", name="uq_vote"),)
id = Column(String(36), primary_key=True, default=_generate_uuid)
room_id = Column(String(36), ForeignKey("gathering_rooms.id"), nullable=False)
venue_id = Column(String(100), nullable=False)
user_id = Column(String(36), ForeignKey("users.id"), nullable=False)
vote_type = Column(String(20), nullable=False)
created_at = Column(DateTime(timezone=True), server_default=func.now())
class ChatMessage(Base):
__tablename__ = "chat_messages"
id = Column(String(36), primary_key=True, default=_generate_uuid)
room_id = Column(String(36), ForeignKey("gathering_rooms.id"), nullable=False)
user_id = Column(String(36), ForeignKey("users.id"), nullable=False)
content = Column(Text, nullable=False)
created_at = Column(DateTime(timezone=True), server_default=func.now())
class ChatMessageCreate(BaseModel):
content: str = Field(..., min_length=1, description="聊天内容")
class VoteCreate(BaseModel):
venue_id: str
vote_type: str = Field(..., pattern="^(like|dislike)$")
class VoteRead(BaseModel):
venue_id: str
vote_type: str
user_id: str
created_at: Optional[datetime] = None
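`VoteCreate` restricts `vote_type` with the regex `^(like|dislike)$`, which Pydantic enforces automatically. The same check in plain stdlib form, for illustration only:

```python
import re

# Same pattern as VoteCreate's vote_type Field
VOTE_TYPE_PATTERN = re.compile(r"^(like|dislike)$")


def is_valid_vote_type(value: str) -> bool:
    """Return True if value is an allowed vote type."""
    return bool(VOTE_TYPE_PATTERN.match(value))
```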


@@ -0,0 +1,60 @@
"""ORM and Pydantic models for gathering rooms."""
import uuid
from datetime import datetime
from typing import Optional, Tuple
from pydantic import BaseModel, Field
from sqlalchemy import Column, DateTime, Float, ForeignKey, String, Text, UniqueConstraint, func
from app.db.database import Base
def _generate_uuid() -> str:
return str(uuid.uuid4())
class GatheringRoom(Base):
__tablename__ = "gathering_rooms"
id = Column(String(36), primary_key=True, default=_generate_uuid)
name = Column(String(100), nullable=False)
description = Column(Text, default="")
host_user_id = Column(String(36), ForeignKey("users.id"), nullable=False)
created_at = Column(DateTime(timezone=True), server_default=func.now())
gathering_time = Column(DateTime(timezone=True))
status = Column(String(20), default="pending")
venue_keywords = Column(String(100), default="咖啡馆")
final_venue_json = Column(Text, nullable=True)
class RoomParticipant(Base):
__tablename__ = "room_participants"
__table_args__ = (UniqueConstraint("room_id", "user_id", name="uq_room_user"),)
id = Column(String(36), primary_key=True, default=_generate_uuid)
room_id = Column(String(36), ForeignKey("gathering_rooms.id"), nullable=False)
user_id = Column(String(36), ForeignKey("users.id"), nullable=False)
location_name = Column(String(200))
location_lat = Column(Float)
location_lng = Column(Float)
joined_at = Column(DateTime(timezone=True), server_default=func.now())
role = Column(String(20), default="member")
class GatheringRoomCreate(BaseModel):
name: str = Field(..., description="聚会名称")
description: str = Field("", description="聚会描述")
gathering_time: Optional[datetime] = Field(
None, description="聚会时间ISO 字符串"
)
venue_keywords: str = Field("咖啡馆", description="场所类型关键词")
class RoomParticipantRead(BaseModel):
user_id: str
nickname: str
location_name: Optional[str] = None
location_coords: Optional[Tuple[float, float]] = None
role: str


@@ -0,0 +1,44 @@
"""User-related SQLAlchemy models and Pydantic schemas."""
import uuid
from datetime import datetime
from typing import Optional
from pydantic import BaseModel, Field
from sqlalchemy import Column, DateTime, String, func
from app.db.database import Base
def _generate_uuid() -> str:
return str(uuid.uuid4())
class User(Base):
__tablename__ = "users"
id = Column(String(36), primary_key=True, default=_generate_uuid)
phone = Column(String(20), unique=True, nullable=False)
nickname = Column(String(50), nullable=False)
avatar_url = Column(String(255), default="")
created_at = Column(DateTime(timezone=True), server_default=func.now())
last_login = Column(DateTime(timezone=True))
class UserCreate(BaseModel):
phone: str = Field(..., description="手机号")
nickname: Optional[str] = Field(None, description="昵称,可选")
avatar_url: Optional[str] = Field("", description="头像URL可选")
class UserRead(BaseModel):
id: str
phone: str
nickname: str
avatar_url: str = ""
created_at: datetime
last_login: Optional[datetime] = None
class Config:
from_attributes = True

214
MeetSpot/app/schema.py Normal file

@@ -0,0 +1,214 @@
from enum import Enum
from typing import Any, List, Literal, Optional, Union
from pydantic import BaseModel, Field
class Role(str, Enum):
"""Message role options"""
SYSTEM = "system"
USER = "user"
ASSISTANT = "assistant"
TOOL = "tool"
ROLE_VALUES = tuple(role.value for role in Role)
ROLE_TYPE = Literal[ROLE_VALUES] # type: ignore
class ToolChoice(str, Enum):
"""Tool choice options"""
NONE = "none"
AUTO = "auto"
REQUIRED = "required"
TOOL_CHOICE_VALUES = tuple(choice.value for choice in ToolChoice)
TOOL_CHOICE_TYPE = Literal[TOOL_CHOICE_VALUES] # type: ignore
class AgentState(str, Enum):
"""Agent execution states"""
IDLE = "IDLE"
RUNNING = "RUNNING"
FINISHED = "FINISHED"
ERROR = "ERROR"
class Function(BaseModel):
name: str
arguments: str
class ToolCall(BaseModel):
"""Represents a tool/function call in a message"""
id: str
type: str = "function"
function: Function
class Message(BaseModel):
"""Represents a chat message in the conversation"""
role: ROLE_TYPE = Field(...) # type: ignore
content: Optional[str] = Field(default=None)
tool_calls: Optional[List[ToolCall]] = Field(default=None)
name: Optional[str] = Field(default=None)
tool_call_id: Optional[str] = Field(default=None)
base64_image: Optional[str] = Field(default=None)
def __add__(self, other) -> List["Message"]:
        """Support Message + list and Message + Message operations."""
if isinstance(other, list):
return [self] + other
elif isinstance(other, Message):
return [self, other]
else:
raise TypeError(
f"unsupported operand type(s) for +: '{type(self).__name__}' and '{type(other).__name__}'"
)
def __radd__(self, other) -> List["Message"]:
        """Support list + Message operations."""
if isinstance(other, list):
return other + [self]
else:
raise TypeError(
f"unsupported operand type(s) for +: '{type(other).__name__}' and '{type(self).__name__}'"
)
def to_dict(self) -> dict:
"""Convert message to dictionary format"""
message = {"role": self.role}
if self.content is not None:
message["content"] = self.content
if self.tool_calls is not None and self.role == Role.ASSISTANT:
message["tool_calls"] = [
tool_call.dict() if hasattr(tool_call, "dict") else tool_call
for tool_call in self.tool_calls
]
if self.name is not None and self.role == Role.TOOL:
message["name"] = self.name
if self.tool_call_id is not None and self.role == Role.TOOL:
message["tool_call_id"] = self.tool_call_id
        # Do not include base64_image in the API payload; it is not part of the OpenAI message format
return message
@classmethod
def user_message(
cls, content: str, base64_image: Optional[str] = None
) -> "Message":
"""Create a user message"""
return cls(role=Role.USER, content=content, base64_image=base64_image)
@classmethod
def system_message(cls, content: str) -> "Message":
"""Create a system message"""
return cls(role=Role.SYSTEM, content=content)
@classmethod
def assistant_message(
cls, content: Optional[str] = None, base64_image: Optional[str] = None
) -> "Message":
"""Create an assistant message"""
return cls(role=Role.ASSISTANT, content=content, base64_image=base64_image)
@classmethod
def tool_message(
cls, content: str, name: str, tool_call_id: str, base64_image: Optional[str] = None
) -> "Message":
"""Create a tool message
Args:
content: The content/result of the tool execution
name: The name of the tool that was executed
tool_call_id: The ID of the tool call this message is responding to
base64_image: Optional base64 encoded image
"""
if not tool_call_id:
raise ValueError("tool_call_id is required for tool messages")
if not name:
raise ValueError("name is required for tool messages")
return cls(
role=Role.TOOL,
content=content,
name=name,
tool_call_id=tool_call_id,
base64_image=base64_image,
)
@classmethod
def from_tool_calls(
cls,
tool_calls: List[Any],
content: Union[str, List[str]] = "",
base64_image: Optional[str] = None,
**kwargs,
):
"""Create ToolCallsMessage from raw tool calls.
Args:
tool_calls: Raw tool calls from LLM
content: Optional message content
base64_image: Optional base64 encoded image
"""
        # Ensure tool_calls is a list of properly formatted objects
formatted_calls = []
for call in tool_calls:
if hasattr(call, "id") and hasattr(call, "function"):
func_data = call.function
if hasattr(func_data, "model_dump"):
func_dict = func_data.model_dump()
else:
func_dict = {"name": func_data.name, "arguments": func_data.arguments}
formatted_call = {
"id": call.id,
"type": "function",
"function": func_dict
}
formatted_calls.append(formatted_call)
else:
                # Already in dict format; use as-is
formatted_calls.append(call)
return cls(
role=Role.ASSISTANT,
content=content,
tool_calls=formatted_calls,
base64_image=base64_image,
**kwargs,
)
class Memory(BaseModel):
messages: List[Message] = Field(default_factory=list)
max_messages: int = Field(default=100)
def add_message(self, message: Message) -> None:
"""Add a message to memory"""
self.messages.append(message)
# Optional: Implement message limit
if len(self.messages) > self.max_messages:
self.messages = self.messages[-self.max_messages :]
def add_messages(self, messages: List[Message]) -> None:
"""Add multiple messages to memory"""
self.messages.extend(messages)
def clear(self) -> None:
"""Clear all messages"""
self.messages.clear()
def get_recent_messages(self, n: int) -> List[Message]:
"""Get n most recent messages"""
return self.messages[-n:]
def to_dict_list(self) -> List[dict]:
"""Convert messages to list of dicts"""
return [msg.to_dict() for msg in self.messages]
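The `__add__`/`__radd__` pair on `Message` lets callers write `message + history` or `history + message` and always get back a flat list. A minimal stand-in class (plain Python, no Pydantic; the name `Msg` is illustrative) showing the same protocol:

```python
class Msg:
    def __init__(self, content):
        self.content = content

    def __add__(self, other):
        # Msg + list and Msg + Msg both yield a flat list
        if isinstance(other, list):
            return [self] + other
        if isinstance(other, Msg):
            return [self, other]
        raise TypeError(f"cannot add Msg and {type(other).__name__}")

    def __radd__(self, other):
        # Called for list + Msg: list.__add__ returns NotImplemented for a Msg,
        # so Python falls back to the right operand's __radd__
        if isinstance(other, list):
            return other + [self]
        raise TypeError(f"cannot add {type(other).__name__} and Msg")
```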


@@ -0,0 +1,9 @@
from app.tool.base import BaseTool
from app.tool.meetspot_recommender import CafeRecommender
from app.tool.tool_collection import ToolCollection
__all__ = [
"BaseTool",
"CafeRecommender",
"ToolCollection",
]

101
MeetSpot/app/tool/base.py Normal file

@@ -0,0 +1,101 @@
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional
from pydantic import BaseModel, Field
class BaseTool(ABC, BaseModel):
name: str
description: str
parameters: Optional[dict] = None
class Config:
arbitrary_types_allowed = True
async def __call__(self, **kwargs) -> Any:
"""Execute the tool with given parameters."""
return await self.execute(**kwargs)
@abstractmethod
async def execute(self, **kwargs) -> Any:
"""Execute the tool with given parameters."""
def to_param(self) -> Dict:
"""Convert tool to function call format."""
return {
"type": "function",
"function": {
"name": self.name,
"description": self.description,
"parameters": self.parameters,
},
}
class ToolResult(BaseModel):
"""Represents the result of a tool execution."""
output: Any = Field(default=None)
error: Optional[str] = Field(default=None)
base64_image: Optional[str] = Field(default=None)
system: Optional[str] = Field(default=None)
class Config:
arbitrary_types_allowed = True
def __bool__(self):
return any(getattr(self, field) for field in self.__fields__)
def __add__(self, other: "ToolResult"):
def combine_fields(
field: Optional[str], other_field: Optional[str], concatenate: bool = True
):
if field and other_field:
if concatenate:
return field + other_field
raise ValueError("Cannot combine tool results")
return field or other_field
return ToolResult(
output=combine_fields(self.output, other.output),
error=combine_fields(self.error, other.error),
base64_image=combine_fields(self.base64_image, other.base64_image, False),
system=combine_fields(self.system, other.system),
)
def __str__(self):
return f"Error: {self.error}" if self.error else self.output
def replace(self, **kwargs):
"""Returns a new ToolResult with the given fields replaced."""
# return self.copy(update=kwargs)
return type(self)(**{**self.dict(), **kwargs})
class CLIResult(ToolResult):
"""A ToolResult that can be rendered as a CLI output."""
class ToolFailure(ToolResult):
"""A ToolResult that represents a failure."""
# Helper constructors, attached to BaseTool below
def _success_response(data) -> ToolResult:
    """Create a successful tool result."""
    import json
    if isinstance(data, str):
        text = data
    else:
        text = json.dumps(data, ensure_ascii=False, indent=2)
    return ToolResult(output=text)
def _fail_response(msg: str) -> ToolResult:
    """Create a failed tool result."""
    return ToolResult(error=msg)
# Attach the helpers to BaseTool
BaseTool.success_response = staticmethod(_success_response)
BaseTool.fail_response = staticmethod(_fail_response)
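`ToolResult.__add__` merges two results field by field: concatenable fields are joined, while fields such as `base64_image` refuse to combine when both are set. The inner rule in isolation (pulled out of the method for illustration):

```python
def combine_fields(field, other_field, concatenate=True):
    """Merge two optional field values, as ToolResult.__add__ does."""
    if field and other_field:
        if concatenate:
            return field + other_field
        # Two conflicting non-concatenable values cannot be merged
        raise ValueError("Cannot combine tool results")
    # At most one value is set; keep whichever is truthy
    return field or other_field
```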


@@ -0,0 +1,158 @@
"""File operation interfaces and implementations for local and sandbox environments."""
import asyncio
from pathlib import Path
from typing import Optional, Protocol, Tuple, Union, runtime_checkable
from app.config import SandboxSettings
from app.exceptions import ToolError
from app.sandbox.client import SANDBOX_CLIENT
PathLike = Union[str, Path]
@runtime_checkable
class FileOperator(Protocol):
"""Interface for file operations in different environments."""
async def read_file(self, path: PathLike) -> str:
"""Read content from a file."""
...
async def write_file(self, path: PathLike, content: str) -> None:
"""Write content to a file."""
...
async def is_directory(self, path: PathLike) -> bool:
"""Check if path points to a directory."""
...
async def exists(self, path: PathLike) -> bool:
"""Check if path exists."""
...
async def run_command(
self, cmd: str, timeout: Optional[float] = 120.0
) -> Tuple[int, str, str]:
"""Run a shell command and return (return_code, stdout, stderr)."""
...
class LocalFileOperator(FileOperator):
"""File operations implementation for local filesystem."""
encoding: str = "utf-8"
async def read_file(self, path: PathLike) -> str:
"""Read content from a local file."""
try:
return Path(path).read_text(encoding=self.encoding)
except Exception as e:
raise ToolError(f"Failed to read {path}: {str(e)}") from None
async def write_file(self, path: PathLike, content: str) -> None:
"""Write content to a local file."""
try:
Path(path).write_text(content, encoding=self.encoding)
except Exception as e:
raise ToolError(f"Failed to write to {path}: {str(e)}") from None
async def is_directory(self, path: PathLike) -> bool:
"""Check if path points to a directory."""
return Path(path).is_dir()
async def exists(self, path: PathLike) -> bool:
"""Check if path exists."""
return Path(path).exists()
async def run_command(
self, cmd: str, timeout: Optional[float] = 120.0
) -> Tuple[int, str, str]:
"""Run a shell command locally."""
process = await asyncio.create_subprocess_shell(
cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
try:
stdout, stderr = await asyncio.wait_for(
process.communicate(), timeout=timeout
)
return (
process.returncode or 0,
stdout.decode(),
stderr.decode(),
)
except asyncio.TimeoutError as exc:
try:
process.kill()
except ProcessLookupError:
pass
raise TimeoutError(
f"Command '{cmd}' timed out after {timeout} seconds"
) from exc
class SandboxFileOperator(FileOperator):
"""File operations implementation for sandbox environment."""
def __init__(self):
self.sandbox_client = SANDBOX_CLIENT
async def _ensure_sandbox_initialized(self):
"""Ensure sandbox is initialized."""
if not self.sandbox_client.sandbox:
await self.sandbox_client.create(config=SandboxSettings())
async def read_file(self, path: PathLike) -> str:
"""Read content from a file in sandbox."""
await self._ensure_sandbox_initialized()
try:
return await self.sandbox_client.read_file(str(path))
except Exception as e:
raise ToolError(f"Failed to read {path} in sandbox: {str(e)}") from None
async def write_file(self, path: PathLike, content: str) -> None:
"""Write content to a file in sandbox."""
await self._ensure_sandbox_initialized()
try:
await self.sandbox_client.write_file(str(path), content)
except Exception as e:
raise ToolError(f"Failed to write to {path} in sandbox: {str(e)}") from None
async def is_directory(self, path: PathLike) -> bool:
"""Check if path points to a directory in sandbox."""
await self._ensure_sandbox_initialized()
result = await self.sandbox_client.run_command(
f"test -d {path} && echo 'true' || echo 'false'"
)
return result.strip() == "true"
async def exists(self, path: PathLike) -> bool:
"""Check if path exists in sandbox."""
await self._ensure_sandbox_initialized()
result = await self.sandbox_client.run_command(
f"test -e {path} && echo 'true' || echo 'false'"
)
return result.strip() == "true"
async def run_command(
self, cmd: str, timeout: Optional[float] = 120.0
) -> Tuple[int, str, str]:
"""Run a command in sandbox environment."""
await self._ensure_sandbox_initialized()
try:
stdout = await self.sandbox_client.run_command(
cmd, timeout=int(timeout) if timeout else None
)
return (
0, # Always return 0 since we don't have explicit return code from sandbox
stdout,
"", # No stderr capture in the current sandbox implementation
)
except TimeoutError as exc:
raise TimeoutError(
f"Command '{cmd}' timed out after {timeout} seconds in sandbox"
) from exc
except Exception as exc:
return 1, "", f"Error executing command in sandbox: {str(exc)}"
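The `run_command` pattern used by `LocalFileOperator` (spawn a shell subprocess, bound it with `asyncio.wait_for`, kill on timeout) can be exercised standalone. A sketch assuming a POSIX shell is available:

```python
import asyncio


async def run_command(cmd, timeout=120.0):
    """Run a shell command and return (returncode, stdout, stderr)."""
    process = await asyncio.create_subprocess_shell(
        cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
    )
    try:
        stdout, stderr = await asyncio.wait_for(
            process.communicate(), timeout=timeout
        )
    except asyncio.TimeoutError as exc:
        try:
            process.kill()  # Best effort; the process may already have exited
        except ProcessLookupError:
            pass
        raise TimeoutError(f"Command {cmd!r} timed out after {timeout}s") from exc
    return process.returncode or 0, stdout.decode(), stderr.decode()
```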

File diff suppressed because it is too large


@@ -0,0 +1,64 @@
"""Collection classes for managing multiple tools."""
import json
from typing import Any, Dict, List, Union
from app.exceptions import ToolError
from app.tool.base import BaseTool, ToolFailure, ToolResult
class ToolCollection:
"""A collection of defined tools."""
def __init__(self, *tools: BaseTool):
self.tools = tools
self.tool_map = {tool.name: tool for tool in tools}
def __iter__(self):
return iter(self.tools)
def to_params(self) -> List[Dict[str, Any]]:
return [tool.to_param() for tool in self.tools]
async def execute(self, name: str, tool_input: Union[str, dict]) -> ToolResult:
"""Execute a tool by name with given input."""
tool = self.get_tool(name)
if not tool:
return ToolResult(error=f"Tool '{name}' not found")
        # Ensure tool_input is a dict
if isinstance(tool_input, str):
try:
tool_input = json.loads(tool_input)
except json.JSONDecodeError:
return ToolResult(error=f"Invalid tool input format: {tool_input}")
result = await tool(**tool_input)
return result
async def execute_all(self) -> List[ToolResult]:
"""Execute all tools in the collection sequentially."""
results = []
for tool in self.tools:
try:
result = await tool()
results.append(result)
except ToolError as e:
results.append(ToolFailure(error=e.message))
return results
def get_tool(self, name: str) -> BaseTool:
return self.tool_map.get(name)
def add_tool(self, tool: BaseTool):
self.tools += (tool,)
self.tool_map[tool.name] = tool
return self
def add_tools(self, *tools: BaseTool):
for tool in tools:
self.add_tool(tool)
return self
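`ToolCollection.execute` accepts tool arguments either as a dict or as a JSON string (the form in which LLMs emit function arguments). The normalization step on its own, as a sketch (the function name is illustrative; the real method returns a `ToolResult` error instead of raising):

```python
import json


def normalize_tool_input(tool_input):
    """Accept either a dict or a JSON string of tool arguments."""
    if isinstance(tool_input, str):
        try:
            return json.loads(tool_input)
        except json.JSONDecodeError:
            raise ValueError(f"Invalid tool input format: {tool_input}")
    return tool_input
```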


@@ -0,0 +1,101 @@
import asyncio
from typing import List
from tenacity import retry, stop_after_attempt, wait_exponential
from app.config import config
from app.tool.base import BaseTool
from app.tool.search import (
BaiduSearchEngine,
BingSearchEngine,
DuckDuckGoSearchEngine,
GoogleSearchEngine,
WebSearchEngine,
)
class WebSearch(BaseTool):
name: str = "web_search"
description: str = """Perform a web search and return a list of relevant links.
This function attempts to use the primary search engine API to get up-to-date results.
If an error occurs, it falls back to an alternative search engine."""
parameters: dict = {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "(required) The search query to submit to the search engine.",
},
"num_results": {
"type": "integer",
"description": "(optional) The number of search results to return. Default is 10.",
"default": 10,
},
},
"required": ["query"],
}
_search_engine: dict[str, WebSearchEngine] = {
"google": GoogleSearchEngine(),
"baidu": BaiduSearchEngine(),
"duckduckgo": DuckDuckGoSearchEngine(),
"bing": BingSearchEngine(),
}
async def execute(self, query: str, num_results: int = 10) -> List[str]:
"""
Execute a Web search and return a list of URLs.
Args:
query (str): The search query to submit to the search engine.
num_results (int, optional): The number of search results to return. Default is 10.
Returns:
List[str]: A list of URLs matching the search query.
"""
engine_order = self._get_engine_order()
for engine_name in engine_order:
engine = self._search_engine[engine_name]
try:
links = await self._perform_search_with_engine(
engine, query, num_results
)
if links:
return links
except Exception as e:
print(f"Search engine '{engine_name}' failed with error: {e}")
return []
def _get_engine_order(self) -> List[str]:
"""
Determines the order in which to try search engines.
Preferred engine is first (based on configuration), followed by the remaining engines.
Returns:
List[str]: Ordered list of search engine names.
"""
preferred = "google"
if config.search_config and config.search_config.engine:
preferred = config.search_config.engine.lower()
engine_order = []
if preferred in self._search_engine:
engine_order.append(preferred)
for key in self._search_engine:
if key not in engine_order:
engine_order.append(key)
return engine_order
@retry(
stop=stop_after_attempt(3),
wait=wait_exponential(multiplier=1, min=1, max=10),
)
async def _perform_search_with_engine(
self,
engine: WebSearchEngine,
query: str,
num_results: int,
) -> List[str]:
loop = asyncio.get_event_loop()
return await loop.run_in_executor(
None, lambda: list(engine.perform_search(query, num_results=num_results))
)
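`_get_engine_order` puts the configured engine first and then appends the remaining engines in registration order (dict insertion order in Python 3.7+). The same logic as a free function, for illustration:

```python
def get_engine_order(engines, preferred="google"):
    """Put the preferred engine first, then the rest in registration order."""
    order = [preferred] if preferred in engines else []
    order += [name for name in engines if name not in order]
    return order
```

An unknown preferred engine simply falls back to registration order, which is the same fallback behavior the tool relies on when the configured engine fails.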

2
MeetSpot/config/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
# prevent the local config file from being uploaded to the remote repository
config.toml


@@ -0,0 +1,22 @@
# MeetSpot example configuration
# Usage: copy this file to config.toml and fill in your API keys
# Amap (Gaode) Maps API configuration
[amap]
api_key = "YOUR_AMAP_API_KEY"  # Replace with your Amap Web Service API key
security_js_code = "YOUR_AMAP_SECURITY_JS_CODE"  # Replace with your Amap JS API security code
# Configure OpenAI or another LLM service here if needed
# [openai]
# api_key = "sk-YOUR_OPENAI_API_KEY"
# base_url = "YOUR_OPENAI_API_BASE_URL_IF_NEEDED"
# Logging configuration
[log]
level = "info"  # One of: debug, info, warning, error, critical
file = "logs/meetspot.log"
# Server configuration
[server]
host = "0.0.0.0"
port = 8000


@@ -0,0 +1,59 @@
{
    "_comment": "Address alias map: only short-name to full-name mappings are kept; POI search handles full names automatically",
"university_aliases": {
"北大": "北京大学",
"清华": "清华大学",
"人大": "中国人民大学",
"北师大": "北京师范大学",
"北理工": "北京理工大学",
"北航": "北京航空航天大学",
"中财": "中央财经大学",
"对外经贸": "对外经济贸易大学",
"央美": "中央美术学院",
"北影": "北京电影学院",
"中戏": "中央戏剧学院",
"中音": "中央音乐学院",
"上交": "上海交通大学",
"上海交大": "上海交通大学",
"复旦": "复旦大学",
"同济": "同济大学",
"华师大": "华东师范大学",
"上戏": "上海戏剧学院",
"上音": "上海音乐学院",
"中大": "中山大学",
"华南理工": "华南理工大学",
"华工": "华南理工大学",
"暨大": "暨南大学",
"浙大": "浙江大学",
"南大": "南京大学",
"东南": "东南大学",
"华科": "华中科技大学",
"华中师大": "华中师范大学",
"武大": "武汉大学",
"西交": "西安交通大学",
"西大": "西北大学",
"西工大": "西北工业大学",
"哈工大": "哈尔滨工业大学",
"大连理工": "大连理工大学",
"吉大": "吉林大学",
"中科大": "中国科学技术大学",
"天大": "天津大学",
"南开": "南开大学",
"厦大": "厦门大学",
"山大": "山东大学",
"川大": "四川大学",
"重大": "重庆大学",
"兰大": "兰州大学",
"电子科大": "电子科技大学",
"中南": "中南大学",
"湖大": "湖南大学",
"西南": "西南大学"
},
"landmark_aliases": {
"国贸": "国贸CBD",
"CBD": "国贸CBD",
"外滩": "上海外滩",
"陆家嘴": "上海陆家嘴",
"南京路": "南京路步行街"
}
}
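Resolving a short name against this table is a dict lookup with fallback to the input. A minimal sketch (tables abbreviated to a few entries from the file; the function name is illustrative):

```python
# Abbreviated excerpts of the alias tables above
UNIVERSITY_ALIASES = {
    "北大": "北京大学",
    "清华": "清华大学",
}
LANDMARK_ALIASES = {
    "国贸": "国贸CBD",
}


def expand_alias(name: str) -> str:
    """Expand a short place name to its full form, if a mapping exists."""
    return UNIVERSITY_ALIASES.get(name) or LANDMARK_ALIASES.get(name) or name
```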

1104
MeetSpot/data/cities.json Normal file

File diff suppressed because it is too large

BIN  MeetSpot/docs/AI客服.jpg (new file, 864 KiB)
BIN  MeetSpot/docs/Wechat.png (new file, 169 KiB)
BIN  MeetSpot/docs/homepage.jpg (new file, 546 KiB)
BIN  MeetSpot/docs/logo.jpg (new file, 1013 KiB)
BIN  MeetSpot/docs/show1.jpg (new file, 864 KiB)
BIN  MeetSpot/docs/show2.jpg (new file, 622 KiB)
BIN  MeetSpot/docs/show3.jpg (new file, 514 KiB)
BIN  MeetSpot/docs/show4.jpg (new file, 455 KiB)
BIN  15 further binary images (new files, 129 KiB to 1.0 MiB each; filenames suppressed in this view)

@@ -0,0 +1,86 @@
name: meetspot-dev
channels:
  - conda-forge
  - defaults
dependencies:
  # Python version
  - python=3.11
  # Core web framework
  - fastapi=0.116.1
  - uvicorn=0.35.0
  - pydantic=2.11.7
  - pydantic-core=2.33.2
  # HTTP clients
  - httpx=0.28.1
  - aiohttp=3.12.15
  - aiofiles=24.1.0
  # Template engine
  - jinja2=3.1.6
  # Form parsing
  - python-multipart=0.0.20
  # Logging
  - loguru=0.7.3
  # Config parsing
  - tomli=2.1.0
  # Date handling
  - python-dateutil=2.9.0
  # Testing
  - pytest>=7.4.0
  - pytest-cov>=4.1.0
  - pytest-asyncio>=0.21.0
  - pytest-mock>=3.11.0
  # Code quality
  - black>=23.7.0      # code formatter
  - ruff>=0.1.0        # fast linter
  - mypy>=1.5.0        # type checker
  - isort>=5.12.0      # import sorting
  # Dev tools
  - ipython>=8.14.0    # enhanced REPL
  - ipdb>=0.13.13      # debugger
  - pre-commit>=3.3.0  # Git hooks
  # Documentation
  - sphinx>=7.1.0
  - sphinx-rtd-theme>=1.3.0
  # Web tooling
  - beautifulsoup4>=4.12.0  # HTML parsing (SEO checks)
  - requests>=2.31.0        # HTTP request testing
  # Profiling
  - py-spy>=0.3.14           # sampling profiler
  - memory_profiler>=0.61.0  # memory profiling
  # pip-only dependencies (not yet on conda-forge)
  - pip
  - pip:
      - jieba==0.42.1
      - whitenoise==6.6.0
      - slowapi==0.1.9
      - markdown2==2.4.12
      # dev-only
      - bandit>=1.7.5    # security checks
      - vulture>=2.9.1   # dead-code detection
      # note: lighthouse-ci is installed via npm, not as a Python package
  # System tools
  - git
  - make
  - nodejs>=18.0.0  # required by Lighthouse
# Environment variables
variables:
  PYTHONPATH: "${CONDA_PREFIX}/lib/python3.11/site-packages"
  AMAP_API_KEY: ""
  ENVIRONMENT: "development"

51
MeetSpot/environment.yml Normal file

@@ -0,0 +1,51 @@
name: meetspot
channels:
  - conda-forge
  - defaults
dependencies:
  # Python version
  - python=3.11
  # Core web framework
  - fastapi=0.116.1
  - uvicorn=0.35.0
  - pydantic=2.11.7
  - pydantic-core=2.33.2
  # HTTP clients
  - httpx=0.28.1
  - aiohttp=3.12.15
  - aiofiles=24.1.0
  # Template engine
  - jinja2=3.1.6
  # Form parsing
  - python-multipart=0.0.20
  # Logging
  - loguru=0.7.3
  # Config parsing
  - tomli=2.1.0
  # Date handling
  - python-dateutil=2.9.0
  # SEO-related pip dependencies
  - pip
  - pip:
      - jieba==0.42.1       # Chinese word segmentation (not on conda-forge)
      - whitenoise==6.6.0   # static file serving
      - slowapi==0.1.9      # API rate limiting
      - markdown2==2.4.12   # Markdown parsing
  # System tools (optional; nicer dev experience)
  - git
  - make
# Environment variables (optional)
variables:
  PYTHONPATH: "${CONDA_PREFIX}/lib/python3.11/site-packages"
  AMAP_API_KEY: ""  # set in .env or on conda activate


@@ -0,0 +1,43 @@
{
  "ci": {
    "collect": {
      "url": [
        "http://localhost:8000/",
        "http://localhost:8000/about",
        "http://localhost:8000/faq"
      ],
      "numberOfRuns": 3
    },
    "assert": {
      "preset": "lighthouse:recommended",
      "assertions": {
        "categories:performance": ["error", {"minScore": 0.8}],
        "categories:accessibility": ["error", {"minScore": 0.85}],
        "categories:best-practices": ["error", {"minScore": 0.8}],
        "categories:seo": ["error", {"minScore": 0.92}],
        "color-contrast": "off",
        "tap-targets": "off",
        "link-name": "off",
        "heading-order": "off",
        "errors-in-console": "warn",
        "font-display": "off",
        "csp-xss": "off",
        "installable-manifest": "off",
        "maskable-icon": "off",
        "service-worker": "off",
        "splash-screen": "off",
        "themed-omnibox": "off",
        "unminified-css": "warn",
        "unused-css-rules": "warn",
        "uses-text-compression": "warn",
        "render-blocking-resources": "warn"
      }
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}

13
MeetSpot/package.json Normal file

@@ -0,0 +1,13 @@
{
  "name": "meetspot",
  "version": "1.0.0",
  "description": "MeetSpot smart meeting-point recommendation system",
  "main": "web_server.py",
  "scripts": {
    "dev": "python web_server.py",
    "start": "python web_server.py"
  },
  "engines": {
    "python": "3.11"
  }
}


@@ -0,0 +1,42 @@
id: PM-2025-001
created_at: '2026-01-13T05:56:50.491275Z'
source_commit: bb5c8b0
severity: high
title: Cap cache sizes to prevent out-of-memory on the free tier
description: Unbounded caches grew until they exhausted memory on the free tier, tripping Render's 512 MB limit and destabilizing the service.
root_cause: The geocode and POI caches had no size limit, so they grew without bound and eventually exhausted memory.
triggers:
  files:
    - app/tool/*.py
  functions:
    - CafeRecommender.geocode_cache
    - CafeRecommender.poi_cache
  patterns:
    - geocode_cache
    - poi_cache
    - Field\(default_factory=dict\)
  keywords:
    - 缓存
    - cache
    - memory
    - OOM
    - 内存溢出
fix_pattern:
  approach: Add size limits to the caches and evict the oldest entries FIFO when a limit is exceeded.
  key_changes:
    - Add GEOCODE_CACHE_MAX and POI_CACHE_MAX constants capping the caches at 100 and 50 entries.
    - Add FIFO eviction to the geocode_cache and poi_cache write paths.
verification:
  - Verify geocode_cache and poi_cache stay within their size limits.
  - Check that FIFO eviction executes correctly.
  - Run the service in a memory-constrained environment and confirm no OOM errors.
  - Test that the cache hit rate meets expectations, avoiding excess API calls.
related:
  files_changed:
    - app/tool/meetspot_recommender.py
tags:
  - geocoding
  - memory
  - cache
  - performance
  - api
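The FIFO eviction described above can be sketched as follows. The constants match the postmortem; the `cache_put` helper and the fake entries are illustrative, not the actual code in `app/tool/meetspot_recommender.py`:

```python
# Bounded dict caches with FIFO eviction of the oldest entry.
GEOCODE_CACHE_MAX = 100
POI_CACHE_MAX = 50

def cache_put(cache: dict, key, value, max_size: int) -> None:
    """Insert key -> value, evicting the oldest entry once max_size is hit."""
    if key not in cache and len(cache) >= max_size:
        # dicts preserve insertion order (Python 3.7+),
        # so the first key is always the oldest.
        oldest = next(iter(cache))
        del cache[oldest]
    cache[key] = value

geocode_cache: dict = {}
for i in range(150):
    cache_put(geocode_cache, f"addr-{i}", (116.0 + i, 39.9), GEOCODE_CACHE_MAX)
# geocode_cache now holds the 100 newest entries; the oldest 50 were evicted.
```

FIFO is a deliberate simplification here: it needs no access bookkeeping, which matters when the goal is a hard memory cap rather than an optimal hit rate.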


@@ -0,0 +1,35 @@
id: PM-2025-002
created_at: '2026-01-13T05:56:55.380581Z'
source_commit: 4d12fbe
severity: medium
title: Widen recommender search radius to 50 km to fix empty results
description: When the fallback POI search strategy returned no results, the search radius is now widened to 50 km (the API maximum) to fetch more candidates. This change broadens the recommender's search coverage.
root_cause: The original radius limit sometimes left the recommender without enough results.
triggers:
  files:
    - app/tool/*.py
  functions:
    - _search_pois
  patterns:
    - radius=10000
    - radius=50000
  keywords:
    - fallback
    - 搜索半径
    - API最大
fix_pattern:
  approach: Widen the search radius from 10 km to 50 km to fetch more results.
  key_changes:
    - radius=10000
    - radius=50000
verification:
  - Verify the recommender returns more results with the wider radius.
  - Confirm API calls behave correctly at the 50 km radius.
  - Check that log messages reflect the radius change.
related:
  files_changed:
    - app/tool/meetspot_recommender.py
tags:
  - recommender
  - api
  - search radius
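The retry-at-maximum-radius fallback can be sketched like this. The function and the fake search stand-in are hypothetical; the real logic lives in `_search_pois` and calls the Amap API:

```python
# Retry the POI search at the API's maximum radius (50 km)
# when the default radius returns nothing.
DEFAULT_RADIUS_M = 10_000
MAX_RADIUS_M = 50_000  # documented Amap API maximum

def search_with_fallback(search_fn, keyword: str) -> list:
    pois = search_fn(keyword, radius=DEFAULT_RADIUS_M)
    if not pois:
        # Widen to the API maximum rather than returning an empty page.
        pois = search_fn(keyword, radius=MAX_RADIUS_M)
    return pois

# Fake search function standing in for the real Amap call:
def fake_search(keyword, radius):
    return ["远郊咖啡馆"] if radius >= 50_000 else []

result = search_with_fallback(fake_search, "咖啡馆")
```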


@@ -0,0 +1,36 @@
id: PM-2025-003
created_at: '2026-01-13T05:56:59.697193Z'
source_commit: 3ba4b1d
severity: high
title: Fix cross-city geocoding ambiguity to improve demo reliability
description: Some landmark names geocoded ambiguously across cities, hurting accuracy and demo reliability. Fixed by adding a landmark-to-city mapping.
root_cause: Common landmarks lacked an explicit city mapping, so geocoding results were inaccurate.
triggers:
  files:
    - app/tool/*.py
  functions:
    - _get_address
  patterns:
    - landmark_mapping\s*=\s*\{
  keywords:
    - geocoding
    - landmark
    - ambiguity
    - 城市歧义
fix_pattern:
  approach: Add 45+ landmark mappings so landmark names resolve to the correct city.
  key_changes:
    - Add a landmark_mapping dictionary
    - Apply the mapping in _get_address
verification:
  - Verify landmark names map to the correct city
  - Check the new mappings cover common landmarks in major cities
  - Confirm geocoding results hold up well in demos
related:
  files_changed:
    - app/tool/meetspot_recommender.py
tags:
  - geocoding
  - data-mapping
  - 城市歧义
  - demo-reliability
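A minimal sketch of the mapping idea, assuming the fix qualifies a bare landmark with its city before geocoding. The entries below are examples only, not the full 45+ mapping, and `disambiguate` is a hypothetical helper:

```python
# Prefix well-known landmarks with their city so the geocoder
# cannot resolve them to a same-named place in another city.
landmark_mapping = {
    "外滩": "上海",
    "陆家嘴": "上海",
    "国贸": "北京",
}

def disambiguate(address: str) -> str:
    """Return a city-qualified address for known landmarks, else unchanged."""
    for landmark, city in landmark_mapping.items():
        if address.startswith(landmark) and not address.startswith(city):
            return city + address
    return address
```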


@@ -0,0 +1,45 @@
id: PM-2025-004
created_at: '2026-01-13T05:57:03.601645Z'
source_commit: a97fad3
severity: medium
title: Fix non-composited animations to pass the Lighthouse audit
description: Removed animations driven by the width property, which is not GPU-accelerated, fixing the failing non-composited-animations Lighthouse audit. The issue could degrade page performance and hurt the user experience.
root_cause: Some animations used non-GPU-accelerated properties such as width and box-shadow, failing the Lighthouse audit.
triggers:
  files:
    - docs/*.md
    - templates/pages/*.html
    - image*.png
    - '*.png'
  functions: []
  patterns:
    - .*width.*
    - .*box-shadow.*
  keywords:
    - non-composited animations
    - Lighthouse
    - GPU acceleration
    - transform
    - opacity
fix_pattern:
  approach: Remove non-GPU-accelerated animation properties so all animations use only transform and opacity.
  key_changes:
    - Removed the slideProgress and typewriter animation keyframes
    - 'Replaced the pulseGlow box-shadow animation with transform: scale()'
verification:
  - Check that all animations use only the transform and opacity properties
  - Run Lighthouse and confirm the non-composited-animations audit passes
  - Verify page performance improves
related:
  files_changed:
    - '"docs/\347\216\260\345\234\272\345\261\225\346\274\224\350\257\235\346\234\257.md"'
    - image copy 2.png
    - image copy.png
    - image.png
    - templates/pages/home.html
    - '"\347\272\277\344\270\213\350\201\232\344\274\232\346\225\260\346\215\256.png"'
tags:
  - ui
  - performance
  - animation
  - Lighthouse


@@ -0,0 +1,43 @@
id: PM-2025-005
created_at: '2026-01-13T05:57:06.915651Z'
source_commit: b8b64fa
severity: high
title: Fix environment-variable loading in Render deployments
description: On Render, the LLM environment variables were not read, so LLM transport tips could not be generated and the smart recommendation feature was degraded.
root_cause: The code did not read the LLM_API_BASE and LLM_MODEL environment variables first, producing a wrong configuration.
triggers:
  files:
    - app/config.py
    - app/tool/*.py
  functions:
    - Config.__init__
    - _get_llm
  patterns:
    - os.getenv\("LLM_API_BASE"
    - os.getenv\("LLM_MODEL"
  keywords:
    - LLM_API_BASE
    - LLM_MODEL
    - Render
    - 环境变量
fix_pattern:
  approach: Read the LLM configuration from environment variables first and validate the API key during initialization.
  key_changes:
    - Support the LLM_API_BASE and LLM_MODEL environment variables in config.py.
    - Adjust _get_llm to validate the API key.
    - Log the LLM initialization status.
verification:
  - Ensure the LLM_API_BASE and LLM_MODEL environment variables are read correctly.
  - Verify _get_llm initializes the LLM instance correctly.
  - Check that the logs record the LLM initialization status.
  - Test that LLM transport tips are generated when deployed on Render.
related:
  files_changed:
    - app/config.py
    - app/tool/meetspot_recommender.py
tags:
  - llm
  - api
  - environment
  - render
  - deployment


@@ -0,0 +1,36 @@
id: PM-2025-006
created_at: '2026-01-13T05:57:10.607031Z'
source_commit: b6831e5
severity: medium
title: Add place-name labels to map markers
description: Map markers showed no place-name labels, so users could not identify marker locations at a glance — a real problem when many markers cluster together.
root_cause: The map-marker rendering logic did not include place-name labels.
triggers:
  files:
    - app/tool/*.py
  functions:
    - CafeRecommender
  patterns:
    - 'markerContent = `<div style="background-color: \${{color}};'
  keywords:
    - map
    - marker
    - label
    - 位置名称
fix_pattern:
  approach: Add place-name labels to map markers and tune the styling for readability.
  key_changes:
    - Add label rendering for the center-point and user-location markers
    - Tune label styling, including background color, shadow, and font
verification:
  - Check that map markers display their place-name labels correctly
  - Verify the label styling matches the design spec
  - Ensure other markers (e.g. venue markers) are unaffected
related:
  files_changed:
    - app/tool/meetspot_recommender.py
tags:
  - ui
  - map
  - label
  - user experience


@@ -0,0 +1,42 @@
id: PM-2025-007
created_at: '2026-01-13T05:57:14.136672Z'
source_commit: defe545
severity: medium
title: Fix duplicate loading indicators and add Agent branding
description: Two loading indicators ("two spinners") appeared during loading, hurting the user experience; the Agent branding was also missing from the loading text.
root_cause: A toast notification and the inline loading indicator displayed simultaneously, duplicating the animation; the loading text lacked the branding.
triggers:
  files:
    - public/meetspot_finder.html
  functions:
    - renderLoadingIndicator
    - updateLoadingMessage
  patterns:
    - \bloading\b
    - \bspinner\b
    - \btoast\b
  keywords:
    - loading indicator
    - spinner
    - toast notification
    - Agent branding
fix_pattern:
  approach: Remove the duplicate indicator, keep the inline animation, and update the loading text to show the Agent branding and the search keywords dynamically.
  key_changes:
    - Remove the toast-notification loading animation
    - Keep the inline loading indicator
    - Add the Agent branding to the loading text
    - Show the search keywords dynamically in the loading message
verification:
  - Ensure only one loading indicator shows during loading
  - Check the loading text displays the Agent branding correctly
  - Verify the dynamic loading message includes the search keywords
  - Confirm the UI has no visual conflicts while loading
related:
  files_changed:
    - public/meetspot_finder.html
tags:
  - ui
  - branding
  - loading
  - ux


@@ -0,0 +1,41 @@
id: PM-2025-008
created_at: '2026-01-13T05:57:18.566066Z'
source_commit: 0843bfd
severity: medium
title: Add timeout protection to LLM transport tips
description: Generating transport and parking tips had no timeout, so requests could hang and degrade the user experience. The fix adds 15-second timeout protection, keeping requests under Render's 30-second limit.
root_cause: Missing timeout protection let requests run until the 30-second platform timeout.
triggers:
  files:
    - app/tool/*.py
  functions:
    - _llm_generate_transport_tips
    - _generate_default_transport_tips
  patterns:
    - asyncio.wait_for
    - asyncio.TimeoutError
  keywords:
    - 超时
    - timeout
    - LLM
    - 交通建议
    - transport tips
fix_pattern:
  approach: Add 15-second timeout protection and fall back to default tips on expiry.
  key_changes:
    - Wrap the call in asyncio.wait_for with a 15-second timeout
    - Catch asyncio.TimeoutError on expiry
    - Log a timeout warning
    - Call _generate_default_transport_tips to provide default tips
verification:
  - Check that transport tips return within 15 seconds
  - Verify the default tips are used on timeout
  - Confirm the timeout warning is logged
related:
  files_changed:
    - app/tool/meetspot_recommender.py
tags:
  - timeout
  - LLM
  - transport
  - asyncio
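The timeout-with-fallback pattern above can be sketched as follows. The function names echo the postmortem, but the bodies are stand-ins; the demo uses a tiny budget (0.05 s instead of the real 15 s) so it runs fast:

```python
import asyncio

async def llm_generate_transport_tips() -> str:
    # Stand-in for the real LLM call; here it deliberately
    # takes longer than the timeout budget.
    await asyncio.sleep(0.2)
    return "llm tips"

def generate_default_transport_tips() -> str:
    return "default tips"

async def transport_tips(timeout: float = 0.05) -> str:
    try:
        return await asyncio.wait_for(llm_generate_transport_tips(),
                                      timeout=timeout)
    except asyncio.TimeoutError:
        # Degrade gracefully instead of hitting the platform's
        # 30-second request limit.
        return generate_default_transport_tips()

tips = asyncio.run(transport_tips())  # → "default tips" (LLM call timed out)
```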


@@ -0,0 +1,36 @@
id: PM-2025-009
created_at: '2026-01-13T05:57:21.963815Z'
source_commit: cb1784c
severity: high
title: Fix SEO caching to mitigate Render cold starts
description: Render cold starts produced "Couldn't fetch" errors in Google Search Console, hurting the site's SEO. Fixed by extending the cache policy for the sitemap and robots files.
root_cause: sitemap.xml and robots.txt were cached too briefly, so crawler requests during a Render cold start went unanswered.
triggers:
  files:
    - api/index.py
  functions:
    - add_cache_headers
  patterns:
    - max-age=3600
    - max-age=86400, stale-while-revalidate=604800
  keywords:
    - sitemap.xml
    - robots.txt
    - Cache-Control
    - cold start
fix_pattern:
  approach: Increase the cache lifetime and add a stale-while-revalidate policy so cached content is still served during cold starts.
  key_changes:
    - Increase max-age from 3600 to 86400 seconds
    - Add stale-while-revalidate=604800
verification:
  - Confirm the Cache-Control headers on sitemap.xml and robots.txt carry the correct max-age and stale-while-revalidate values
  - Use Google Search Console to verify the crawler can fetch the sitemap and robots files
  - Simulate a Render cold start and confirm the cache policy works
related:
  files_changed:
    - api/index.py
tags:
  - seo
  - cache
  - api
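A framework-neutral sketch of the header change. The real fix lives in the `add_cache_headers` middleware in `api/index.py`; the path-matching helper here is an assumption about its shape, but the directive values come straight from the postmortem:

```python
# Serve SEO files with a day-long cache plus a week of
# stale-while-revalidate, so a CDN can keep answering crawlers
# even while the origin is cold-starting.
SEO_PATHS = ("/sitemap.xml", "/robots.txt")

def add_cache_headers(path: str, headers: dict) -> dict:
    if path in SEO_PATHS:
        headers["Cache-Control"] = (
            "public, max-age=86400, stale-while-revalidate=604800"
        )
    return headers
```

`stale-while-revalidate` is what makes this work during cold starts: the CDN may serve the expired copy immediately while refreshing from the origin in the background.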


@@ -0,0 +1,49 @@
id: PM-2025-010
created_at: '2026-01-13T05:57:29.616214Z'
source_commit: 9ebaacf
severity: high
title: Fix Google Search Console fetch timeouts
description: Render cold starts made Google's crawler time out, producing GSC "Couldn't fetch" errors and hurting the site's SEO.
root_cause: Cold starts lengthened response times so Google's crawler could not fetch the sitemap and robots.txt in time.
triggers:
  files:
    - api/routers/seo_pages.py
    - .github/workflows/keep-alive.yml
  functions:
    - sitemap
    - robots_txt
  patterns:
    - .*sitemap.*
    - .*robots\.txt.*
    - .*Cache-Control.*
    - .*stale-while-revalidate.*
  keywords:
    - sitemap
    - robots.txt
    - Cache-Control
    - stale-while-revalidate
    - Googlebot
    - Render cold start
fix_pattern:
  approach: Add a caching policy, raise the keep-alive frequency, and simulate Googlebot visits to verify the site can be crawled despite cold starts.
  key_changes:
    - Add a long-lived cache policy for sitemap.xml and robots.txt (max-age=86400, stale-while-revalidate=604800)
    - Raise the keep-alive frequency from every 10 minutes to every 5
    - Add a --max-time 30 limit to cold-start requests
    - Simulate Googlebot visits to verify fetchability
verification:
  - Check sitemap.xml and robots.txt carry the correct Cache-Control headers
  - Verify the Cloudflare CDN serves cached content during cold starts
  - Confirm via Google Search Console that crawling works
  - Simulate Googlebot visits and ensure valid responses
  - Check the keep-alive mechanism runs as expected
related:
  files_changed:
    - .github/workflows/keep-alive.yml
    - api/routers/seo_pages.py
tags:
  - seo
  - api
  - performance
  - googlebot
  - cache


@@ -0,0 +1,39 @@
id: PM-2025-011
created_at: '2026-01-13T05:57:33.139741Z'
source_commit: 2456df8
severity: high
title: 'Fix SEO: add explicit /sitemap.xml and /robots.txt routes'
description: Because of the StaticFiles mount path, Google Search Console reported "Sitemap could not be read", blocking crawling and indexing of the site.
root_cause: Incorrect paths for sitemap.xml and robots.txt made them unreachable to search engines.
triggers:
  files:
    - api/index.py
    - public/sitemap.xml
  functions:
    - sitemap
    - robots
  patterns:
    - '@app.api_route\("/sitemap.xml"'
    - '@app.api_route\("/robots.txt"'
  keywords:
    - sitemap
    - robots.txt
    - SEO
    - StaticFiles
fix_pattern:
  approach: Add explicit routes that serve sitemap.xml and robots.txt with the proper HTTP headers.
  key_changes:
    - Add /sitemap.xml and /robots.txt routes in api/index.py
    - Set the correct Content-Type header for both files
verification:
  - Ensure the /sitemap.xml and /robots.txt routes return the correct file contents
  - Verify Google Search Console no longer reports "Sitemap could not be read"
  - Check the lastmod date in sitemap.xml is updated
related:
  files_changed:
    - api/index.py
    - public/sitemap.xml
tags:
  - SEO
  - api
  - routing


@@ -0,0 +1,44 @@
id: PM-2025-012
created_at: '2026-01-13T05:57:38.272426Z'
source_commit: bc3efc7
severity: medium
title: Fix UI distance options and redirect paths
description: Distance options were updated to 5/10/20/30/50 km, and redirect and asset-path problems were fixed. The old behavior degraded the user experience and could break asset loading.
root_cause: The original distance options were poorly chosen, and the redirect and asset paths were misconfigured.
triggers:
  files:
    - app/tool/*.py
    - public/*.html
  functions:
    - CafeRecommender
  patterns:
    - href="/css/
    - href="/public/meetspot_finder.html"
    - src="/js/
  keywords:
    - distance options
    - redirect
    - asset paths
    - window.location.href
fix_pattern:
  approach: Update the distance options and correct the redirect method and asset paths.
  key_changes:
    - Update distance options to 5/10/20/30/50 km
    - Default to 10 km in large cities
    - Use window.location.href instead of window.open
    - Point the back link at the site root
    - Update the CSS and JS asset paths
verification:
  - Verify the distance options display correctly
  - Check the redirect works
  - Confirm assets load from the correct paths
  - Ensure the back link points at the right page
related:
  files_changed:
    - app/tool/meetspot_recommender.py
    - public/meetspot_finder.html
tags:
  - ui
  - redirect
  - resource paths
  - distance options


@@ -0,0 +1,38 @@
id: PM-2025-013
created_at: '2026-01-13T05:57:42.048224Z'
source_commit: 384bf6f
severity: high
title: Fix environment-variable loading so the AI chatbot runs
description: The AI chatbot was stuck showing "配置中" (configuring) because the variables in the .env file were never loaded, leaving key configuration missing.
root_cause: The environment variables in the .env file were not loaded correctly, so configuration was missing.
triggers:
  files:
    - api/*.py
    - web_server.py
  functions:
    - main
  patterns:
    - load_dotenv\(\)
  keywords:
    - .env
    - 环境变量
    - python-dotenv
    - 配置中
fix_pattern:
  approach: Load the environment variables from the .env file with the python-dotenv library.
  key_changes:
    - Add a load_dotenv() call in api/index.py and web_server.py
verification:
  - Ensure all key environment variables in .env load correctly.
  - Verify the AI chatbot no longer shows the "配置中" status.
  - Check that python-dotenv is listed in requirements.txt.
related:
  files_changed:
    - api/index.py
    - requirements.txt
    - web_server.py
tags:
  - environment
  - configuration
  - api
  - chatbot


@@ -0,0 +1,38 @@
id: PM-2025-014
created_at: '2026-01-13T05:57:45.466710Z'
source_commit: 5d694dc
severity: high
title: Support HEAD requests for sitemap.xml and robots.txt
description: Google Search Console reported the sitemap as unfetchable because HEAD requests returned 405 Method Not Allowed, blocking the crawler's file-accessibility checks.
root_cause: The API routes for sitemap.xml and robots.txt did not accept HEAD requests, so crawlers could not verify the files were reachable.
triggers:
  files:
    - api/routers/seo_pages.py
  functions:
    - sitemap
    - robots_txt
  patterns:
    - '@router.api_route\(.*methods=\[.*HEAD.*\]\)'
  keywords:
    - HEAD method
    - sitemap.xml
    - robots.txt
    - Google Search Console
fix_pattern:
  approach: Add HEAD support to the API routes for sitemap.xml and robots.txt.
  key_changes:
    - Replace @router.get with @router.api_route and add methods=["GET", "HEAD"].
    - Ensure HEAD requests return the correct HTTP status code and response headers.
verification:
  - Verify HEAD requests for sitemap.xml and robots.txt return a 200 status code.
  - Check HEAD responses carry Content-Type and the other required headers.
  - Confirm via Google Search Console that the files are reachable again.
related:
  files_changed:
    - api/routers/seo_pages.py
tags:
  - seo
  - api
  - crawler
  - google
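The HEAD semantics being added can be sketched framework-free. The real fix is FastAPI's `@router.api_route(..., methods=["GET", "HEAD"])`; this hypothetical dispatcher just shows the contract a crawler expects — HEAD mirrors GET's status and headers but omits the body:

```python
SITEMAP_XML = '<?xml version="1.0" encoding="UTF-8"?><urlset></urlset>'

def sitemap_endpoint(method: str):
    """Return (status, headers, body) for a request to /sitemap.xml."""
    if method not in ("GET", "HEAD"):
        return 405, {}, b""  # what crawlers were hitting before the fix
    headers = {"Content-Type": "application/xml"}
    # HEAD: same status and headers as GET, empty body.
    body = SITEMAP_XML.encode() if method == "GET" else b""
    return 200, headers, body
```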


@@ -0,0 +1,41 @@
id: PM-2025-015
created_at: '2026-01-13T05:57:50.946672Z'
source_commit: c13279a
severity: high
title: 'Smart city inference: fix short place-name resolution'
description: Short place names entered by users geocoded to the wrong city, corrupting the midpoint calculation and making POI searches fail; the frontend then mishandled the empty data.
root_cause: The Amap (高德) API resolved short place names to the wrong city, missing the user's intent.
triggers:
  files:
    - app/tool/*.py
    - public/*.html
  functions:
    - _geocode
    - _smart_city_inference
  patterns:
    - geocode_result
    - geocode_results
  keywords:
    - 智能城市推断
    - geocode
    - 解析错误
fix_pattern:
  approach: Detect and correct wrong city resolutions with a smart city-inference step; handle successful-but-empty searches in the frontend.
  key_changes:
    - Add smart city inference
    - Handle empty successful searches in the frontend
    - Update the documentation
verification:
  - Verify short place names resolve to the expected city
  - Check the midpoint coordinates are computed correctly
  - Confirm the frontend shows a friendly no-results message
related:
  files_changed:
    - CLAUDE.md
    - app/tool/meetspot_recommender.py
    - public/meetspot_finder.html
tags:
  - geocoding
  - ui
  - api
  - error-handling


@@ -0,0 +1,50 @@
id: PM-2025-016
created_at: '2026-01-13T05:57:54.839321Z'
source_commit: 9c847c1
severity: medium
title: Site-content rewrite for a better user experience
description: Rewrote the site copy to be user-friendly, removed technical jargon, and expanded the FAQ page, improving users' understanding of the product and their overall experience.
root_cause: The site copy was too technical, hurting comprehension and the user experience.
triggers:
  files:
    - api/routers/seo_pages.py
    - templates/pages/about.html
    - templates/pages/faq.html
    - templates/pages/home.html
    - templates/pages/how_it_works.html
  functions:
    - faq_page
  patterns:
    - 移除技术性内容
    - 扩展至10个实用问题
    - 重写为用户导向内容
    - 四步骤卡片式布局
  keywords:
    - 用户体验
    - 技术术语
    - FAQ
    - 内容优化
fix_pattern:
  approach: Rework the copy with the README as reference — drop the technical jargon and add user-friendly descriptions and practical FAQ entries.
  key_changes:
    - Rewrite the home-page copy
    - Expand the FAQ page
    - Rewrite the About page
    - Rework the how-it-works guide layout
verification:
  - Check the home page no longer uses technical jargon
  - Confirm the FAQ page contains 10 questions
  - Ensure the About page content is user-oriented
  - Verify the guide uses a card-style layout
related:
  files_changed:
    - api/routers/seo_pages.py
    - templates/pages/about.html
    - templates/pages/faq.html
    - templates/pages/home.html
    - templates/pages/how_it_works.html
tags:
  - content
  - ui
  - user_experience
  - documentation


@@ -0,0 +1,34 @@
id: PM-2025-017
created_at: '2026-01-13T05:57:58.465498Z'
source_commit: 19f9e93
severity: medium
title: Fix flake8 F823 caused by a variable-name clash
description: A variable-name clash in meetspot_recommender.py triggered flake8 F823, breaking the GitHub Actions CI.
root_cause: A local variable shadowed an imported module name, so it was referenced before assignment.
triggers:
  files:
    - app/tool/*.py
  functions:
    - CafeRecommender
  patterns:
    - local variable 'html' referenced before assignment
  keywords:
    - flake8
    - F823
    - 命名冲突
    - html
fix_pattern:
  approach: Rename the clashing variable so it no longer shadows the imported module.
  key_changes:
    - Rename the variable from `html` to `html_content`
verification:
  - Ensure the variable name no longer clashes with an imported module
  - Run flake8 and confirm there are no F823 errors
  - Verify the GitHub Actions CI passes
related:
  files_changed:
    - app/tool/meetspot_recommender.py
tags:
  - ci
  - linting
  - python
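The shadowing bug is easy to reproduce in miniature. The two `render_page_*` functions below are illustrative, not the project's code: assigning to `html` anywhere in a function makes it local for the *whole* function, so an earlier read of the module fails at runtime:

```python
import html  # stdlib module that the local variable used to shadow

def render_page_buggy(title: str) -> str:
    # BUG (flake8 F823): the assignment below makes `html` a local
    # for the whole function body, so this earlier read raises
    # UnboundLocalError instead of reaching the imported module.
    safe = html.escape(title)
    html = f"<h1>{safe}</h1>"
    return html

def render_page_fixed(title: str) -> str:
    # Fix: rename the local so it no longer shadows the import.
    safe = html.escape(title)
    html_content = f"<h1>{safe}</h1>"
    return html_content
```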


@@ -0,0 +1,44 @@
id: PM-2025-018
created_at: '2026-01-13T05:58:02.063142Z'
source_commit: a34acd4
severity: high
title: 'Bing verification-file access: support HEAD and disable caching'
description: Bing Webmaster Tools verification failed — HEAD requests returned 405, CDN caching made the verification file unreachable, and the robots.txt configuration may have blocked the crawler from fetching it.
root_cause: The verification-file routes did not properly accept HEAD requests and set no anti-cache headers, so verification failed.
triggers:
  files:
    - api/index.py
    - public/robots.txt
  functions:
    - google_verification
    - bing_verification
  patterns:
    - '@app.api_route'
    - Cache-Control
    - robots.txt
  keywords:
    - BingSiteAuth.xml
    - HEAD请求
    - 防缓存
    - robots.txt
fix_pattern:
  approach: Add HEAD support and anti-cache headers to the verification-file routes, and update robots.txt to explicitly allow the verification files to be crawled.
  key_changes:
    - Add HEAD support to the Google and Bing verification routes in api/index.py
    - Add anti-cache headers to the verification-file responses
    - Explicitly Allow the verification-file paths in robots.txt
verification:
  - Verify HEAD requests return 200 OK
  - Check the verification responses carry anti-cache headers
  - Confirm robots.txt allows the verification-file paths
  - Test GET and HEAD requests for the verification files with curl
related:
  files_changed:
    - api/index.py
    - public/robots.txt
tags:
  - api
  - SEO
  - robots.txt
  - CDN
  - validation


@@ -0,0 +1,32 @@
id: PM-2025-019
created_at: '2026-01-13T05:58:04.940805Z'
source_commit: a27c796
severity: medium
title: Fix invalid CSS hover syntax
description: public/index.html contained an invalid CSS hover property that some browsers could not render, breaking the page styling and hurting the user experience.
root_cause: An inline style used an invalid hover property that browsers could not parse.
triggers:
  files:
    - public/index.html
  functions: []
  patterns:
    - hover\s*:\s*[^;]+;
  keywords:
    - CSS
    - hover
    - 样式错误
fix_pattern:
  approach: Remove the invalid inline hover style and ensure the CSS syntax is correct.
  key_changes:
    - Delete the invalid hover property
verification:
  - Check the syntax of all CSS hover rules.
  - Ensure the page renders consistently across browsers.
  - Verify the hover effects match the design.
related:
  files_changed:
    - public/index.html
tags:
  - ui
  - css
  - bugfix


@@ -0,0 +1,39 @@
id: PM-2025-020
created_at: '2026-01-13T05:58:09.424945Z'
source_commit: aba844e
severity: medium
title: Fix mislabeled QR codes
description: The QR-code images and captions were mixed up, so users could not tell the WeChat group code from the payment code, hurting the user experience.
root_cause: The QR-code images and caption text were misconfigured, confusing their purposes.
triggers:
  files:
    - docs/vx_chat.png
    - public/index.html
  functions: []
  patterns:
    - vx_chat.png
    - vx_payment.png
  keywords:
    - 二维码
    - 微信
    - 支付
    - 交流群
fix_pattern:
  approach: Update the QR-code images and captions so each one's purpose is unambiguous.
  key_changes:
    - Make vx_chat.png the WeChat group QR code
    - Make vx_payment.png the buy-me-a-coffee payment code
    - Remove the personal WeChat QR code
    - Polish the support page's copy and icons
verification:
  - Confirm vx_chat.png shows the WeChat group QR code
  - Confirm vx_payment.png shows the payment QR code
  - Check the support page's copy and icons are correct
related:
  files_changed:
    - docs/vx_chat.png
    - public/index.html
tags:
  - ui
  - documentation
  - user-experience


@@ -0,0 +1,33 @@
id: PM-2025-021
created_at: '2026-01-13T05:58:12.599589Z'
source_commit: 355b89e
severity: high
title: Fix access to the Google Search Console verification file
description: The Google Search Console verification file could not be fetched from the site root, so verification timed out and the site's SEO suffered.
root_cause: No dedicated route served the Google Search Console verification file.
triggers:
  files:
    - api/index.py
  functions:
    - google_verification
  patterns:
    - '@app.get\("/google48ac1a797739b7b0.html"\)'
  keywords:
    - Google Search Console
    - 验证文件
    - 超时
fix_pattern:
  approach: Add a dedicated route so the verification file is reachable from the root path.
  key_changes:
    - Add the google_verification handler
    - Register the new route
verification:
  - Confirm google48ac1a797739b7b0.html is reachable from the root path
  - Check the Google Search Console verification status is healthy
related:
  files_changed:
    - api/index.py
tags:
  - seo
  - api
  - verification


@@ -0,0 +1,36 @@
id: PM-2025-022
created_at: '2026-01-13T05:58:16.981664Z'
source_commit: 53c15bb
severity: high
title: Fix token accounting for image responses
description: Token accounting for image responses was wrong, which could lead to incorrect billing or resource allocation and hurt the image-processing module's accuracy.
root_cause: The image-response handling logic did not compute token usage correctly.
triggers:
  files:
    - app/image_processing/*.py
  functions:
    - calculate_token_usage
    - process_image_response
  patterns:
    - calculate\s*\(.*\)
    - token_usage\s*=\s*.*
  keywords:
    - token
    - image response
    - accounting
    - fix
fix_pattern:
  approach: Fix the token-accounting logic in the image-response path so usage figures are accurate.
  key_changes:
    - Correct the formula in calculate_token_usage
    - Update process_image_response to account tokens correctly
verification:
  - Check the computation logic in calculate_token_usage
  - Verify process_image_response computes tokens accurately across different image inputs
  - Ensure token usage is recorded for every image response
related:
  files_changed: []
tags:
  - image processing
  - token accounting
  - bug fix


@@ -0,0 +1,40 @@
id: PM-2025-023
created_at: '2026-01-13T05:58:20.762479Z'
source_commit: a57e614
severity: high
title: 'Fix runtime error: CafeRecommender lacked an execute method'
description: At runtime the CafeRecommender class had no execute method, making the service unusable and breaking the recommendation feature.
root_cause: The missing execute method on CafeRecommender raised an attribute error when called.
triggers:
  files:
    - web_server.py
    - app/__pycache__/*.pyc
  functions:
    - CafeRecommender.execute
  patterns:
    - def execute\(
    - from app.tool.meetspot_recommender import CafeRecommender
  keywords:
    - CafeRecommender
    - execute
    - fallback
    - MockResult
fix_pattern:
  approach: Add an execute method to CafeRecommender and implement a fallback mechanism.
  key_changes:
    - Add an execute method to the CafeRecommender class
    - Fall back when the original recommender call fails
    - Create the fallback_result.html page
verification:
  - Check the CafeRecommender class has an execute method
  - Verify execute falls back to MockResult when the original recommender is unavailable
  - Ensure fallback_result.html renders correctly during fallback
related:
  files_changed:
    - app/__pycache__/__init__.cpython-312.pyc
    - app/__pycache__/exceptions.cpython-312.pyc
    - web_server.py
tags:
  - runtime_error
  - recommendation_system
  - fallback_mechanism


@@ -0,0 +1,49 @@
id: PM-2025-024
created_at: '2026-01-13T05:58:25.760808Z'
source_commit: b481fbc
severity: high
title: Fix Python and YAML syntax errors; clean up workflow files
description: Several Python files and YAML workflow files had syntax errors that broke functionality and destabilized the CI/CD pipeline.
root_cause: Missing module imports and misconfigured workflow files caused syntax errors and broken behavior.
triggers:
  files:
    - app/tool/*.py
    - .github/workflows/*.yml
  functions:
    - meetspot_recommender_fixed.py
    - llm.py
  patterns:
    - import .*
    - f".*"
    - 'workflow: .*'
  keywords:
    - syntax error
    - datetime
    - ChatCompletion
    - workflow
    - f-string
fix_pattern:
  approach: Add the missing module imports, fix the f-string conflicts in the HTML templates, and prune the workflow files down to the valid ones.
  key_changes:
    - Add the datetime import in meetspot_recommender_fixed.py
    - Add the ChatCompletion import in llm.py
    - Fix f-string/JavaScript conflicts in the HTML templates
    - Delete duplicate or broken workflow files, keeping only ci.yml, auto-merge.yml, and update-badges.yml
verification:
  - Ensure all Python files run without syntax errors
  - Verify the HTML templates render correctly
  - Check the workflow files trigger and run normally
  - Confirm the CI/CD pipeline is stable
related:
  files_changed:
    - .github/workflows/auto-merge-clean.yml
    - .github/workflows/auto-merge-dependabot.yml
    - .github/workflows/ci-clean.yml
    - .github/workflows/ci-simple.yml
    - app/tool/meetspot_recommender_fixed.py
tags:
  - syntax
  - ci/cd
  - python
  - yaml
  - workflow


@@ -0,0 +1,42 @@
id: PM-2025-025
created_at: '2026-01-13T05:58:29.615227Z'
source_commit: 810e8f3
severity: high
title: Fix Python syntax errors in llm.py and meetspot_recommender_fixed.py
description: Python syntax errors in llm.py and meetspot_recommender_fixed.py — a missing import and broken f-string syntax — kept the application from running.
root_cause: A required import was missing and f-string syntax was used incorrectly.
triggers:
  files:
    - app/llm.py
    - app/tool/*.py
  functions:
    - window.onload
  patterns:
    - from openai.types.chat import ChatCompletion
    - f"{{.*}}"
  keywords:
    - ChatCompletion
    - f-string
    - JavaScript
    - HTML
fix_pattern:
  approach: Add the missing import, fix the f-string syntax errors, and adjust the comment style.
  key_changes:
    - Add the ChatCompletion import to llm.py
    - Escape literal braces in f-strings as double braces
    - Replace // JavaScript comments with /* */ comments
verification:
  - Ensure all required imports are present
  - Check the f-string syntax is correct
  - Verify the JavaScript comment style is correct
related:
  files_changed:
    - GITHUB_ACTIONS_FINAL_REPORT.md
    - app/llm.py
    - app/tool/meetspot_recommender_fixed.py
tags:
  - syntax
  - import
  - f-string
  - JavaScript
  - HTML
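The double-brace escape is the core of the f-string fix: when a Python f-string emits JavaScript or CSS, any brace meant literally for the output must be doubled, or Python treats it as a placeholder. A small illustration (the variable names are made up for the example):

```python
# Braces destined for the rendered HTML/JS must be doubled inside
# an f-string; single braces are Python format placeholders.
color = "#ff5722"
marker_content = f'<div style="background-color: {color};">{{label}}</div>'
# → '<div style="background-color: #ff5722;">{label}</div>'
```

A common alternative when a template is mostly JavaScript is to avoid f-strings entirely and use `str.replace` or `string.Template`, which leaves the braces alone.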


@@ -0,0 +1,40 @@
id: PM-2025-026
created_at: '2026-01-13T05:58:33.486366Z'
source_commit: b601e97
severity: high
title: Fix GitHub Actions workflow syntax errors
description: Some GitHub Actions workflow files contained invalid YAML, so CI/CD could not run; the automated build and dependency-management flows were affected.
root_cause: Non-conforming YAML in the workflow files failed validation.
triggers:
  files:
    - .github/workflows/*.yml
  functions: []
  patterns:
    - ^.*:\s+\[.*\]$
    - ^.*:\s+\{.*\}$
  keywords:
    - syntax error
    - YAML validation
    - GitHub Actions
fix_pattern:
  approach: Clean up the workflow files — remove the syntax errors and simplify the complex logic.
  key_changes:
    - Remove the broken workflow files, keeping ci-clean.yml and auto-merge-clean.yml
    - Simplify update-badges.yml to avoid complex shell syntax
    - Add GITHUB_ACTIONS_CLEANUP_REPORT.md with the detailed cleanup log
verification:
  - Ensure all workflow files pass YAML validation
  - Verify the CI/CD pipeline runs normally
  - Check the shell logic in update-badges.yml is simplified and executable
  - Confirm GITHUB_ACTIONS_CLEANUP_REPORT.md contains the full cleanup record
related:
  files_changed:
    - .github/workflows/auto-merge-clean.yml
    - .github/workflows/ci-clean.yml
    - .github/workflows/update-badges.yml
    - GITHUB_ACTIONS_CLEANUP_REPORT.md
tags:
  - ci
  - github-actions
  - yaml
  - automation

Some files were not shown because too many files have changed in this diff.