Spaces · Running

owenkaplinsky committed · Commit e8e1842 · 1 Parent(s): ce77b83

Add MCP running

Changed files:
- README.md (+17 -3)
- project/chat.py (+98 -148)
- project/test.py (+0 -1)
README.md
CHANGED
@@ -24,7 +24,17 @@ It works live and responds quickly, keeping your workspace synchronized with eve

 While it can handle a wide range of tasks, multi-step or highly complex changes work best when broken into smaller requests. Taking things one step at a time leads to more accurate and reliable results.

-The File menu handles creating new projects, opening existing ones, and downloading your work. You can download just the generated Python code or the entire project as a JSON file. The Edit menu provides standard undo, redo, and a cleanup button to reorganize blocks. The Examples menu includes pre-built projects you can load to understand how to structure your own.
+The File menu handles creating new projects, opening existing ones, and downloading your work. You can download just the generated Python code or the entire project as a JSON file. The Edit menu provides standard undo, redo, and a cleanup button to reorganize blocks. The Examples menu includes pre-built projects you can load to understand how to structure your own.
+
+### API Keys
+
+The system has two optional but recommended API keys:
+
+**OpenAI API Key**: Required if you want to use the AI Assistant to help build your MCP blocks, or if your blocks include operations that call language models. Without it, you can still create and test blocks manually.
+
+**Hugging Face API Key**: Required if you want to deploy your MCP as a live server on Hugging Face Spaces with the agent. Without it, you can build and test locally but won't be able to share your tool as a live server unless you manually create a Space and upload the generated `app.py` file. When you deploy, the system creates a new Space and uploads your tool. The Space automatically becomes an MCP server that other AI systems can connect to and call natively.
+
+Set these keys through Settings before using features that depend on them. Both are optional: you can build and test tools without either key, but certain features won't be available.

 The toolbox contains blocks for common operations: calling language models, making HTTP requests, extracting data from JSON, manipulating text, performing math, and working with lists. You connect these blocks to build your workflow.

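The README addition above describes two optional keys. A minimal sketch of the degradation behavior it promises, assuming the keys arrive as the environment variables `OPENAI_API_KEY` and `HF_TOKEN` (the app actually collects them through its Settings dialog):

```python
import os

# Hypothetical startup check mirroring the README's "both are optional" rule.
openai_key = os.environ.get("OPENAI_API_KEY")   # assumed variable name
hf_key = os.environ.get("HF_TOKEN")             # assumed variable name

if not openai_key:
    print("No OpenAI key: AI Assistant and LLM blocks unavailable; manual block editing still works.")
if not hf_key:
    print("No Hugging Face key: one-click deployment is off; upload the generated app.py to a Space by hand instead.")
```
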
@@ -61,8 +71,12 @@ Code execution happens in a sandboxed Python environment. User code is executed

 The AI Assistant component is the sophisticated heart of the system. It continuously monitors the current workspace state and code. When you send a message, the system formats your entire block structure into a readable representation and includes it in the context sent to OpenAI. The model receives not just your question but a complete understanding of what you've built. The system includes a detailed system prompt that explains MCP concepts, the block syntax, and what actions the model can perform.

-Based on the model's response, the system recognizes
-
-The
+Based on the model's response, the system recognizes four special commands: run to execute your MCP with sample inputs, delete to remove a block by ID, create to add new blocks to your workspace, and deploy_to_huggingface to publish your tool as a live server. When the model issues these commands, they're executed immediately. For block modifications, the system uses Server-Sent Events to stream commands back to the frontend, which creates or deletes blocks in real time while you watch. This maintains real-time synchronization between the chat interface and the visual editor.
+
+The AI Assistant can execute multiple actions per conversation turn. If the model decides it needs to run your code to see the result before suggesting improvements, it does that automatically. If it needs to delete a broken block and create a replacement, it performs both operations and then reports back with what happened. This looping continues for up to ten consecutive iterations per user message, allowing the AI to progressively refine your blocks without requiring you to send multiple messages.
+
+When you're ready to share your MCP tool, you can deploy it directly to Hugging Face Spaces. Once deployed, the tool becomes a real MCP server that can be called by any AI system supporting the MCP protocol. The AI Assistant in MCP Blockly can immediately use your deployed tool. Just ask it to call your MCP and it will invoke the actual live server.
+
+The agent also monitors your Space's build status. A build typically takes 1-2 minutes. Once the Space reaches RUNNING status, all the tools you defined in your blocks become available for the AI to call natively. If you send a message while the Space is still building, the AI will let you know to wait a moment before your MCP tools become available.

 API keys are managed through environment variables set at runtime. The system uses Gradio to automatically generate user interfaces based on the function signatures in your generated code, creating input and output fields that match your tool's parameters.

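A schematic sketch of the four-command loop described in the added README paragraphs. The stub handlers and the `next_action` callable are illustrative stand-ins, not the actual `chat.py` internals:

```python
# Stub handlers standing in for the real tool implementations in chat.py.
def run_mcp(args): return f"ran MCP with {args}"
def delete_block(args): return f"deleted block {args.get('block_id')}"
def create_blocks(args): return "created blocks (streamed to the editor over SSE)"
def deploy_space(args): return f"deploying Space {args.get('space_name')}"

HANDLERS = {
    "run": run_mcp,
    "delete": delete_block,
    "create": create_blocks,
    "deploy_to_huggingface": deploy_space,
}
MAX_ITERATIONS = 10  # the assistant loops at most ten times per user message

def handle_turn(next_action):
    """next_action() returns (command, payload), or (None, reply) when done."""
    for _ in range(MAX_ITERATIONS):
        command, payload = next_action()
        if command is None:
            return payload                   # plain text reply ends the turn
        result = HANDLERS[command](payload)  # commands are executed immediately
        print(f"[{command}] -> {result}")    # result is fed back to the model
    return "Iteration limit reached"
```
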
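For the Gradio auto-UI behavior mentioned at the end of the README hunk, a minimal sketch of what a generated `app.py` could look like. The `create_mcp` name matches the generated function referenced elsewhere in this commit; the parameters and body here are placeholders:

```python
import gradio as gr

def create_mcp(city: str, units: str = "metric") -> str:
    """Placeholder body; the real function is generated from your blocks."""
    return f"Weather for {city} ({units})"

# Gradio builds one input field per parameter and an output field for the
# return value, so the UI always matches the tool's signature.
demo = gr.Interface(fn=create_mcp, inputs=["text", "text"], outputs="text")

if __name__ == "__main__":
    demo.launch(mcp_server=True)  # also serves the function as an MCP tool
```
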
project/chat.py
CHANGED
@@ -5,7 +5,6 @@ from fastapi import FastAPI, Request
 from fastapi.middleware.cors import CORSMiddleware
 from fastapi.responses import StreamingResponse
 from openai import OpenAI
-import uvicorn
 import gradio as gr
 import asyncio
 import queue

@@ -13,6 +12,7 @@ import json
 import uuid
 import time
 from colorama import Fore, Style
+from huggingface_hub import HfApi

 # Initialize OpenAI client (will be updated when API key is set)
 client = None

@@ -39,6 +39,13 @@ creation_results = {}
 variable_queue = queue.Queue()
 variable_results = {}

+# Global variable to store the deployed HF MCP server URL
+current_mcp_server_url = None
+
+# Global variable to track if a deployment just happened
+deployment_just_happened = False
+deployment_message = ""
+
 blocks_context = ""
 try:
     file_path = os.path.join(os.path.dirname(__file__), "blocks.txt")

@@ -90,125 +97,6 @@ async def set_api_key_chat(request: Request):

     return {"success": True}

-def execute_mcp(mcp_call):
-    """Execute MCP call using the actual Python function from test.py"""
-    global stored_api_key, latest_blockly_chat_code
-
-    if stored_api_key:
-        os.environ["OPENAI_API_KEY"] = stored_api_key
-
-    try:
-        # Now, retrieve the real generated Python code from test.py
-        blockly_code = ""
-        try:
-            resp = requests.get(f"http://127.0.0.1:{os.getenv('PORT', 8080)}/get_latest_code")
-            if resp.ok:
-                blockly_code = resp.json().get("code", "")
-        except Exception as e:
-            print(f"[WARN] Could not fetch real Python code: {e}")
-
-        if not blockly_code.strip():
-            return "No Python code available from test.py"
-
-        # Parse the MCP call arguments
-        match = re.match(r'create_mcp\((.*)\)', mcp_call.strip())
-        if not match:
-            return "Invalid MCP call format"
-
-        params_str = match.group(1)
-        user_inputs = []
-
-        if params_str:
-            import ast
-            try:
-                dict_str = "{" + params_str.replace("=", ":") + "}"
-                param_dict = ast.literal_eval(dict_str)
-                user_inputs = [str(v) for v in param_dict.values()]
-            except Exception:
-                for pair in params_str.split(','):
-                    if '=' in pair:
-                        _, value = pair.split('=', 1)
-                        user_inputs.append(value.strip().strip('"').strip("'"))
-
-        # Prepare to execute
-        result = ""
-        lines = blockly_code.split('\n')
-        filtered_lines = []
-        skip_mode = False
-        in_demo_block = False
-
-        for line in lines:
-            if 'import gradio' in line:
-                continue
-            if 'demo = gr.Interface' in line:
-                in_demo_block = True
-                skip_mode = True
-                continue
-            elif 'demo.launch' in line:
-                skip_mode = False
-                in_demo_block = False
-                continue
-            elif in_demo_block:
-                continue
-            if 'gr.' in line:
-                continue
-            if not skip_mode:
-                filtered_lines.append(line)
-
-        code_to_run = '\n'.join(filtered_lines)
-
-        def capture_result(msg):
-            nonlocal result
-            result = msg
-
-        env = {
-            "reply": capture_result,
-            "__builtins__": __builtins__,
-        }
-
-        exec("import os", env)
-        exec("import requests", env)
-        exec("import json", env)
-
-        exec(code_to_run, env)
-
-        if "create_mcp" in env:
-            import inspect
-            sig = inspect.signature(env["create_mcp"])
-            params = list(sig.parameters.values())
-
-            typed_args = []
-            for i, arg in enumerate(user_inputs):
-                if i >= len(params):
-                    break
-                if arg is None or arg == "":
-                    typed_args.append(None)
-                    continue
-                anno = params[i].annotation
-                try:
-                    if anno == int:
-                        typed_args.append(int(float(arg)))
-                    elif anno == float:
-                        typed_args.append(float(arg))
-                    elif anno == bool:
-                        typed_args.append(str(arg).lower() in ("true", "1"))
-                    elif anno == str or anno == inspect._empty:
-                        typed_args.append(str(arg))
-                    else:
-                        typed_args.append(arg)
-                except Exception:
-                    typed_args.append(arg)
-
-            result = env["create_mcp"](*typed_args)
-
-        return result if result else "No output generated"
-
-    except Exception as e:
-        print(f"[MCP EXECUTION ERROR] {e}")
-        import traceback
-        traceback.print_exc()
-        return f"Error executing MCP: {str(e)}"
-
 def delete_block(block_id):
     """Delete a block from the Blockly workspace"""
     try:

@@ -642,12 +530,9 @@ pinned: false

 # {space_name}

-This is
+This is an MCP server created with [MCP Blockly](https://github.com/owenkaplinsky/mcp-blockly): a visual programming environment for building AI tools.

 The tool has been automatically deployed to Hugging Face Spaces and is ready to use!
-
-## About
-Created using Blockly MCP Builder - a visual programming environment for building AI tools.
 """

     api.upload_file(

@@ -660,6 +545,13 @@ Created using Blockly MCP Builder - a visual programming environment for buildin
         space_url = f"https://huggingface.co/spaces/{repo_id}"
         print(f"[DEPLOY SUCCESS] Space deployed: {space_url}")

+        # Store the MCP server URL globally for native MCP support
+        global current_mcp_server_url, deployment_just_happened, deployment_message
+        current_mcp_server_url = space_url
+        deployment_just_happened = True
+        deployment_message = "Your MCP tool is being built on Hugging Face Spaces. This usually takes 1-2 minutes. Once it's ready, you'll be able to use the MCP tools defined in your blocks."
+        print(f"[MCP] Registered MCP server: {current_mcp_server_url}")
+
         return f"[TOOL] Successfully deployed to Hugging Face Space!\n\n**Space URL:** {space_url}"

     except Exception as e:

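The same `HfApi.get_space_runtime` call used later in this diff can also be used to wait for the build to finish; a small sketch (the polling loop itself is not part of this commit):

```python
import time
from huggingface_hub import HfApi

def wait_until_running(space_id: str, timeout: float = 300.0) -> bool:
    """Poll a Space until it reaches RUNNING; builds typically take 1-2 minutes."""
    api = HfApi()
    deadline = time.time() + timeout
    while time.time() < deadline:
        if api.get_space_runtime(space_id).stage == "RUNNING":
            return True
        time.sleep(10)  # back off between status checks
    return False
```
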
@@ -693,8 +585,15 @@ explain your intended plan and the steps you will take, then perform the tool ca

 ---

-###
-
+### Using Your MCP
+Once you deploy your MCP to a Hugging Face Space, the model will automatically have access to all the tools you defined. Simply ask the model to use your MCP tools, and it will call them natively without manual intervention.
+
+**Deployment workflow:**
+1. Create and test your MCP using Blockly blocks
+2. Deploy to a Hugging Face Space using the `deploy_to_huggingface` tool
+3. After deployment, the MCP tool becomes immediately available in this chat
+4. You may call this tool afterwards as needed. Do not immediately run the MCP
+server after deploying it. The user must ask (you can ask if they want it)

 ---

@@ -785,6 +684,12 @@ Once the user has tested and is happy with their MCP tool, you can deploy it to
 3. The tool will create a new Space, upload the code, and return a live URL

 The deployed Space will be public and shareable with others.
+
+You NEVER need to deploy it more than once. If you deployed it, you can run it as many times as you want WITHOUT deploying again.
+
+---
+
+Note: Users can see tool response outputs verbatim. You don't have to repeat the tool response unless you want to.
 """

 tools = [

@@ -837,17 +742,6 @@ The deployed Space will be public and shareable with others.
             "required": ["name"],
         }
     },
-    {
-        "type": "function",
-        "name": "run_mcp",
-        "description": "Runs the MCP with the given inputs. Create one parameter for each input that the user-created MCP allows.",
-        "parameters": {
-            "type": "object",
-            "properties": {},
-            "required": [],
-            "additionalProperties": True
-        }
-    },
     {
         "type": "function",
         "name": "deploy_to_huggingface",

@@ -924,12 +818,77 @@ The deployed Space will be public and shareable with others.
         current_iteration += 1

         try:
+            # Build dynamic tools list with MCP support
+            dynamic_tools = tools.copy() if tools else []
+
+            # Inject MCP tool if a server is registered
+            global current_mcp_server_url, deployment_just_happened, deployment_message
+            space_building_status = None  # Track if space is building
+            if current_mcp_server_url:
+                mcp_injection_successful = False
+                try:
+                    # Try to verify the MCP server is available before injecting
+                    space_is_running = False
+                    try:
+                        # Extract username and space name from URL
+                        # URL format: https://huggingface.co/spaces/username/space_name
+                        url_parts = current_mcp_server_url.split("/spaces/")
+                        if len(url_parts) == 2:
+                            space_id = url_parts[1]
+                            api = HfApi()
+                            runtime_info = api.get_space_runtime(space_id)
+                            print(f"[MCP] Space runtime status: {runtime_info}")
+                            # Check if space is running
+                            if runtime_info and runtime_info.stage == "RUNNING":
+                                space_is_running = True
+                                # Space is running - deployment is complete
+                                deployment_just_happened = False
+                                print(f"[MCP] Space is RUNNING")
+                            else:
+                                # Space is not running - it's likely building
+                                space_building_status = runtime_info.stage if runtime_info else "unknown"
+                                print(f"[MCP] Space is not running yet (stage: {space_building_status})")
+                    except Exception as check_error:
+                        print(f"[MCP] Could not verify space runtime: {check_error}")
+
+                    # Only inject the MCP tool if the space is verified running
+                    if space_is_running:
+                        def convert_repo_to_live_mcp(url):
+                            # input: https://huggingface.co/spaces/user/space
+                            # output: https://user-space.hf.space/gradio_api/mcp/sse
+                            parts = url.split("/spaces/")
+                            user, space = parts[1].split("/")
+                            return f"https://{user}-{space}.hf.space/gradio_api/mcp/sse"
+
+                        live_mcp_url = convert_repo_to_live_mcp(current_mcp_server_url)
+
+                        mcp_tool = {
+                            "type": "mcp",
+                            "server_url": live_mcp_url,
+                            "server_label": "user_mcp_server",
+                            "require_approval": "never"
+                        }
+                        dynamic_tools.append(mcp_tool)
+                        print(f"[MCP] Injected MCP tool for server: {current_mcp_server_url}")
+                    else:
+                        print(f"[MCP] Skipping MCP tool injection - space not running yet")
+                except Exception as mcp_error:
+                    print(f"[MCP ERROR] Failed during MCP injection: {mcp_error}")
+                    print(f"[MCP] Continuing without MCP tools...")
+                    # Continue without MCP - don't crash
+
+            # Add deployment status message to instructions if deployment just happened and space is not running
+            deployment_instructions = instructions
+            if deployment_just_happened and space_building_status and space_building_status != "RUNNING":
+                deployment_instructions = instructions + f"\n\n**MCP DEPLOYMENT STATUS:** {deployment_message}"
+
             # Create Responses API call
             response = client.responses.create(
                 model="gpt-4o",
-                instructions=
+                instructions=deployment_instructions,
                 input=temp_input_items + [{"role": "user", "content": current_prompt}],
-                tools=
+                tools=dynamic_tools,
                 tool_choice="auto"
             )

@@ -1003,15 +962,6 @@ The deployed Space will be public and shareable with others.
                 tool_result = create_variable(name)
                 result_label = "Create Var Operation"

-            elif function_name == "run_mcp":
-                params = []
-                for key, value in function_args.items():
-                    params.append(f"{key}=\"{value}\"")
-                mcp_call = f"create_mcp({', '.join(params)})"
-                print(Fore.YELLOW + f"Agent ran MCP with inputs: {mcp_call}." + Style.RESET_ALL)
-                tool_result = execute_mcp(mcp_call)
-                result_label = "MCP Execution Result"
-
             elif function_name == "deploy_to_huggingface":
                 space_name = function_args.get("space_name", "")
                 print(Fore.YELLOW + f"Agent deploying to Hugging Face Space `{space_name}`." + Style.RESET_ALL)
project/test.py
CHANGED
@@ -1,7 +1,6 @@
 from fastapi import FastAPI, Request
 from fastapi.middleware.cors import CORSMiddleware
 import gradio as gr
-import uvicorn
 import os
 import ast