Python Code Generation

Generate production-ready Python code from your visual workflows with asyncio support and comprehensive helper libraries.

Installation

Requirements

  • Python 3.9+
  • Package manager: pip
  • PyExecJS for Code nodes

pip install -r requirements.txt
# For Code nodes, requirements.txt includes:
# PyExecJS>=1.5.1  # JavaScript execution

Environment Variables

export OPENAI_API_KEY=your_openai_api_key_here
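
If you prefer to fail fast when configuration is missing, a minimal startup check (assuming the AI helpers read OPENAI_API_KEY from the environment, as the export above suggests) looks like this:

import os

# Verify the key is present before running any workflow that calls call_ai().
if not os.environ.get('OPENAI_API_KEY'):
    raise RuntimeError('OPENAI_API_KEY is not set')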

Usage Example

import asyncio
from typing import Optional

from circuitry import http_request, call_ai, execute_javascript
from circuitry_ui import show_message, confirm_action

# Generated function (example)
async def my_workflow(input_data: Optional[dict] = None) -> dict:
    data = dict(input_data or {})
    
    # HTTP request - merge the fetched data into the working dict
    response = await http_request('https://api.example.com/data', method='GET')
    data.update(response)
    
    # User confirmation
    confirmation = await confirm_action(f"Process {len(data.get('items', []))} items?")
    if not confirmation['confirmed']:
        return {'cancelled': True}
    
    # AI processing
    analysis = await call_ai(f'Analyze this data: {data}', model='gpt-4o-mini')
    data['analysis'] = analysis['response']
    
    # Code node - execute JavaScript
    js_code = """
    data.processed = data.items.map(item => ({
        ...item,
        processed: true,
        timestamp: new Date().toISOString(),
        score: Math.random() * 100
    }));
    """
    data = await execute_javascript(js_code, data)
    
    # Show completion message
    await show_message(f"Processed {len(data.get('processed', []))} items successfully", 'success')
    
    return data

# Use in your application
if __name__ == '__main__':
    result = asyncio.run(my_workflow({'items': [...]}))
    print(result)

Code Nodes

Generated Python runs Code nodes through execute_javascript(), which passes the current data to the node's JavaScript and returns the result:

# JavaScript code in Code node
js_code = """
data.processed = data.items.map(item => ({
    ...item,
    processed: true,
    timestamp: new Date().toISOString()
}));
"""

# Execute JavaScript with Python data
result = await execute_javascript(js_code, data)
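
Under the hood this relies on PyExecJS, the dependency listed in the requirements. The actual implementation ships with circuitry; the sketch below is only an illustration of how such a wrapper could be built, and the name execute_javascript_sketch and the function-wrapping approach are assumptions, not circuitry's code:

import asyncio
import execjs  # provided by the PyExecJS package

async def execute_javascript_sketch(js_code: str, data: dict) -> dict:
    # Wrap the node's code in a function that receives `data` and returns it,
    # so mutations made by the JavaScript flow back to Python as JSON.
    wrapped = 'function run(data) {\n' + js_code + '\nreturn data;\n}'
    ctx = execjs.compile(wrapped)
    # PyExecJS is synchronous; run it in a worker thread to keep the event loop free.
    return await asyncio.to_thread(ctx.call, 'run', data)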

Parallel Execution

Fork/join nodes use asyncio.gather():

import asyncio

# Fork node - parallel execution
async def parallel_branches(data):
    # Branch 1: API call
    async def branch1():
        return await http_request('https://api1.example.com/data')
    
    # Branch 2: AI processing
    async def branch2():
        return await call_ai('Process data', 'gpt-4o-mini')
    
    # Execute in parallel
    result1, result2 = await asyncio.gather(branch1(), branch2())
    
    # Join results
    return {
        **result1,
        'analysis': result2['response']
    }
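
If one branch may fail without invalidating the other, standard asyncio offers return_exceptions=True. The tolerant variant below is a sketch built on the same helpers, not a separate circuitry feature:

async def parallel_branches_tolerant(data):
    # A failure in one branch no longer cancels or hides the other.
    api_result, ai_result = await asyncio.gather(
        http_request('https://api1.example.com/data'),
        call_ai('Process data', 'gpt-4o-mini'),
        return_exceptions=True,
    )
    joined = dict(data)
    if not isinstance(api_result, Exception):
        joined.update(api_result)
    if not isinstance(ai_result, Exception):
        joined['analysis'] = ai_result['response']
    return joined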

Helper Functions

HTTP Requests

# GET request
data = await http_request('https://api.example.com/users')

# POST with data
result = await http_request(
    'https://api.example.com/users',
    method='POST',
    json={'name': 'John', 'email': 'john@example.com'},
    headers={'Authorization': 'Bearer token'}
)

AI Calls

# Simple AI call
response = await call_ai('Summarize this text', 'gpt-4o-mini')

# Advanced AI call with options
analysis = await call_ai(
    'Analyze sentiment', 
    'gpt-4',
    temperature=0.7,
    max_tokens=500
)

Deployment Options

FastAPI Server

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class WorkflowInput(BaseModel):
    data: dict

@app.post("/workflow")
async def run_workflow(input_data: WorkflowInput):
    try:
        result = await my_workflow(input_data.data)
        return {"success": True, "result": result}
    except Exception as e:
        return {"success": False, "error": str(e)}
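
To serve the endpoint locally you need an ASGI server. Assuming uvicorn is installed alongside FastAPI, the app can be started from the same file:

# Local development entry point (assumes uvicorn is in requirements.txt).
if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

POST JSON with a {"data": {...}} body to http://localhost:8000/workflow to trigger the workflow.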

Docker Container

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY workflow.py circuitry.py circuitry_ui.py ./
CMD ["python", "workflow.py"]