From Zero to Production: Building AI Workflows Quickly with the Lanyun MaaS API


Abstract

With the rapid development of artificial intelligence, Model-as-a-Service (MaaS) platforms are becoming an important way for enterprises and developers to integrate AI capabilities quickly. This article walks through using the Lanyun MaaS platform from scratch, with detailed code examples and hands-on cases that help readers master API invocation and build efficient AI workflows. It covers the platform's core features, text model integration, vision model applications, and automated workflow construction, providing a one-stop learning guide for readers of different technical backgrounds.

1. MaaS Platform Overview and Technical Advantages

1.1 What Is a MaaS Platform?

Model-as-a-Service (MaaS) is a cloud-computing service model that exposes pre-trained machine learning models through API interfaces, giving developers access to high-quality AI capabilities without training models from scratch. The Lanyun MaaS platform targets enterprise developers, entrepreneurs, and non-technical users, offering ready-to-use services for popular AI models.

Core value of a MaaS platform

  • Lower technical barrier: use advanced AI without a deep machine-learning background
  • Reduced development cost: avoid expensive model training and hardware investment
  • Faster product iteration: integrate AI features quickly and shorten development cycles
  • Flexible scaling: elastically adjust compute resources to business demand

1.2 Lanyun MaaS Platform Architecture

The Lanyun MaaS platform uses a microservice architecture; its main components are:

  • Access layer: client applications, API gateway, load balancing
  • Platform services: authentication service, billing service
  • Model service clusters:
    • Text model cluster: DeepSeek-series models, Tongyi (Qwen)-series models
    • Vision model cluster: image generation models, video generation models
    • Audio model cluster: speech recognition models, speech synthesis models
  • Storage layer: metadata storage, file storage, log storage

1.3 Core Platform Features

According to the product documentation, the Lanyun MaaS platform has the following characteristics:

  • Model variety: many types of machine learning models to fit different business needs
  • Ease of use: models are accessible through simple API calls or a chat interface
  • Scalability: model services can be extended or updated quickly as the business evolves
  • Performance guarantees: high-performance compute resources ensure efficient, stable model execution
  • Data privacy and security: user data is protected throughout usage

2. Environment Setup and Platform Onboarding

2.1 Registration and Verification (https://console.lanyun.net/#/register)

To use the Lanyun MaaS platform, first complete registration and verification:

  1. Visit the platform: open the official Lanyun console website
  2. Register an account: fill in basic information to create an account
  3. Verify identity: complete email or phone verification
  4. Read the agreements: accept the platform's terms of service and API usage agreement

2.2 Obtaining an API Key

An API key is the credential for accessing platform services. In practice, keys are created through the web console. The steps are:

  1. Go to API Platform > Access Management
  2. Click the Create API KEY button
  3. Confirm or change the API key name in the dialog
  4. Click Create
  5. Store the API key securely (never hard-code it in source code)
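
The "store it securely, never hard-code it" advice from step 5 can be sketched in two small helpers: read the key from an environment variable and fail fast if it is absent, and attach it to raw HTTP requests. This is a minimal sketch; the Bearer-token header scheme matches what the video API examples later in this article use.

```python
import os

def load_api_key(env_var: str = "LANYUN_API_KEY") -> str:
    """Read the API key from an environment variable; fail fast if it is missing."""
    key = os.getenv(env_var)
    if not key:
        raise RuntimeError(f"Environment variable {env_var} is not set")
    return key

def auth_headers(api_key: str) -> dict:
    """Build HTTP headers for raw requests (Bearer-token auth)."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```

Failing fast at startup produces a clear error instead of an opaque 401 from the first API call.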

2.3 Environment Configuration

Set up the development environment for your language of choice:

Python environment
# Install the required Python packages
# requirements.txt:
# openai>=1.0.0
# requests>=2.28.0
# python-dotenv>=0.19.0

import os
from dotenv import load_dotenv
import openai

# Load environment variables
load_dotenv()

# Configure the Lanyun MaaS client
def setup_maas_client():
    """
    Configure and return a Lanyun MaaS client instance
    """
    # Read the API key from an environment variable (recommended practice)
    api_key = os.getenv("LANYUN_API_KEY")
    if not api_key:
        raise ValueError("Please set the LANYUN_API_KEY environment variable")
    
    # Configure an OpenAI client (compatible interface)
    client = openai.OpenAI(
        api_key=api_key,
        base_url="https://maas-api.lanyun.net"  # Lanyun MaaS API address
    )
    
    return client

# Test the connection
def test_connection(client):
    """
    Test the connection to the API server
    """
    try:
        # List available models to verify connectivity
        models = client.models.list()
        print("Connection succeeded! Number of available models:", len(models.data))
        return True
    except Exception as e:
        print(f"Connection failed: {e}")
        return False
Node.js environment
// package.json dependencies
// {
//   "dependencies": {
//     "openai": "^4.0.0"
//   }
// }

const OpenAI = require('openai');

// Configure the MaaS client
function setupMaaSClient() {
    const apiKey = process.env.LANYUN_API_KEY;
    if (!apiKey) {
        throw new Error('Please set the LANYUN_API_KEY environment variable');
    }

    const client = new OpenAI({
        apiKey: apiKey,
        baseURL: 'https://maas-api.lanyun.net'
    });

    return client;
}

// Test the connection
async function testConnection(client) {
    try {
        const models = await client.models.list();
        console.log(`Connection succeeded! Number of available models: ${models.data.length}`);
        return true;
    } catch (error) {
        console.error('Connection failed:', error.message);
        return false;
    }
}

module.exports = { setupMaaSClient, testConnection };
Environment variables

Create a .env file to manage sensitive information:

# Example .env file
LANYUN_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
DEFAULT_MODEL=/maas/deepseek-ai/DeepSeek-V3-0324
LOG_LEVEL=INFO
API_BASE_URL=https://maas-api.lanyun.net
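
A small startup check can verify that the variables from the .env example above are actually present before any API call is made. This is a sketch; the variable names simply mirror the sample file.

```python
import os

# Variables the examples in this article rely on (from the .env sample above)
REQUIRED_VARS = ["LANYUN_API_KEY", "API_BASE_URL"]

def missing_env_vars(env=None, required=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    required = REQUIRED_VARS if required is None else required
    return [name for name in required if not env.get(name)]
```

Calling `missing_env_vars()` right after `load_dotenv()` turns a misconfigured environment into a single readable error list.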

2.4 Permissions and Resource Management

Understand the platform's resource allocation and quota mechanisms:

class ResourceManager:
    """Resource manager for monitoring and tracking API usage"""
    
    def __init__(self, client):
        self.client = client
        self.usage_stats = {
            'total_tokens': 0,
            'prompt_tokens': 0,
            'completion_tokens': 0,
            'api_calls': 0
        }
    
    def check_remaining_tokens(self):
        """
        Check the remaining token quota
        Returns: the number of remaining tokens
        """
        # Note: a real implementation would call the platform's quota-query API;
        # this is a simulated implementation
        total_free_tokens = 5000000  # free tokens granted by the platform
        remaining = total_free_tokens - self.usage_stats['total_tokens']
        print(f"Remaining token quota: {remaining:,}")
        return remaining
    
    def update_usage(self, usage_info):
        """
        Update the usage statistics
        """
        self.usage_stats['total_tokens'] += usage_info.get('total_tokens', 0)
        self.usage_stats['prompt_tokens'] += usage_info.get('prompt_tokens', 0)
        self.usage_stats['completion_tokens'] += usage_info.get('completion_tokens', 0)
        self.usage_stats['api_calls'] += 1
        
    def print_usage_report(self):
        """
        Print a usage report
        """
        print("\n=== API Usage Report ===")
        print(f"Total API calls: {self.usage_stats['api_calls']}")
        print(f"Total tokens used: {self.usage_stats['total_tokens']:,}")
        print(f"Prompt tokens used: {self.usage_stats['prompt_tokens']:,}")
        print(f"Completion tokens used: {self.usage_stats['completion_tokens']:,}")
        
        # Estimated cost (assuming $0.002 per 1,000 tokens)
        estimated_cost = (self.usage_stats['total_tokens'] / 1000) * 0.002
        print(f"Estimated cost: ${estimated_cost:.4f}")

3. Text Model APIs: Details and Practice

3.1 Text Model Overview

The Lanyun MaaS platform offers several text-generation models, including:

| Model series | Model name | API model name | Characteristics |
|---|---|---|---|
| DeepSeek-R1 | DeepSeek-R1-0528 | /maas/deepseek-ai/DeepSeek-R1-0528 | General-purpose chat model |
| DeepSeek-V3 | DeepSeek-V3-0324 | /maas/deepseek-ai/DeepSeek-V3-0324 | Enhanced chat model |
| DeepSeek distilled | DeepSeek-R1-Distill-Qwen-1.5B | /maas/deepseek-r1:1.5b | Lightweight and efficient |
| DeepSeek distilled | DeepSeek-R1-Distill-Qwen-7B | /maas/deepseek-r1:7b | Balances performance and efficiency |
| Tongyi (Qwen) series | QwQ-32B | /maas/qwen/QwQ-32B | Alibaba Tongyi large model |
| Tongyi (Qwen) series | Qwen2.5-72B | /maas/qwen/Qwen2.5-72B-Instruct | Latest Tongyi large model |
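
The API model names in the table are passed verbatim as the `model` field of a chat-completions request. A small helper makes that mapping explicit (a sketch; the default system prompt is just the one used in the examples that follow):

```python
def build_chat_payload(model, user_prompt,
                       system_prompt="You are a helpful AI assistant.",
                       **params):
    """Assemble a chat-completions request body for a model from the table above."""
    payload = {
        "model": model,  # e.g. "/maas/deepseek-ai/DeepSeek-V3-0324"
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
    payload.update(params)  # max_tokens, temperature, stream, ...
    return payload
```

Switching models is then a one-string change, which is useful when comparing, say, the 1.5B distilled model against the full DeepSeek-V3.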


3.2 Basic Text Generation

Use the Chat Completions API for basic text generation:

def basic_text_generation(client, prompt, model=None):
    """
    Basic text generation example
    """
    if model is None:
        model = os.getenv("DEFAULT_MODEL", "/maas/deepseek-ai/DeepSeek-V3-0324")
    
    try:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": "You are a helpful AI assistant."},
                {"role": "user", "content": prompt}
            ],
            max_tokens=500,
            temperature=0.7,
            stream=False
        )
        
        # Extract the reply content
        completion = response.choices[0].message.content
        # model_dump() replaces the deprecated .dict() in pydantic v2 (used by openai>=1.0)
        usage = response.usage.model_dump() if getattr(response, "usage", None) else {}
        
        return {
            "success": True,
            "content": completion,
            "usage": usage
        }
        
    except Exception as e:
        return {
            "success": False,
            "error": str(e)
        }

# Usage example
if __name__ == "__main__":
    client = setup_maas_client()
    result = basic_text_generation(
        client, 
        "Explain overfitting in machine learning and how to avoid it."
    )
    
    if result["success"]:
        print("AI reply:", result["content"])
        print("Token usage:", result["usage"])
    else:
        print("Request failed:", result["error"])

3.3 Streaming Text Generation

When generating long text, the streaming interface provides a better user experience:

def stream_text_generation(client, prompt, model=None, callback=None):
    """
    Streaming text generation example
    """
    if model is None:
        model = os.getenv("DEFAULT_MODEL", "/maas/deepseek-ai/DeepSeek-V3-0324")
    
    try:
        stream = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": "You are a helpful AI assistant."},
                {"role": "user", "content": prompt}
            ],
            max_tokens=1000,
            temperature=0.7,
            stream=True
        )
        
        full_response = ""
        for chunk in stream:
            if chunk.choices[0].delta.content is not None:
                content = chunk.choices[0].delta.content
                full_response += content
                
                # If a callback is provided, emit content in real time
                if callback:
                    callback(content)
        
        return {
            "success": True,
            "content": full_response
        }
        
    except Exception as e:
        return {
            "success": False,
            "error": str(e)
        }

# Usage example
def print_chunk(content):
    """Print streamed output in real time"""
    print(content, end="", flush=True)

if __name__ == "__main__":
    client = setup_maas_client()
    print("The AI is thinking...")
    result = stream_text_generation(
        client,
        "Write a short essay of at least 500 words on the future of artificial intelligence.",
        callback=print_chunk
    )

3.4 Advanced Parameter Configuration

Take finer-grained control of the text generation process:

def advanced_text_generation(client, prompt, model=None, **kwargs):
    """
    Advanced text generation with richer parameter control
    """
    if model is None:
        model = os.getenv("DEFAULT_MODEL", "/maas/deepseek-ai/DeepSeek-V3-0324")
    
    # Default parameters
    params = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful AI assistant."},
            {"role": "user", "content": prompt}
        ],
        "max_tokens": kwargs.get("max_tokens", 500),
        "temperature": kwargs.get("temperature", 0.7),
        "top_p": kwargs.get("top_p", 0.9),
        "frequency_penalty": kwargs.get("frequency_penalty", 0),
        "presence_penalty": kwargs.get("presence_penalty", 0),
        "stop": kwargs.get("stop", None),
        "stream": kwargs.get("stream", False)
    }
    
    # Remove parameters whose value is None
    params = {k: v for k, v in params.items() if v is not None}
    
    try:
        response = client.chat.completions.create(**params)
        
        if params["stream"]:
            # Handle a streaming response
            full_content = ""
            for chunk in response:
                if chunk.choices[0].delta.content is not None:
                    full_content += chunk.choices[0].delta.content
            content = full_content
        else:
            # Handle a non-streaming response
            content = response.choices[0].message.content
        
        # model_dump() replaces the deprecated .dict() in pydantic v2
        usage = response.usage.model_dump() if getattr(response, "usage", None) else {}
        
        return {
            "success": True,
            "content": content,
            "usage": usage
        }
        
    except Exception as e:
        return {
            "success": False,
            "error": str(e)
        }

# Usage example
if __name__ == "__main__":
    client = setup_maas_client()
    
    # Use advanced parameters
    result = advanced_text_generation(
        client,
        "Compose a poem about spring.",
        temperature=0.9,  # more creative
        max_tokens=200,
        frequency_penalty=0.5,  # reduce repetition
        presence_penalty=0.3,   # encourage new topics
        stop=["\n\n"]  # stop sequence
    )
    
    if result["success"]:
        print("Generated poem:")
        print(result["content"])
    else:
        print("Error:", result["error"])

3.5 Function Calling

Use function calling to build smarter applications:

import json

def function_calling_example(client):
    """
    Function-calling example: let the AI decide which functions to call
    """
    # Define the available functions
    # (this uses the legacy `functions`/`function_call` parameters;
    # newer OpenAI-compatible APIs also offer `tools`/`tool_choice`)
    functions = [
        {
            "name": "get_weather",
            "description": "Get weather information for a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g. Beijing or Shanghai"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        },
        {
            "name": "get_stock_price",
            "description": "Get the current price of a stock",
            "parameters": {
                "type": "object",
                "properties": {
                    "symbol": {
                        "type": "string",
                        "description": "Stock ticker, e.g. AAPL, MSFT"
                    }
                },
                "required": ["symbol"]
            }
        }
    ]
    
    # User query
    user_query = "What is the weather in Beijing today? Then check Apple's stock price."
    
    try:
        # Step 1: let the AI decide whether to call a function, and which one
        response = client.chat.completions.create(
            model="/maas/deepseek-ai/DeepSeek-V3-0324",
            messages=[{"role": "user", "content": user_query}],
            functions=functions,
            function_call="auto"  # let the model decide
        )
        
        response_message = response.choices[0].message
        
        # Check whether a function call is requested
        if response_message.function_call:
            function_name = response_message.function_call.name
            function_args = json.loads(response_message.function_call.arguments)
            
            print(f"The AI chose to call: {function_name}")
            print(f"Arguments: {function_args}")
            
            # The actual function invocation could go here, e.g.:
            # if function_name == "get_weather":
            #     result = get_weather(**function_args)
            # elif function_name == "get_stock_price":
            #     result = get_stock_price(**function_args)
            
            # The result would then be sent back to the AI for further processing
            
        else:
            print("Direct AI reply:", response_message.content)
            
    except Exception as e:
        print(f"Error: {e}")

# Example implementations of the callable functions
def get_weather(location, unit="celsius"):
    """
    Simulated weather lookup
    """
    # A real application would call a weather API here
    weather_data = {
        "location": location,
        "temperature": 22 if unit == "celsius" else 72,
        "condition": "sunny",
        "humidity": 45,
        "unit": unit
    }
    return json.dumps(weather_data, ensure_ascii=False)

def get_stock_price(symbol):
    """
    Simulated stock price lookup
    """
    # A real application would call a financial data API here
    import random
    price_data = {
        "symbol": symbol,
        "price": round(random.uniform(100, 200), 2),
        "currency": "USD",
        "change": round(random.uniform(-5, 5), 2)
    }
    return json.dumps(price_data, ensure_ascii=False)

3.6 Batch Processing and Efficiency

Efficient techniques for processing large volumes of text:

import asyncio
import os
from openai import AsyncOpenAI

class BatchProcessor:
    """Batch text processing"""
    
    def __init__(self, api_key):
        self.client = AsyncOpenAI(
            api_key=api_key,
            base_url="https://maas-api.lanyun.net"
        )
        self.semaphore = asyncio.Semaphore(10)  # limit concurrency
    
    async def process_single(self, prompt, model=None, **kwargs):
        """Process a single prompt"""
        async with self.semaphore:
            if model is None:
                model = os.getenv("DEFAULT_MODEL")
            
            try:
                response = await self.client.chat.completions.create(
                    model=model,
                    messages=[{"role": "user", "content": prompt}],
                    **kwargs
                )
                return {
                    "success": True,
                    "content": response.choices[0].message.content,
                    "usage": response.usage.model_dump()
                }
            except Exception as e:
                return {
                    "success": False,
                    "error": str(e),
                    "prompt": prompt
                }
    
    async def process_batch(self, prompts, model=None, **kwargs):
        """Process multiple prompts as a batch"""
        tasks = []
        for prompt in prompts:
            task = self.process_single(prompt, model, **kwargs)
            tasks.append(task)
        
        results = await asyncio.gather(*tasks, return_exceptions=True)
        return results

# Usage example
async def main():
    processor = BatchProcessor(os.getenv("LANYUN_API_KEY"))
    
    # Prepare a batch of prompts
    prompts = [
        "Summarize the main types of machine learning",
        "Explain the basic principles of neural networks",
        "What is deep learning?",
        "What is the difference between supervised and unsupervised learning?"
    ]
    
    results = await processor.process_batch(
        prompts,
        max_tokens=200,
        temperature=0.3
    )
    
    # Handle the results
    successful = 0
    for i, result in enumerate(results):
        if isinstance(result, Exception):
            print(f"Prompt {i} raised an error: {result}")
        elif result["success"]:
            print(f"Prompt {i}: {result['content'][:100]}...")
            successful += 1
        else:
            print(f"Prompt {i} failed: {result['error']}")
    
    print(f"\nProcessed successfully: {successful}/{len(prompts)}")

# Run the batch job
if __name__ == "__main__":
    asyncio.run(main())

4. Vision Model APIs in Practice

4.1 Vision Model Overview

According to the model-list documentation, the Lanyun MaaS platform offers several visual generation models:

| Model type | Model name | API model name | Description |
|---|---|---|---|
| Image-to-video | I2V-01 | I2V-01 | Generate a video from an image |
| Image-to-video | I2V-01-Director | I2V-01-Director | Image-to-video with camera-movement control |
| Image-to-video | I2V-01-live | I2V-01-live | Live-style image-to-video |
| Text-to-video | T2V-01 | T2V-01 | Generate a video from text |
| Text-to-video | T2V-01-Director | T2V-01-Director | Text-to-video with camera-movement control |

4.2 The Video Generation API in Detail

Video generation uses an asynchronous task model with three main steps: create the task, poll its status, and download the result.

import json
import os
import time

import requests

class VideoGenerator:
    """Video generation helper"""
    
    def __init__(self, api_key):
        self.api_key = api_key
        self.base_url = "https://maas-api.lanyun.net"
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }
    
    def create_video_generation_task(self, model, prompt, first_frame_image=None, 
                                   prompt_optimizer=True, subject_references=None):
        """
        Create a video generation task
        Returns: task_id
        """
        url = f"{self.base_url}/v1/video_generation"
        
        payload = {
            "model": model,
            "prompt": prompt,
            "promptOptimizer": prompt_optimizer
        }
        
        # Add optional parameters
        if first_frame_image:
            payload["firstFrameImage"] = first_frame_image
        
        if subject_references:
            payload["subjectReference"] = subject_references
        
        try:
            response = requests.post(
                url, 
                headers=self.headers, 
                data=json.dumps(payload)
            )
            response.raise_for_status()
            
            result = response.json()
            task_id = result.get("task_id")
            
            if task_id:
                print(f"Video task created, task ID: {task_id}")
                return task_id
            else:
                raise Exception("No task ID returned")
                
        except Exception as e:
            print(f"Failed to create video task: {e}")
            return None
    
    def query_task_status(self, task_id):
        """
        Query the task status
        Returns: the task status, plus the result if complete
        """
        url = f"{self.base_url}/v1/query/video_generation?taskId={task_id}"
        
        try:
            response = requests.get(url, headers=self.headers)
            response.raise_for_status()
            
            result = response.json()
            status = result.get("status")
            
            print(f"Task status: {status}")
            
            if status == "Success":
                video_url = result.get("videoDownloadUrl")
                print(f"Video generation complete! Download URL: {video_url}")
                return {"status": status, "video_url": video_url}
            else:
                return {"status": status}
                
        except Exception as e:
            print(f"Failed to query task status: {e}")
            return {"status": "Error", "error": str(e)}
    
    def wait_for_completion(self, task_id, check_interval=10, timeout=300):
        """
        Wait for a task to finish
        """
        start_time = time.time()
        
        while time.time() - start_time < timeout:
            result = self.query_task_status(task_id)
            
            if result["status"] == "Success":
                return result
            elif result["status"] in ["Failed", "Error"]:
                print("Task failed")
                return result
            
            print(f"Waiting... next check in {check_interval} seconds")
            time.sleep(check_interval)
        
        print("Task timed out")
        return {"status": "Timeout"}
    
    def download_video(self, video_url, save_path):
        """
        Download the generated video
        """
        try:
            response = requests.get(video_url, stream=True)
            response.raise_for_status()
            
            with open(save_path, 'wb') as f:
                for chunk in response.iter_content(chunk_size=8192):
                    f.write(chunk)
            
            print(f"Video saved to: {save_path}")
            return True
            
        except Exception as e:
            print(f"Failed to download video: {e}")
            return False

# Usage example
def generate_video_example():
    """End-to-end video generation example"""
    generator = VideoGenerator(os.getenv("LANYUN_API_KEY"))
    
    # Step 1: create the generation task
    # (the Chinese prompt describes a butterfly among flowers;
    #  [推进] = push in, [左移] = truck left — the tokens are kept as the API expects them)
    task_id = generator.create_video_generation_task(
        model="I2V-01-Director",
        prompt="一只蝴蝶在花丛中飞舞,阳光明媚,[推进]然后[左移]展示更多花朵",
        prompt_optimizer=True
    )
    
    if not task_id:
        return
    
    # Step 2: wait for the task to complete
    result = generator.wait_for_completion(task_id)
    
    if result["status"] == "Success":
        # Step 3: download the video
        generator.download_video(
            result["video_url"],
            "generated_video.mp4"
        )

if __name__ == "__main__":
    generate_video_example()
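
The create → poll → download pattern used by `VideoGenerator` generalizes to any asynchronous task API. A generic polling helper might look like this (a sketch; the terminal status names mirror the ones the class above checks for):

```python
import time

def poll_until_done(check_status, interval=10.0, timeout=300.0,
                    terminal=("Success", "Failed", "Error")):
    """Call check_status() repeatedly until it returns a terminal status or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = check_status()
        if status in terminal:
            return status
        time.sleep(interval)
    return "Timeout"
```

`VideoGenerator.wait_for_completion` is essentially this loop specialized to `query_task_status`; factoring it out lets the same loop serve future asynchronous endpoints.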

4.3 Camera-Movement Control

The I2V-01-Director model supports advanced camera-movement control:

def advanced_director_controls():
    """
    Advanced camera-movement examples
    """
    generator = VideoGenerator(os.getenv("LANYUN_API_KEY"))
    
    # Example prompts with different camera instructions.
    # The Director models are documented with Chinese movement tokens in
    # square brackets; the tokens are kept verbatim (glosses in comments).
    camera_instructions = [
        # Single movement: [推进] = push in
        "一个美丽的城市天际线,[推进]展示细节",
        
        # Combined movements (applied simultaneously): [左摇, 上升] = pan left, rise
        "森林中的小路,[左摇, 上升]展示全景",
        
        # Sequential movements: [推进] push in, then [拉远] pull out
        "开始特写一朵花,[推进]展示花瓣细节,然后[拉远]显示整片花海",
        
        # Complex scene: [固定] static, [右移] truck right, [上升] rise
        "海滩日落场景,[固定]5秒,然后[右移]展示海岸线,最后[上升]显示全景"
    ]
    
    for i, instruction in enumerate(camera_instructions):
        print(f"\nGenerating example {i+1}: {instruction}")
        
        task_id = generator.create_video_generation_task(
            model="I2V-01-Director",
            prompt=instruction,
            prompt_optimizer=True
        )
        
        if task_id:
            # In a real application, save the task_id for later status checks
            print(f"Task created: {task_id}")

# Supported camera-movement tokens (English glosses as keys, API tokens as values)
CAMERA_MOVEMENTS = {
    "truck left/right": ["左移", "右移"],
    "pan left/right": ["左摇", "右摇"],
    "push in / pull out": ["推进", "拉远"],
    "rise/descend": ["上升", "下降"],
    "tilt up/down": ["上摇", "下摇"],
    "zoom in/out": ["变焦推近", "变焦拉远"],
    "special effects": ["晃动", "跟随", "固定"]
}

def validate_camera_instructions(prompt):
    """
    Validate the camera instructions in a prompt
    """
    import re
    
    # Find all bracketed camera instructions
    instructions = re.findall(r'\[(.*?)\]', prompt)
    
    if not instructions:
        return True, "No camera instructions"
    
    valid_instructions = []
    for movement in CAMERA_MOVEMENTS.values():
        valid_instructions.extend(movement)
    
    invalid_instructions = []
    for instruction in instructions:
        # Handle combined instructions such as [左摇, 上升]
        moves = [move.strip() for move in instruction.split(',')]
        for move in moves:
            if move not in valid_instructions:
                invalid_instructions.append(move)
    
    if invalid_instructions:
        return False, f"Invalid camera instructions: {invalid_instructions}"
    
    # Check the number of instructions
    total_instructions = sum(len(inst.split(',')) for inst in instructions)
    if total_instructions > 6:
        return False, "Too many camera instructions; at most 6 are recommended"
    
    return True, "Instructions are valid"

4.4 Image Processing and Preparation

Prepare input images for video generation:

class ImagePreprocessor:
    """Image preprocessing utilities"""
    
    @staticmethod
    def encode_image_to_base64(image_path):
        """
        Encode an image as a base64 string
        """
        import base64
        
        with open(image_path, "rb") as image_file:
            encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
        
        # Prepend the data-URI prefix
        return f"data:image/jpeg;base64,{encoded_string}"
    
    @staticmethod
    def resize_image(image_path, output_path, max_size=1024):
        """
        Resize an image to fit API requirements
        """
        from PIL import Image
        
        with Image.open(image_path) as img:
            # Compute the new size, preserving the aspect ratio
            width, height = img.size
            if max(width, height) > max_size:
                if width > height:
                    new_width = max_size
                    new_height = int(height * (max_size / width))
                else:
                    new_height = max_size
                    new_width = int(width * (max_size / height))
                
                img = img.resize((new_width, new_height), Image.LANCZOS)
                img.save(output_path, "JPEG", quality=95)
                return output_path
        
        # If no resizing is needed, return the original path
        return image_path
    
    @staticmethod
    def extract_frames_from_video(video_path, output_dir, frame_count=10):
        """
        Extract frames from a video for use as references
        """
        import cv2
        import os
        
        os.makedirs(output_dir, exist_ok=True)
        
        cap = cv2.VideoCapture(video_path)
        total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        
        frames = []
        for i in range(frame_count):
            # Sample frames evenly across the video
            frame_idx = int((i / frame_count) * total_frames)
            cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
            ret, frame = cap.read()
            
            if ret:
                frame_path = os.path.join(output_dir, f"frame_{i:03d}.jpg")
                cv2.imwrite(frame_path, frame)
                frames.append(frame_path)
        
        cap.release()
        return frames

# Usage example
def prepare_images_for_video():
    """Prepare the images needed for video generation"""
    preprocessor = ImagePreprocessor()
    
    # 1. Resize the input image
    input_image = "input_large.jpg"
    resized_image = preprocessor.resize_image(input_image, "input_resized.jpg")
    
    # 2. Encode it as base64
    encoded_image = preprocessor.encode_image_to_base64(resized_image)
    
    # 3. Extract frames from a reference video
    reference_frames = preprocessor.extract_frames_from_video(
        "reference.mp4", 
        "extracted_frames",
        frame_count=5
    )
    
    # Encode the reference frames
    encoded_references = []
    for frame in reference_frames:
        encoded_references.append(preprocessor.encode_image_to_base64(frame))
    
    return encoded_image, encoded_references

# Generate a video from the preprocessed images
def generate_video_with_processed_images():
    """Generate a video using the preprocessed images"""
    first_frame, subject_references = prepare_images_for_video()
    
    generator = VideoGenerator(os.getenv("LANYUN_API_KEY"))
    
    task_id = generator.create_video_generation_task(
        model="I2V-01-Director",
        prompt="Create an animation based on this image, using the provided sample frames as style references",
        first_frame_image=first_frame,
        subject_references=subject_references,
        prompt_optimizer=True
    )
    
    if task_id:
        print(f"Video generation task created: {task_id}")
        return task_id

5. Building End-to-End AI Workflows

5.1 Workflow Design Patterns

Designing an efficient AI workflow involves several key patterns:

class AIWorkflowEngine:
    """AI workflow engine (builds on setup_maas_client, advanced_text_generation,
    VideoGenerator, and ResourceManager defined earlier)"""
    
    def __init__(self, api_key):
        self.api_key = api_key
        self.text_client = setup_maas_client()
        self.video_generator = VideoGenerator(api_key)
        self.resource_manager = ResourceManager(self.text_client)
    
    def process_content_workflow(self, input_text, workflow_type="standard"):
        """
        Run a content-creation workflow
        """
        workflows = {
            "standard": self.standard_content_workflow,
            "video": self.video_content_workflow
            # additional workflows (e.g. social media) can be registered here
        }
        
        workflow_func = workflows.get(workflow_type, self.standard_content_workflow)
        return workflow_func(input_text)
    
    def standard_content_workflow(self, input_text):
        """
        Standard content workflow: draft → polish → format
        """
        results = {}
        
        # Step 1: generate a draft
        draft_result = advanced_text_generation(
            self.text_client,
            f"Write content on the following topic: {input_text}",
            max_tokens=800,
            temperature=0.8
        )
        
        if not draft_result["success"]:
            return draft_result
        
        results["draft"] = draft_result["content"]
        self.resource_manager.update_usage(draft_result["usage"])
        
        # Step 2: polish the content
        optimization_result = advanced_text_generation(
            self.text_client,
            f"Polish the following content to make it more vivid and professional:\n\n{draft_result['content']}",
            max_tokens=1000,
            temperature=0.5
        )
        
        if not optimization_result["success"]:
            return optimization_result
        
        results["optimized"] = optimization_result["content"]
        self.resource_manager.update_usage(optimization_result["usage"])
        
        # Step 3: generate several title options
        titles_result = advanced_text_generation(
            self.text_client,
            f"Write 5 engaging titles for the following content:\n\n{optimization_result['content']}",
            max_tokens=200,
            temperature=0.9
        )
        
        if titles_result["success"]:
            results["titles"] = titles_result["content"].split('\n')
            self.resource_manager.update_usage(titles_result["usage"])
        
        return results
    
    def video_content_workflow(self, input_text):
        """
        Video content workflow: script → storyboard → video generation
        """
        results = {}
        
        # Step 1: generate a video script
        script_result = advanced_text_generation(
            self.text_client,
            f"Create a 30-second video script about '{input_text}', including scene descriptions and camera instructions",
            max_tokens=600,
            temperature=0.7
        )
        
        if not script_result["success"]:
            return script_result
        
        results["script"] = script_result["content"]
        self.resource_manager.update_usage(script_result["usage"])
        
        # Step 2: extract camera instructions from the script
        # NLP could be added here to parse the instructions in the script
        camera_instructions = self.extract_camera_instructions(script_result["content"])
        results["camera_instructions"] = camera_instructions
        
        # Step 3: generate the video (an actual first-frame image is required)
        # In practice, the first frame would be generated or supplied beforehand
        print("Video script generated; a first-frame image is needed to generate the video")
        
        return results
    
    def extract_camera_instructions(self, script):
        """
        Extract camera instructions from a script
        """
        instruction_result = advanced_text_generation(
            self.text_client,
            f"Extract all camera instructions from the following video script and return them as JSON:\n\n{script}",
            max_tokens=200,
            temperature=0.1
        )
        
        if instruction_result["success"]:
            try:
                # Try to parse the JSON response
                instructions = json.loads(instruction_result["content"])
                return instructions
            except json.JSONDecodeError:
                # Fall back to the raw text if it is not valid JSON
                return instruction_result["content"]
        
        return []

# Usage example
def run_complete_workflow():
    """Run the full workflow example"""
    workflow_engine = AIWorkflowEngine(os.getenv("LANYUN_API_KEY"))
    
    # Run the standard content workflow
    results = workflow_engine.process_content_workflow(
        "Applications of AI in medical diagnosis",
        workflow_type="standard"
    )
    
    print("=== Content Generation Results ===")
    print("Draft length:", len(results.get("draft", "")))
    print("Polished length:", len(results.get("optimized", "")))
    print("Generated titles:", results.get("titles", []))
    
    # Show resource usage
    workflow_engine.resource_manager.print_usage_report()

if __name__ == "__main__":
    run_complete_workflow()

5.2 Error Handling and Retry Mechanisms

A robust AI workflow needs thorough error handling:

class RobustAIWorkflow:
    """具有错误处理和重试机制的AI工作流"""
    
    def __init__(self, api_key, max_retries=3, retry_delay=2):
        self.api_key = api_key
        self.max_retries = max_retries
        self.retry_delay = retry_delay
        self.client = setup_maas_client()
    
    def execute_with_retry(self, func, *args, **kwargs):
        """
        带重试机制的API调用执行
        """
        for attempt in range(self.max_retries):
            try:
                result = func(*args, **kwargs)
                
                # 检查API返回的错误
                if hasattr(result, 'get') and result.get('success') is False:
                    error = result.get('error', '未知错误')
                    
                    # 如果是速率限制错误,等待更长时间
                    if 'rate limit' in error.lower():
                        wait_time = self.retry_delay * (attempt + 1) * 2
                        print(f"速率限制,等待 {wait_time} 秒后重试...")
                        time.sleep(wait_time)
                        continue
                    
                    # 如果是认证错误,不要重试
                    if 'auth' in error.lower() or 'key' in error.lower():
                        print("认证错误,停止重试")
                        break
                
                return result
                
            except requests.exceptions.ConnectionError as e:
                print(f"连接错误 (尝试 {attempt + 1}/{self.max_retries}): {e}")
                if attempt < self.max_retries - 1:
                    time.sleep(self.retry_delay * (attempt + 1))
                continue
                
            except requests.exceptions.Timeout as e:
                print(f"超时错误 (尝试 {attempt + 1}/{self.max_retries}): {e}")
                if attempt < self.max_retries - 1:
                    time.sleep(self.retry_delay * (attempt + 1))
                continue
                
            except Exception as e:
                print(f"未知错误 (尝试 {attempt + 1}/{self.max_retries}): {e}")
                if attempt < self.max_retries - 1:
                    time.sleep(self.retry_delay)
                continue
        
        # 所有重试都失败
        return {"success": False, "error": "所有重试尝试均失败"}
    
    def robust_text_generation(self, prompt, **kwargs):
        """带重试的文本生成"""
        def _call_api():
            return advanced_text_generation(self.client, prompt, **kwargs)
        
        return self.execute_with_retry(_call_api)
    
    def robust_video_generation(self, model, prompt, **kwargs):
        """带重试的视频生成"""
        generator = VideoGenerator(self.api_key)
        
        def _create_task():
            return generator.create_video_generation_task(model, prompt, **kwargs)
        
        task_id = self.execute_with_retry(_create_task)
        
        # 失败时 execute_with_retry 返回形如 {"success": False} 的字典,这里需要排除
        if task_id and not (isinstance(task_id, dict) and task_id.get("success") is False):
            # 等待任务完成,状态查询同样带重试机制
            def _check_status():
                return generator.query_task_status(task_id)
            
            result = self.execute_with_retry(_check_status)
            return result
        
        return {"success": False, "error": "无法创建视频生成任务"}

# 使用示例
def demonstrate_robust_workflow():
    """演示健壮的工作流执行"""
    robust_workflow = RobustAIWorkflow(
        os.getenv("LANYUN_API_KEY"),
        max_retries=5,
        retry_delay=3
    )
    
    # 执行带重试的文本生成
    result = robust_workflow.robust_text_generation(
        "写一篇关于机器学习中强化学习的科普文章",
        max_tokens=1000,
        temperature=0.7
    )
    
    if result["success"]:
        print("生成成功!")
        print("内容预览:", result["content"][:200] + "...")
    else:
        print("生成失败:", result["error"])

if __name__ == "__main__":
    demonstrate_robust_workflow()
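上面的重试逻辑也可以抽象成一个通用装饰器,方便套用到任意API调用上。下面是一个带指数退避的最小示意实现(with_retry 这个名字和参数都是本文为演示而取的,并非平台SDK自带):

```python
import time
import functools

def with_retry(max_retries=3, base_delay=1, backoff=2):
    """带指数退避的重试装饰器(示意实现,参数名为本文假设)"""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_error = None
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    last_error = e
                    if attempt < max_retries - 1:
                        # 每次失败后等待时间按 backoff 的幂次递增
                        time.sleep(base_delay * (backoff ** attempt))
            raise last_error
        return wrapper
    return decorator
```

使用时直接装饰需要重试的函数即可,例如 `@with_retry(max_retries=5)`;相比类方式,装饰器更适合零散的单个API调用。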

5.3 性能监控与优化

监控工作流性能并优化资源使用:

import time
import datetime

class PerformanceMonitor:
    """性能监控器"""
    
    def __init__(self):
        self.metrics = {
            'total_requests': 0,
            'successful_requests': 0,
            'failed_requests': 0,
            'total_tokens': 0,
            'total_time': 0,
            'requests_by_model': {},
            'avg_response_time': 0
        }
        self.start_time = time.time()
    
    def record_request(self, model, success, tokens_used, response_time):
        """记录请求指标"""
        self.metrics['total_requests'] += 1
        
        if success:
            self.metrics['successful_requests'] += 1
        else:
            self.metrics['failed_requests'] += 1
        
        self.metrics['total_tokens'] += tokens_used
        self.metrics['total_time'] += response_time
        
        # 按模型统计
        if model not in self.metrics['requests_by_model']:
            self.metrics['requests_by_model'][model] = {
                'count': 0,
                'tokens': 0
            }
        
        self.metrics['requests_by_model'][model]['count'] += 1
        self.metrics['requests_by_model'][model]['tokens'] += tokens_used
        
        # 更新平均响应时间(total_time 累计了所有请求的耗时,因此按总请求数计算)
        if self.metrics['total_requests'] > 0:
            self.metrics['avg_response_time'] = (
                self.metrics['total_time'] / self.metrics['total_requests']
            )
    
    def get_performance_report(self):
        """生成性能报告"""
        current_time = time.time()
        uptime = current_time - self.start_time
        
        report = {
            'uptime_seconds': uptime,
            'uptime_human': str(datetime.timedelta(seconds=int(uptime))),
            'total_requests': self.metrics['total_requests'],
            'success_rate': (
                self.metrics['successful_requests'] / self.metrics['total_requests'] * 100
                if self.metrics['total_requests'] > 0 else 0
            ),
            'tokens_per_minute': (
                self.metrics['total_tokens'] / (uptime / 60)
                if uptime > 0 else 0
            ),
            'avg_response_time': self.metrics['avg_response_time'],
            'models_usage': self.metrics['requests_by_model']
        }
        
        return report
    
    def print_report(self):
        """打印性能报告"""
        report = self.get_performance_report()
        
        print("\n" + "="*50)
        print("AI工作流性能报告")
        print("="*50)
        print(f"运行时间: {report['uptime_human']}")
        print(f"总请求数: {report['total_requests']}")
        print(f"成功率: {report['success_rate']:.2f}%")
        print(f"Token使用速率: {report['tokens_per_minute']:.0f} tokens/分钟")
        print(f"平均响应时间: {report['avg_response_time']:.2f}秒")
        
        print("\n模型使用情况:")
        for model, stats in report['models_usage'].items():
            print(f"  {model}: {stats['count']} 次请求, {stats['tokens']} tokens")

# 集成性能监控的工作流
class MonitoredAIWorkflow(AIWorkflowEngine):
    """带性能监控的AI工作流"""
    
    def __init__(self, api_key):
        super().__init__(api_key)
        self.monitor = PerformanceMonitor()
    
    def monitored_text_generation(self, prompt, **kwargs):
        """带监控的文本生成"""
        start_time = time.time()
        
        result = advanced_text_generation(self.text_client, prompt, **kwargs)
        
        response_time = time.time() - start_time
        tokens_used = result.get('usage', {}).get('total_tokens', 0) if result['success'] else 0
        
        self.monitor.record_request(
            model=kwargs.get('model', 'default'),
            success=result['success'],
            tokens_used=tokens_used,
            response_time=response_time
        )
        
        return result
    
    def get_performance(self):
        """获取性能数据"""
        return self.monitor.get_performance_report()

# 使用示例
def run_monitored_workflow():
    """运行带监控的工作流"""
    workflow = MonitoredAIWorkflow(os.getenv("LANYUN_API_KEY"))
    
    # 执行多个任务
    topics = [
        "机器学习基础",
        "深度学习应用",
        "自然语言处理",
        "计算机视觉",
        "强化学习"
    ]
    
    for topic in topics:
        print(f"\n处理主题: {topic}")
        result = workflow.monitored_text_generation(
            f"写一篇关于{topic}的简短介绍",
            max_tokens=300,
            temperature=0.7
        )
        
        if result['success']:
            print(f"生成成功,使用token: {result['usage']['total_tokens']}")
        else:
            print(f"生成失败: {result['error']}")
        
        # 稍微延迟,避免速率限制
        time.sleep(1)
    
    # 打印性能报告
    workflow.monitor.print_report()

if __name__ == "__main__":
    run_monitored_workflow()
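统计出 token 消耗后,还可以粗略折算费用。下面是一个按假设单价估算成本的小工具(0.002 元/千 token 仅为演示占位,实际价格以平台计费页面为准):

```python
def estimate_cost(total_tokens, price_per_1k_tokens=0.002):
    """按每千 token 单价估算调用成本(单价为演示用的假设值)"""
    return total_tokens / 1000 * price_per_1k_tokens

# 假设本次工作流累计消耗 150000 tokens
total_cost = estimate_cost(150_000)
print(f"预计成本: {total_cost:.2f} 元")
```

把它接到 PerformanceMonitor 的 `total_tokens` 指标上,就能在性能报告里同时输出成本估算。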

六、部署与生产环境最佳实践

6.1 环境配置管理

使用配置文件管理不同环境的设置:

# config.py
import os
from dataclasses import dataclass
from typing import Optional

@dataclass
class EnvironmentConfig:
    """环境配置类"""
    name: str
    api_key: Optional[str] = None
    base_url: str = "https://maas-api.lanyun.net"
    default_model: str = "/maas/deepseek-ai/DeepSeek-V3-0324"
    log_level: str = "INFO"
    timeout: int = 30
    max_retries: int = 3

class ConfigManager:
    """配置管理器"""
    
    def __init__(self):
        self.environments = {
            'development': EnvironmentConfig(
                name='development',
                api_key=os.getenv('LANYUN_DEV_API_KEY'),
                log_level='DEBUG'
            ),
            'staging': EnvironmentConfig(
                name='staging',
                api_key=os.getenv('LANYUN_STAGING_API_KEY'),
                log_level='INFO'
            ),
            'production': EnvironmentConfig(
                name='production',
                api_key=os.getenv('LANYUN_PROD_API_KEY'),
                log_level='WARNING',
                max_retries=5,
                timeout=60
            )
        }
    
    def get_config(self, env_name=None):
        """获取环境配置"""
        if env_name is None:
            env_name = os.getenv('APP_ENV', 'development')
        
        config = self.environments.get(env_name)
        if not config:
            raise ValueError(f"未知环境: {env_name}")
        
        # 确保API密钥已设置
        if not config.api_key:
            raise ValueError(f"{env_name}环境的API密钥未设置")
        
        return config

# 使用示例
def setup_environment():
    """设置运行环境"""
    config_manager = ConfigManager()
    
    try:
        config = config_manager.get_config()
        print(f"使用环境: {config.name}")
        print(f"日志级别: {config.log_level}")
        
        # 根据配置创建客户端
        client = setup_maas_client()
        return client, config
        
    except ValueError as e:
        print(f"环境配置错误: {e}")
        return None, None

if __name__ == "__main__":
    client, config = setup_environment()
    if client:
        print("环境设置成功!")

6.2 安全最佳实践

确保API密钥和敏感数据的安全:

# security.py
import os
import logging
from cryptography.fernet import Fernet

class SecurityManager:
    """安全管理器"""
    
    def __init__(self, key_file='secret.key'):
        self.key_file = key_file
        self.cipher = self._setup_cipher()
    
    def _setup_cipher(self):
        """设置加密器"""
        if os.path.exists(self.key_file):
            with open(self.key_file, 'rb') as f:
                key = f.read()
        else:
            key = Fernet.generate_key()
            with open(self.key_file, 'wb') as f:
                f.write(key)
            # 设置文件权限
            os.chmod(self.key_file, 0o600)
        
        return Fernet(key)
    
    def encrypt_data(self, data):
        """加密数据"""
        if isinstance(data, str):
            data = data.encode()
        return self.cipher.encrypt(data).decode()
    
    def decrypt_data(self, encrypted_data):
        """解密数据"""
        if isinstance(encrypted_data, str):
            encrypted_data = encrypted_data.encode()
        return self.cipher.decrypt(encrypted_data).decode()
    
    def secure_api_key_storage(self, api_key, env_file='.env.secure'):
        """安全存储API密钥"""
        encrypted_key = self.encrypt_data(api_key)
        
        with open(env_file, 'w') as f:
            f.write(f"ENCRYPTED_API_KEY={encrypted_key}\n")
        
        # 设置文件权限
        os.chmod(env_file, 0o600)
        print(f"API密钥已安全存储到 {env_file}")
    
    def load_api_key(self, env_file='.env.secure'):
        """加载加密的API密钥"""
        from dotenv import load_dotenv
        load_dotenv(env_file)
        
        encrypted_key = os.getenv('ENCRYPTED_API_KEY')
        if not encrypted_key:
            raise ValueError("未找到加密的API密钥")
        
        return self.decrypt_data(encrypted_key)

# 使用示例
def demonstrate_security():
    """演示安全功能"""
    security = SecurityManager()
    
    # 加密并存储API密钥
    api_key = "sk-你的真实API密钥"
    security.secure_api_key_storage(api_key)
    
    # 加载并使用API密钥
    try:
        loaded_key = security.load_api_key()
        print("API密钥加载成功")
        
        # 使用加载的密钥创建客户端
        client = setup_maas_client_with_key(loaded_key)
        print("客户端创建成功")
        
    except Exception as e:
        print(f"安全错误: {e}")

def setup_maas_client_with_key(api_key):
    """使用API密钥创建客户端"""
    import openai
    return openai.OpenAI(
        api_key=api_key,
        base_url="https://maas-api.lanyun.net"
    )

if __name__ == "__main__":
    demonstrate_security()

6.3 日志与监控

实现完整的日志和监控系统:

# logging_config.py
import logging
import logging.handlers
import json
from datetime import datetime

class JSONFormatter(logging.Formatter):
    """JSON日志格式化器"""
    
    def format(self, record):
        log_data = {
            'timestamp': datetime.utcnow().isoformat() + 'Z',
            'level': record.levelname,
            'message': record.getMessage(),
            'logger': record.name,
            'module': record.module,
            'function': record.funcName,
            'line': record.lineno
        }
        
        if hasattr(record, 'extra_data'):
            log_data.update(record.extra_data)
        
        if record.exc_info:
            log_data['exception'] = self.formatException(record.exc_info)
        
        return json.dumps(log_data)

def setup_logging(env='development'):
    """设置日志系统"""
    logger = logging.getLogger()
    
    if env == 'production':
        logger.setLevel(logging.INFO)
        
        # 文件处理器(JSON格式)
        file_handler = logging.handlers.RotatingFileHandler(
            'app.log',
            maxBytes=10*1024*1024,  # 10MB
            backupCount=5
        )
        file_handler.setFormatter(JSONFormatter())
        logger.addHandler(file_handler)
        
        # 控制台处理器(简单格式)
        console_handler = logging.StreamHandler()
        console_handler.setFormatter(logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        ))
        logger.addHandler(console_handler)
        
    else:  # 开发环境
        logger.setLevel(logging.DEBUG)
        console_handler = logging.StreamHandler()
        console_handler.setFormatter(logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        ))
        logger.addHandler(console_handler)
    
    return logger

# 增强的日志记录器
class AILogger:
    """AI工作流专用日志记录器"""
    
    def __init__(self, name='ai_workflow'):
        self.logger = logging.getLogger(name)
        self.metrics = {
            'start_time': datetime.now(),
            'api_calls': 0,
            'tokens_used': 0,
            'errors': 0
        }
    
    def log_api_call(self, model, prompt_length, response_length, success=True):
        """记录API调用"""
        self.metrics['api_calls'] += 1
        self.metrics['tokens_used'] += (prompt_length + response_length)
        
        if not success:
            self.metrics['errors'] += 1
        
        self.logger.info(
            "API调用记录",
            extra={
                'extra_data': {
                    'model': model,
                    'prompt_tokens': prompt_length,
                    'completion_tokens': response_length,
                    'total_tokens': prompt_length + response_length,
                    'success': success,
                    'cumulative_tokens': self.metrics['tokens_used']
                }
            }
        )
    
    def log_workflow_start(self, workflow_name, input_data):
        """记录工作流开始"""
        self.logger.info(
            f"工作流开始: {workflow_name}",
            extra={
                'extra_data': {
                    'workflow': workflow_name,
                    'input': str(input_data)[:200]  # 限制输入长度
                }
            }
        )
    
    def log_workflow_end(self, workflow_name, result):
        """记录工作流结束"""
        runtime = (datetime.now() - self.metrics['start_time']).total_seconds()
        
        self.logger.info(
            f"工作流完成: {workflow_name}",
            extra={
                'extra_data': {
                    'workflow': workflow_name,
                    'runtime_seconds': runtime,
                    'total_api_calls': self.metrics['api_calls'],
                    'total_tokens_used': self.metrics['tokens_used'],
                    'total_errors': self.metrics['errors'],
                    'success': result.get('success', False)
                }
            }
        )
    
    def get_metrics(self):
        """获取当前指标"""
        return self.metrics.copy()

# 使用示例
def demonstrate_logging():
    """演示日志功能"""
    logger = AILogger()
    
    # 模拟工作流执行
    logger.log_workflow_start("内容生成", "人工智能主题")
    
    # 模拟API调用
    logger.log_api_call(
        model="/maas/deepseek-ai/DeepSeek-V3-0324",
        prompt_length=100,
        response_length=500,
        success=True
    )
    
    logger.log_api_call(
        model="/maas/deepseek-ai/DeepSeek-V3-0324",
        prompt_length=50,
        response_length=300,
        success=True
    )
    
    # 工作流完成
    logger.log_workflow_end("内容生成", {"success": True, "content_length": 800})
    
    print("指标统计:", logger.get_metrics())

if __name__ == "__main__":
    # 先设置日志系统
    setup_logging('development')
    demonstrate_logging()

七、实战案例:构建智能内容创作平台

7.1 项目架构设计

平台自上而下分为以下几层:

- 入口层:用户界面 → API网关 → 认证服务
- 核心服务:内容生成服务、工作流引擎、视频生成服务、批量处理服务
- 蓝耘MaaS集成:文本模型API、视觉模型API、异步任务队列
- 存储层:元数据库、文件存储、缓存层
- 监控告警:日志系统、性能监控、告警系统

7.2 核心实现代码

# app/core/content_platform.py
import os
import json
import asyncio
from typing import Dict, List, Optional
from dataclasses import dataclass
from datetime import datetime

# 复用前文章节定义的组件(假设已按模块组织,导入路径仅为示例)
# from app.core.workflow import AIWorkflowEngine, MonitoredAIWorkflow, PerformanceMonitor
# from app.core.logging_utils import AILogger

@dataclass
class ContentRequest:
    """内容请求数据类"""
    content_type: str  # article, video, social_media等
    topic: str
    length: str  # short, medium, long
    style: Optional[str] = None
    additional_instructions: Optional[str] = None

@dataclass
class GeneratedContent:
    """生成的内容数据类"""
    content: str
    metadata: Dict
    tokens_used: int
    generation_time: float

class ContentCreationPlatform:
    """智能内容创作平台"""
    
    def __init__(self, api_key: str):
        self.api_key = api_key
        # 使用带监控的工作流引擎,后续才能调用 monitored_text_generation
        self.workflow_engine = MonitoredAIWorkflow(api_key)
        self.monitor = PerformanceMonitor()
        self.logger = AILogger('content_platform')
    
    async def create_content(self, request: ContentRequest) -> GeneratedContent:
        """
        创建内容的主要方法
        """
        start_time = datetime.now()
        
        self.logger.log_workflow_start(
            f"content_creation_{request.content_type}",
            request.__dict__
        )
        
        try:
            # 根据内容类型选择工作流
            if request.content_type == "article":
                result = await self._create_article(request)
            elif request.content_type == "video":
                result = await self._create_video_content(request)
            elif request.content_type == "social_media":
                result = await self._create_social_media_content(request)
            else:
                raise ValueError(f"不支持的内容类型: {request.content_type}")
            
            # 计算生成时间
            generation_time = (datetime.now() - start_time).total_seconds()
            
            content = GeneratedContent(
                content=result["content"],
                metadata={
                    "content_type": request.content_type,
                    "topic": request.topic,
                    "style": request.style,
                    "tokens_used": result.get("tokens_used", 0)
                },
                tokens_used=result.get("tokens_used", 0),
                generation_time=generation_time
            )
            
            self.logger.log_workflow_end(
                f"content_creation_{request.content_type}",
                {"success": True, "content_length": len(content.content)}
            )
            
            return content
            
        except Exception as e:
            self.logger.logger.error(
                f"内容创建失败: {str(e)}",
                exc_info=True
            )
            self.logger.log_workflow_end(
                f"content_creation_{request.content_type}",
                {"success": False, "error": str(e)}
            )
            raise
    
    async def _create_article(self, request: ContentRequest) -> Dict:
        """创建文章内容"""
        # 构建提示
        prompt = self._build_article_prompt(request)
        
        # 确定生成长度
        max_tokens = self._get_max_tokens(request.length)
        
        # 调用API
        result = self.workflow_engine.monitored_text_generation(
            prompt,
            max_tokens=max_tokens,
            temperature=0.7
        )
        
        if not result["success"]:
            raise Exception(f"文章生成失败: {result['error']}")
        
        return {
            "content": result["content"],
            "tokens_used": result["usage"]["total_tokens"]
        }
    
    async def _create_video_content(self, request: ContentRequest) -> Dict:
        """创建视频内容"""
        # 首先生成视频脚本
        script_prompt = self._build_video_script_prompt(request)
        script_result = self.workflow_engine.monitored_text_generation(
            script_prompt,
            max_tokens=500,
            temperature=0.8
        )
        
        if not script_result["success"]:
            raise Exception(f"视频脚本生成失败: {script_result['error']}")
        
        # 这里可以添加视频生成逻辑
        # 在实际应用中,可能需要用户提供第一帧图像或使用默认图像
        
        return {
            "content": script_result["content"],
            "tokens_used": script_result["usage"]["total_tokens"],
            "metadata": {
                "type": "video_script",
                "requires_video_generation": True
            }
        }
    
    async def _create_social_media_content(self, request: ContentRequest) -> Dict:
        """创建社交媒体内容(与文章生成同构,篇幅更短)"""
        prompt = f"为{request.topic}写一条社交媒体文案"
        if request.style:
            prompt += f",风格: {request.style}"
        
        result = self.workflow_engine.monitored_text_generation(
            prompt,
            max_tokens=200,
            temperature=0.9
        )
        
        if not result["success"]:
            raise Exception(f"社交媒体内容生成失败: {result['error']}")
        
        return {
            "content": result["content"],
            "tokens_used": result["usage"]["total_tokens"]
        }
    
    def _build_article_prompt(self, request: ContentRequest) -> str:
        """构建文章生成提示"""
        base_prompt = f"写一篇关于{request.topic}的"
        
        # 根据长度调整
        if request.length == "short":
            base_prompt += "简短文章,300-500字"
        elif request.length == "medium":
            base_prompt += "中等长度文章,800-1000字"
        else:  # long
            base_prompt += "详细文章,1500-2000字"
        
        # 添加风格要求
        if request.style:
            base_prompt += f",风格要求: {request.style}"
        
        # 添加额外指令
        if request.additional_instructions:
            base_prompt += f"。其他要求: {request.additional_instructions}"
        
        return base_prompt
    
    def _build_video_script_prompt(self, request: ContentRequest) -> str:
        """构建视频脚本提示"""
        prompt = f"创建关于{request.topic}的视频脚本,包含场景描述和运镜指令"
        
        if request.length == "short":
            prompt += "(30秒时长)"
        elif request.length == "medium":
            prompt += "(60秒时长)"
        else:
            prompt += "(90秒时长)"
        
        if request.style:
            prompt += f",风格: {request.style}"
        
        if request.additional_instructions:
            prompt += f"。额外要求: {request.additional_instructions}"
        
        return prompt
    
    def _get_max_tokens(self, length: str) -> int:
        """根据长度获取最大token数"""
        token_mapping = {
            "short": 600,
            "medium": 1200,
            "long": 2500
        }
        return token_mapping.get(length, 1000)

# 使用示例
async def main():
    """主运行示例"""
    platform = ContentCreationPlatform(os.getenv("LANYUN_API_KEY"))
    
    # 创建文章请求
    article_request = ContentRequest(
        content_type="article",
        topic="人工智能在气候变化解决方案中的应用",
        length="medium",
        style="科普风格,面向普通读者",
        additional_instructions="包含实际案例和数据支持"
    )
    
    try:
        content = await platform.create_content(article_request)
        print("内容生成成功!")
        print(f"使用Token: {content.tokens_used}")
        print(f"生成时间: {content.generation_time:.2f}秒")
        print("\n生成的内容:")
        print(content.content[:500] + "..." if len(content.content) > 500 else content.content)
        
    except Exception as e:
        print(f"内容生成失败: {e}")

if __name__ == "__main__":
    asyncio.run(main())

7.3 前端集成示例

提供简单的Web界面集成示例:

# app/web/api.py
import os
from datetime import datetime
from typing import List, Optional

from fastapi import FastAPI, HTTPException, Depends
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
import uvicorn

from app.core.content_platform import ContentCreationPlatform, ContentRequest

app = FastAPI(title="智能内容创作平台API")

# CORS中间件(生产环境建议把 allow_origins 改为具体域名;"*" 与 allow_credentials 同时使用会被浏览器拒绝)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# 依赖注入
def get_content_platform():
    """获取内容平台实例"""
    api_key = os.getenv("LANYUN_API_KEY")
    if not api_key:
        raise HTTPException(status_code=500, detail="API密钥未配置")
    return ContentCreationPlatform(api_key)

# 请求模型
class ContentCreationRequest(BaseModel):
    content_type: str
    topic: str
    length: str = "medium"
    style: Optional[str] = None
    additional_instructions: Optional[str] = None

class ContentResponse(BaseModel):
    success: bool
    content: Optional[str] = None
    tokens_used: Optional[int] = None
    generation_time: Optional[float] = None
    error: Optional[str] = None

# API路由
@app.post("/api/content/create", response_model=ContentResponse)
async def create_content(
    request: ContentCreationRequest,
    platform: ContentCreationPlatform = Depends(get_content_platform)
):
    """创建内容端点"""
    try:
        content_request = ContentRequest(**request.dict())  # Pydantic v2 可改用 request.model_dump()
        result = await platform.create_content(content_request)
        
        return ContentResponse(
            success=True,
            content=result.content,
            tokens_used=result.tokens_used,
            generation_time=result.generation_time
        )
        
    except Exception as e:
        return ContentResponse(
            success=False,
            error=str(e)
        )

@app.get("/api/health")
async def health_check():
    """健康检查端点"""
    return {"status": "healthy", "timestamp": datetime.now().isoformat()}

@app.get("/api/models")
async def get_available_models():
    """获取可用模型列表"""
    # 这里可以扩展为从平台动态获取模型列表
    return {
        "text_models": [
            "/maas/deepseek-ai/DeepSeek-V3-0324",
            "/maas/deepseek-ai/DeepSeek-R1-0528",
            "/maas/qwen/QwQ-32B"
        ],
        "video_models": [
            "I2V-01",
            "I2V-01-Director",
            "T2V-01"
        ]
    }

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
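接口部署后,可以用一个简单的客户端脚本验证(假设服务运行在本地 8000 端口,请求体字段与上文的 ContentCreationRequest 模型一一对应):

```python
import json

# 请求体字段与 7.3 中的 ContentCreationRequest 模型保持一致
payload = {
    "content_type": "article",
    "topic": "人工智能在医疗影像中的应用",
    "length": "short",
    "style": "科普风格",
}

if __name__ == "__main__":
    import requests  # 需要 pip install requests

    resp = requests.post(
        "http://localhost:8000/api/content/create",
        json=payload,
        timeout=120,  # 内容生成较慢,超时适当放宽
    )
    print(json.dumps(resp.json(), ensure_ascii=False, indent=2))
```

返回体对应 ContentResponse 模型:成功时 `success` 为 true 并携带 `content` 与 `tokens_used`,失败时 `error` 字段给出原因。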

7.4 部署脚本与配置

# docker-compose.yml
version: '3.8'

services:
  content-platform:
    build: .
    ports:
      - "8000:8000"
    environment:
      - APP_ENV=production
      - LANYUN_API_KEY=${LANYUN_API_KEY}
    volumes:
      - ./logs:/app/logs
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/ssl/certs
    depends_on:
      - content-platform
    restart: unless-stopped

  monitor:
    image: grafana/grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    restart: unless-stopped

volumes:
  grafana-data:

# Dockerfile
FROM python:3.9-slim

WORKDIR /app

# 安装系统依赖
RUN apt-get update && apt-get install -y \
    curl \
    && rm -rf /var/lib/apt/lists/*

# 复制依赖文件并安装
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# 复制应用代码
COPY . .

# 创建非root用户
RUN useradd --create-home --shell /bin/bash app
USER app

# 暴露端口
EXPOSE 8000

# 启动命令
CMD ["uvicorn", "app.web.api:app", "--host", "0.0.0.0", "--port", "8000"]
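容器启动后,可以用下面的小脚本轮询健康检查端点,确认服务就绪后再切入流量(仅为示意,端点路径沿用 7.3 中的 /api/health):

```python
import json
import time
import urllib.request

def wait_for_healthy(url="http://localhost:8000/api/health", timeout=60, interval=3):
    """在 timeout 秒内轮询健康检查端点,服务就绪返回 True,超时返回 False"""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                data = json.loads(resp.read().decode("utf-8"))
                if data.get("status") == "healthy":
                    return True
        except OSError:
            pass  # 服务尚未就绪或连接失败,继续等待
        time.sleep(interval)
    return False
```

在部署脚本里以 `wait_for_healthy()` 的返回值决定是否继续后续步骤,可以避免把请求打到尚未就绪的容器上。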

结论

通过本文的全面介绍,我们展示了如何从零开始利用蓝耘MaaS API构建完整的AI工作流。从环境配置、API基础使用到高级工作流设计和生产环境部署,我们覆盖了实际应用中的各个方面。

关键收获

  1. 低门槛接入:蓝耘MaaS平台提供兼容OpenAI的API接口,大大降低了集成难度
  2. 多样化模型:平台提供从文本生成到视频创建的多种AI能力,满足不同场景需求
  3. 灵活的工作流:通过组合不同的API调用,可以构建复杂而强大的AI应用
  4. 企业级可靠性:平台提供完善的错误处理、监控和扩展机制

未来展望

随着AI技术的不断发展,MaaS平台将继续演进,我们预期以下发展趋势:

  1. 多模态融合:文本、图像、视频和音频能力的更深层次整合
  2. 个性化定制:针对特定行业和用例的定制化模型服务
  3. 实时性能提升:更低延迟的API响应和更高效的流式处理
  4. 开发工具完善:更丰富的SDK、调试工具和可视化界面

蓝耘MaaS平台为开发者和企业提供了强大的AI能力,使得即使没有深厚机器学习背景的团队也能快速构建智能应用。随着技术的不断成熟和平台的持续发展,AI工作流将成为更多产品的标准配置。

学习资源

通过持续学习和实践,您可以充分利用MaaS平台的强大能力,构建出改变世界的AI应用。
