
Microsoft GraphRAG + Local Models + Gradio: Quick Test Notes (connecting GraphRAG to domestic models)


Installation

```shell
pip install graphrag
mkdir -p ./ragtest/input
# copy your documents into ./ragtest/input/
python -m graphrag.index --init --root ./ragtest
```

Modify settings.yaml

```yaml
encoding_model: cl100k_base
skip_workflows: []

llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: qwen2-instruct
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: http://192.168.2.2:9997/v1/
  # api_version: 2024-02-15-preview
  # organization: 
  # deployment_name: 
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  concurrent_requests: 5 # the number of parallel inflight requests that may be made

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: bge-large-zh-v1.5
    api_base: http://127.0.0.1:9997/v1/
    # api_version: 2024-02-15-preview
    # organization: 
    # deployment_name: 
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # s
```
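Note that `api_key: ${GRAPHRAG_API_KEY}` is not a literal value: GraphRAG substitutes `${VAR}` references from the environment (or a `.env` file in the root directory) when it loads settings.yaml, so the variable must be set even if a local server ignores the key. A minimal sketch of that substitution, using only the Python standard library (the `resolve_env` helper name is mine, not GraphRAG's):

```python
import os
from string import Template

def resolve_env(value: str) -> str:
    # Expand ${VAR} references against the current environment,
    # mirroring how GraphRAG fills in settings.yaml values.
    # Raises KeyError if the variable is not set.
    return Template(value).substitute(os.environ)

# Local OpenAI-compatible servers typically accept any non-empty key.
os.environ["GRAPHRAG_API_KEY"] = "sk-local-placeholder"
print(resolve_env("${GRAPHRAG_API_KEY}"))  # -> sk-local-placeholder
```

Before running the indexer, export the variable (e.g. `export GRAPHRAG_API_KEY=sk-local-placeholder`) or put it in `./ragtest/.env`.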