Using an offline local model
#114
by whereAlone
To load a local offline model instead of letting it download from Hugging Face, you can do the following:
```python
from kokoro import KPipeline, KModel
from IPython.display import display, Audio
import soundfile as sf

# Point the config and model arguments at the local model paths
model = KModel(config='/root/autodl-tmp/Kokoro-82M/config.json', model='/root/autodl-tmp/Kokoro-82M/kokoro-v1_0.pth')
pipeline = KPipeline(lang_code='z', model=model)

text = '中國人民不信邪也不怕邪,不惹事也不怕事,任何外國不要指望我們會拿自己的核心利益做交易,不要指望我們會吞下損害我國主權、安全、發展利益的苦果!'

# Point voice at the corresponding local voice file
generator = pipeline(
    text, voice='/root/autodl-tmp/Kokoro-82M/voices/zm_yunxi.pt',  # <= change voice here
    speed=0.85, split_pattern=r'\n+'
)
for i, (gs, ps, audio) in enumerate(generator):
    print(i)   # i => index
    print(gs)  # gs => graphemes/text
    print(ps)  # ps => phonemes
    display(Audio(data=audio, rate=24000, autoplay=i == 0))
    sf.write(f'{i}.wav', audio, 24000)
```
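When loading from local paths, a typo silently points at a missing file, so it can help to verify the expected files exist before constructing the model. A minimal sketch with a hypothetical helper (`check_local_model` is not part of the kokoro API; the file names match the snippet above):

```python
from pathlib import Path

def check_local_model(model_dir: str, voice: str) -> dict:
    """Return the expected local file paths, raising if any is missing."""
    base = Path(model_dir)
    paths = {
        'config': base / 'config.json',
        'model': base / 'kokoro-v1_0.pth',
        'voice': base / 'voices' / f'{voice}.pt',
    }
    missing = [str(p) for p in paths.values() if not p.exists()]
    if missing:
        raise FileNotFoundError(f'Missing model files: {missing}')
    return {k: str(v) for k, v in paths.items()}
```

The returned dict can then be passed straight into `KModel(config=..., model=...)` and `pipeline(..., voice=...)`.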
You can also move the model to the GPU, which speeds up inference roughly 10×:

```python
model = KModel(config='/root/autodl-tmp/Kokoro-82M/config.json', model='/root/autodl-tmp/Kokoro-82M/kokoro-v1_0.pth').to('cuda')
```
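Calling `.to('cuda')` on a machine without a CUDA GPU raises an error, so a small fallback can make the script portable. A sketch with a hypothetical helper (`resolve_device` is not part of kokoro; it uses the standard `torch.cuda.is_available()` check):

```python
def resolve_device(requested: str = 'auto') -> str:
    """Pick a torch device string, falling back to CPU when CUDA is absent."""
    if requested == 'cpu':
        return 'cpu'
    try:
        import torch
        if torch.cuda.is_available():
            return 'cuda'
    except ImportError:
        pass  # torch missing entirely; stay on CPU
    return 'cpu'
```

Usage: `model = KModel(config=..., model=...).to(resolve_device())`.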