pythainlp.chat

class pythainlp.chat.ChatBotModel[source]
model: WangChanGLM
__init__() → None[source]

Chat with an AI text-generation model.

history: list[tuple[str, str]]
reset_chat() → None[source]

Reset the chat by clearing the conversation history.
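The history attribute stores (question, answer) pairs, and reset_chat() empties it. The sketch below is a hypothetical stand-in illustrating only this bookkeeping; it does not load WangChanGLM, and the canned answer string is a placeholder for what the real chat() would generate:

```python
class HistorySketch:
    """Hypothetical stand-in mimicking ChatBotModel's history bookkeeping."""

    def __init__(self):
        # Same shape as ChatBotModel.history: list[tuple[str, str]]
        self.history = []

    def chat(self, text: str) -> str:
        # The real chat() generates this answer with WangChanGLM.
        answer = "(model-generated answer)"
        self.history.append((text, answer))
        return answer

    def reset_chat(self) -> None:
        # Mirrors reset_chat(): discard all stored (question, answer) pairs.
        self.history = []


bot = HistorySketch()
bot.chat("hello")
print(len(bot.history))  # 1
bot.reset_chat()
print(len(bot.history))  # 0
```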

load_model(model_name: str = 'wangchanglm', return_dict: bool = True, load_in_8bit: bool = False, device: str = 'cuda', torch_dtype: 'torch.dtype' | None = None, offload_folder: str = './', low_cpu_mem_usage: bool = True) → None[source]

Load the language model.

Parameters:
  • model_name (str) – Model name (currently, only wangchanglm is supported)

  • return_dict (bool) – if True, the model returns outputs as a dict instead of a plain tuple

  • load_in_8bit (bool) – load the model with 8-bit quantization to reduce memory usage

  • device (str) – device to run the model on (cpu, cuda, or another torch device)

  • torch_dtype (Optional[torch.dtype]) – data type for the model weights (e.g., torch.bfloat16)

  • offload_folder (str) – folder to offload model weights to when they do not fit in memory

  • low_cpu_mem_usage (bool) – reduce CPU memory usage while loading the model

chat(text: str) → str[source]

Ask the chatbot a question and get its answer; the (text, answer) pair is appended to history.

Parameters:

text (str) – text to ask the chatbot.

Returns:

the chatbot's answer.

Return type:

str

Example:

from pythainlp.chat import ChatBotModel
import torch

chatbot = ChatBotModel()
chatbot.load_model(device="cpu", torch_dtype=torch.bfloat16)

print(chatbot.chat("สวัสดี"))
# output: ยินดีที่ได้รู้จัก

print(chatbot.history)
# output: [('สวัสดี', 'ยินดีที่ได้รู้จัก')]