ChatClovaX

This notebook provides a quick overview for getting started with Naver's HyperCLOVA X chat models via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations, head to the API reference.

CLOVA Studio has several chat models. You can find information about the latest models, their costs, context windows, and supported input types in the CLOVA Studio API Guide documentation.

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| ChatClovaX | langchain-community | ❌ | beta | ❌ | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |

Setup

Before using the chat model, you must complete the three steps below.

  1. Create a NAVER Cloud Platform account
  2. Apply to use CLOVA Studio
  3. Find your API keys after creating a CLOVA Studio Test App or Service App (see here)

Credentials

CLOVA Studio requires two API keys (NCP_CLOVASTUDIO_API_KEY and NCP_APIGW_API_KEY).

  • NCP_CLOVASTUDIO_API_KEY is issued per Test App or Service App
  • NCP_APIGW_API_KEY is issued per account

Both keys can be found in CLOVA Studio under App Request Status > Service App, Test App List > 'Details' for each app.
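
Once you have the keys, you can set them as environment variables so ChatClovaX picks them up automatically. A minimal sketch using getpass so the keys are not echoed (passing the keys directly to the constructor, as shown in the Instantiation section below, works as well):

import getpass
import os

# Prompt for the two keys and expose them as environment variables.
if "NCP_CLOVASTUDIO_API_KEY" not in os.environ:
    os.environ["NCP_CLOVASTUDIO_API_KEY"] = getpass.getpass("NCP CLOVA Studio API key: ")
if "NCP_APIGW_API_KEY" not in os.environ:
    os.environ["NCP_APIGW_API_KEY"] = getpass.getpass("NCP API Gateway API key: ")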

Installation

The LangChain Naver integration lives in the langchain-community package:

# install package
!pip install -qU langchain-community

Instantiation

Now we can instantiate our model object and generate chat completions:

from langchain_community.chat_models import ChatClovaX

llm = ChatClovaX(
    model="HCX-DASH-001",
    temperature=0.5,
    max_tokens=None,
    max_retries=2,
    # clovastudio_api_key="..."  # if you prefer to pass the API key in directly instead of using env vars
    # task_id="..."  # if you want to use a fine-tuned model
    # service_app=False,  # default; set to True if using a Service App
    # include_ai_filters=False,  # default; set to True if you want to detect inappropriate content
    # other params...
)
API Reference: ChatClovaX

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to Korean. Translate the user sentence.",
    ),
    ("human", "I love using NAVER AI."),
]
ai_msg = llm.invoke(messages)
ai_msg

print(ai_msg.content)
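
ChatClovaX also supports token-level streaming and native async (see the model features table above), so the same request can be streamed or awaited. A minimal sketch using the standard LangChain chat model interface (run the await line inside an async context, such as a notebook cell):

# Stream the reply token by token.
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)

# Native async invocation.
ai_msg = await llm.ainvoke(messages)
print(ai_msg.content)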

Chaining

We can chain our model with a prompt template like so:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}. Translate the user sentence.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "Korean",
        "input": "I love using NAVER AI.",
    }
)
API Reference: ChatPromptTemplate
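
Because the chain is a standard runnable, it can also handle several inputs in one call via batch. A small usage sketch (the second sentence below is just an illustrative input):

# Translate two sentences in a single batch call.
results = chain.batch(
    [
        {
            "input_language": "English",
            "output_language": "Korean",
            "input": "I love using NAVER AI.",
        },
        {
            "input_language": "English",
            "output_language": "Korean",
            "input": "I love programming.",
        },
    ]
)
for msg in results:
    print(msg.content)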

Additional functionalities

Fine-tuning

You can call fine-tuned CLOVA X models by passing in the corresponding task_id parameter. (You don't need to specify the model_name parameter when calling a fine-tuned model.)

You can check the task_id in the details of the corresponding Test App or Service App.

fine_tuned_model = ChatClovaX(
    task_id="abcd123e",
    temperature=0.5,
)

fine_tuned_model.invoke(messages)

Service App

When going live with a production-level application using CLOVA Studio, you should apply for and use a Service App. (See here.)

For a Service App, a separate NCP_CLOVASTUDIO_API_KEY is issued, and the app can only be called with that key.

Setting environment variables

import os

# Set the Service App's keys as environment variables (placeholders below).
os.environ["NCP_CLOVASTUDIO_API_KEY"] = "please input your Service App CLOVA Studio API key"
os.environ["NCP_APIGW_API_KEY"] = "please input your NCP API Gateway API key"

llm = ChatClovaX(
    service_app=True,  # False by default; set to True to use your Service App
    # clovastudio_api_key="..."  # if you prefer to pass the API key in directly instead of using env vars
    # apigw_api_key="..."  # if you prefer to pass the gateway key in directly instead of using env vars
    model="HCX-DASH-001",
    temperature=0.5,
    max_tokens=None,
    max_retries=2,
    # other params...
)
ai_msg = llm.invoke(messages)

AI Filter

AI Filter detects inappropriate output, such as profanity, from a Test App (or Service App) created in the Playground and informs the user. See here for details.

llm = ChatClovaX(
    model="HCX-DASH-001",
    temperature=0.5,
    max_tokens=None,
    max_retries=2,
    include_ai_filters=True,  # True if you want to enable the AI filter
    # other params...
)

ai_msg = llm.invoke(messages)
print(ai_msg.response_metadata["ai_filter"])

API reference

For detailed documentation of all ChatClovaX features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.naver.ChatClovaX.html

