Getting an error when calling openai/gpt-oss-20b from HuggingFaceEndpoint
from langchain_huggingface import HuggingFaceEndpoint
import os

llm = HuggingFaceEndpoint(
    repo_id="openai/gpt-oss-20b",
    provider="hf-inference",  # use HF's own inference API
    max_new_tokens=256,
    temperature=0.7,
    huggingfacehub_api_token="",  # token left blank here
)

response = llm.invoke("Explain quantum mechanics in simple terms.")
print(response)
When we invoke this, we get the following error: Bad Request: The endpoint is paused, ask a maintainer to restart it.
How can we use openai/gpt-oss-20b from HuggingFaceEndpoint? Can anyone help with this?
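In case it helps while you debug: openai/gpt-oss-20b is a chat-tuned model, so it may only be reachable through the chat-completions route rather than plain text generation. Below is a minimal sketch of wrapping the same endpoint in ChatHuggingFace (also from langchain_huggingface); it assumes the model is actually deployed for the provider you pick and that a HUGGINGFACEHUB_API_TOKEN environment variable is set.

import os
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

# Same endpoint definition as above, but the token is read from the environment.
llm = HuggingFaceEndpoint(
    repo_id="openai/gpt-oss-20b",
    provider="hf-inference",
    max_new_tokens=256,
    temperature=0.7,
    huggingfacehub_api_token=os.environ.get("HUGGINGFACEHUB_API_TOKEN"),
)

# ChatHuggingFace routes the request through the chat-completions API,
# which is how chat models are usually served.
chat = ChatHuggingFace(llm=llm)
message = chat.invoke("Explain quantum mechanics in simple terms.")
print(message.content)

If hf-inference itself does not serve this model, the same error will likely come back; in that case trying a different provider (or leaving provider unset so a deployed one is selected) may be worth a shot.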
Yes, I am facing the same error. I think the problem is on the provider side.
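One way to check whether it really is on the provider side is to ask the Hugging Face Inference API for the model's serving status with huggingface_hub. A rough sketch (assumes huggingface_hub is installed and HF_TOKEN is set in the environment; get_model_status only reports on the hf-inference provider):

from huggingface_hub import InferenceClient

client = InferenceClient()  # picks up HF_TOKEN from the environment if present

# Ask the HF Inference API whether the model is loaded and in what state.
status = client.get_model_status("openai/gpt-oss-20b")
print(status.state, status.loaded, status.compute_type, status.framework)

If the reported state is not something like Loaded or Loadable, the failure is on the hosting side and the client code can only wait or switch to another provider.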
public async Task<string> GenerateAsync(string prompt)
{
    try
    {
        // Resolve the text generation service registered on the Semantic Kernel instance.
        var textGen = _kernel.GetRequiredService<ITextGenerationService>();
        var response = await textGen.GetTextContentsAsync(prompt);
        return response.FirstOrDefault()?.Text ?? string.Empty;
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "Error generating text with HuggingFace model.");
        throw;
    }
}
If anyone knows, please tell me