
UnhandledPromiseRejectionWarning #2

Open
ciaranodriscoll opened this issue Feb 20, 2023 · 1 comment

Comments

@ciaranodriscoll

Hi,
Thanks for the tutorial; I'd appreciate any advice. I'm trying to run your example, but after setting everything up I'm hitting an error.

I get the following error when I check the Function URL:

UnhandledPromiseRejectionWarning: Unhandled promise rejection: TypeError: Cannot read properties of undefined (reading 'toLowerCase')
at Object.exports.handler (/var/task/handlers/ZN9d08e2533c3fdaa4ff6d62aa20db2c6e.js:4:29)
at Object.exports.handler (/var/task/node_modules/runtime-handler/index.js:339:10)
at Runtime.exports.handler (/var/task/runtime-handler.js:17:17)
at Runtime.handleOnceNonStreaming (file:///var/runtime/index.mjs:1089:29)
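For context on this trace: the `toLowerCase` call that fails is on `event.Body`, which Twilio only populates for a real inbound SMS webhook. Opening the Function URL directly in a browser sends no `Body` parameter, so `event.Body` is `undefined`. A minimal sketch of a guard (the `getInboundMessage` helper is hypothetical, not part of the tutorial) that would make the failure mode explicit:

```javascript
// Hypothetical guard: event.Body is only present on real inbound SMS
// webhooks. Hitting the Function URL directly sends no Body, so calling
// event.Body.toLowerCase() throws the TypeError shown above.
function getInboundMessage(event) {
  if (typeof event.Body !== "string" || event.Body.trim() === "") {
    return null; // no SMS body — the caller can reply with a friendly message
  }
  return event.Body.toLowerCase().trim();
}
```

Inside the handler you could then bail out early when `getInboundMessage(event)` returns `null` instead of crashing with an unhandled rejection.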


duffek commented Jul 11, 2023

Ran into a similar issue. I ended up adding some error handling to see what was happening. It turned out I was getting status code 429, which means the rate limit was reached. But I had a brand-new account... the system was just busy. I had to set up a paid account; after that the error went away and it started working for me.

Here is my code for reference:

const { Configuration, OpenAIApi } = require("openai");

exports.handler = async function (context, event, callback) {
  const twiml = new Twilio.twiml.MessagingResponse();
  const inbMsg = event.Body.toLowerCase().trim();
  const configuration = new Configuration({
    apiKey: process.env.OPENAI_API_KEY,
  });
  const openai = new OpenAIApi(configuration);
  try {
    const response = await openai.createCompletion({
      model: "text-davinci-003",
      prompt: inbMsg,
      temperature: 0.7, // 0–1: how many creative risks the engine takes when generating text.
      max_tokens: 160, // Maximum completion length.
      frequency_penalty: 0.7, // 0–1: the higher the value, the harder the model tries not to repeat itself.
    });
    twiml.message(response.data.choices[0].text);
    callback(null, twiml);
  } catch (error) {
    twiml.message(`OpenAI Error: ${error} + ${inbMsg}`);
    callback(null, twiml);
  }
};

Note: this version texts any errors back to you.
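Since a 429 is usually transient, another option besides upgrading the account is to retry with backoff. A minimal sketch, assuming the error object carries an HTTP status either directly or in an axios-style `error.response` (as the v3 openai client does); the `withRetry` helper is hypothetical, not part of the tutorial:

```javascript
// Hypothetical retry helper for transient 429 (rate limit) errors.
// `fn` is any async call that may throw an error carrying a status code.
async function withRetry(fn, retries = 3, delayMs = 500) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const status = error.response ? error.response.status : error.status;
      // Re-throw anything that is not a rate limit, or the final failure.
      if (status !== 429 || attempt === retries) throw error;
      // Exponential backoff: 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** attempt));
    }
  }
}
```

In the handler above you would wrap the completion call, e.g. `await withRetry(() => openai.createCompletion({ ... }))`, keeping in mind Twilio Functions have an execution time limit, so keep the retry budget small.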
