ChatGpt integration
-
@SILLEN-0 what is the data format of the response? text, json, ???
you said
‘say banana’
did it return a wav file?
if you add the MMM-Logging module, then you can see all the logs in one place (the output of npm start)
socket.io requires a serializable object
the error says it is in the getResponse() function,
so, did your module get multiple requests concurrently and send them on?
I would make the notification string more specific, e.g. ‘gpt question’
there are hundreds of notifications going on; it’s possible another module was sending that same string…
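as a quick sketch of the serializable-object point (the object shapes here are made up to mimic the axios response, not your actual code): a raw http response can’t be sent as a socket notification payload because the request and socket reference each other, so the object graph is circular and JSON serialization fails — send only the plain data you need, under a distinctive notification name:

```javascript
// A real ClientRequest holds a reference to its socket, and the socket
// points back at the request, so the object graph is circular and
// JSON-style serialization (which socket notifications depend on) fails.
const fakeSocket = {};
const fakeResponse = { status: 200, request: { socket: fakeSocket } };
fakeSocket.owner = fakeResponse.request; // circular, like a real response

let serializable = true;
try {
  JSON.stringify(fakeResponse);
} catch (e) {
  serializable = false; // TypeError: Converting circular structure to JSON
}
console.log(serializable); // false

// instead, build a plain payload with only the fields you need,
// under a distinctive notification name (name is made up here)
const payload = { notification: "MYMODULE_GPT_ANSWER", text: "banana" };
console.log(JSON.stringify(payload)); // a plain object serializes fine
```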
-
@SILLEN-0 also that is the error part of the pm2 logs --lines output
when I am developing
I don’t use pm2,
I just do npm start >somefile.txt 2>&1
in the MagicMirror folder
then all the console output is in one file.
adding the logging module will reflect all the modulename.js log statements too. AND you can debug on the mm UI: ctrl-shift-i,
select the Sources tab, find your modulename.js in the left nav, and you can put breakpoints on lines of code and step through it
-
@sdetweil hey. I’m sorry for not being able to answer, but I have other stuff going on and a math test coming up. The text is sent over as a string, and the module does receive the string and gets a response from the API; it is after that that the issue occurs. I googled the error and people say there is probably a loop in the code somewhere. I will try with the debug module when I have time. I really want to get this to work.
-
@SILLEN-0 the data back from the API looks like this:
status: 200, statusText: 'OK',
headers: { date: 'Sun, 29 Jan 2023 23:56:49 GMT', 'content-type': 'application/json', 'content-length': '343', connection: 'close', 'openai-model': 'text-davinci-003', 'openai-processing-ms': '2790', 'openai-version': '2020-10-01', 'x-request-id': 'd341b29147db89ee20886107e22b41a4', ... },
config: { method: 'post', data: '{"model":"text-davinci-003","prompt":"What is the weather like today?","temperature":0.7}', url: 'https://api.openai.com/v1/completions', ... },
request: ClientRequest { method: 'POST', path: '/v1/completions', host: 'api.openai.com', _header: 'POST /v1/completions HTTP/1.1\r\n' + 'Accept: application/json, text/plain, */*\r\n' + 'Content-Type: application/json\r\n' + 'User-Agent: OpenAI/NodeJS/3.1.0\r\n' + 'Authorization: Bearer sk-[redacted]\r\n' + 'Content-Length: 89\r\n' + 'Host: api.openai.com\r\n' + 'Connection: close\r\n' + '\r\n', ... },
data: { id: 'cmpl-6eBpQkPEmW4z1PoxRTTprs8Lhoqn4', object: 'text_completion', created: 1675036608, model: 'text-davinci-003', choices: [Array], usage: [Object] }
answer: { ...the same response object printed a second time... }
I don’t see the text of the answer in that dump (choices is collapsed to [Array])
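for what it’s worth, the completion text is nested in data.choices, which the console collapses to [Array]. A minimal sketch of pulling it out (extractAnswer is a made-up helper, and the mock mirrors the shape of the dump above with choices expanded):

```javascript
// extractAnswer is a hypothetical helper: pull the completion text out of
// the response shown above (response.data is the parsed JSON body)
function extractAnswer(response) {
  const choices = (response.data && response.data.choices) || [];
  // the completions API puts the answer text in choices[0].text,
  // usually with leading newlines, hence the trim()
  return choices.length ? choices[0].text.trim() : "";
}

// mock with the same shape as the dump above, choices expanded
const mockResponse = {
  data: {
    object: "text_completion",
    model: "text-davinci-003",
    choices: [{ text: "\n\nIt is sunny today.", index: 0, finish_reason: "stop" }],
  },
};

console.log(extractAnswer(mockResponse)); // It is sunny today.
```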
-
@sdetweil Hey, any updates on this?
-
@sdetweil Do you have any updates regarding the speech capture process?
-
@PraiseU I wasn’t working on any speech capture integration.
I outlined the possibilities.
-
@sdetweil Oh okay! I found this module: https://github.com/TheStigh/MMM-VoiceCommander. Do you think it can be useful, or does it still have the problem with accuracy?
-
@PraiseU said in ChatGpt integration:
@sdetweil Oh okay! I found this module: https://github.com/TheStigh/MMM-VoiceCommander. Do you think it can be useful, or does it still have the problem with accuracy?
yes, it is built on mmm-voice
-
@sdetweil Can you shed some light on what you mean by this: “this is why the MMM-GoogleAssistant provides mechanisms (recipe) to use the captured text for non-Google uses”? And how can I approach this? Thanks a lot