Prisma PHP AI

Prisma PHP AI is a powerful tool that makes it easy to integrate chatbots and AI into your PHP applications. It provides a simple, user-friendly way to build chatbot and AI features, offering a seamless path for adding advanced AI capabilities to your PHP codebase.

Getting Started with Prisma PHP AI

The Prisma PHP AI class lives in /src/app/Lib/AI/ChatGPTClient.php. To get started with Prisma PHP AI, import the ChatGPTClient class and create an instance of it. The ChatGPTClient class provides methods for sending messages to the OpenAI API and processing the AI's response.

  1. Create an API key from OpenAI and set it in your environment variables, for example: CHATGPT_API_KEY=sk-your-key.
  2. Create an instance of the ChatGPTClient class. Depending on your setup, pass the API key to the constructor or let the class read CHATGPT_API_KEY from the environment, as in the example below.
  3. Call $chatGPTMessage = $chatGPTClient->sendMessage($conversationHistory, 'Generate a short title for this chat.'); to send a message to the OpenAI API and retrieve the AI's response.
  4. Optionally, use the formatGPTResponseToHTML method to format the AI's response as HTML. Its formatting support is not exhaustive, but it is a good starting point for displaying the AI's response in your PHP application.
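Condensed, the steps above amount to a few lines. This is a minimal sketch; it assumes the constructor picks up CHATGPT_API_KEY from the environment, as in the full example below:

```php
<?php

use Lib\AI\ChatGPTClient;

// Reads CHATGPT_API_KEY from the environment (see step 1).
$chatGPTClient = new ChatGPTClient();

// The conversation history to send; here a single user message.
$conversationHistory = ['Hello! Can you explain what Prisma PHP is?'];

// Send the history plus an instruction and get the AI's reply back.
$title = $chatGPTClient->sendMessage($conversationHistory, 'Generate a short title for this chat.');

echo $title;
```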

Example Usage

<?php

  use Lib\AI\ChatGPTClient;
  use Lib\Auth\Auth;
  use Lib\Prisma\Classes\Prisma;
  
  $chatGPTClient = new ChatGPTClient();
  $auth = Auth::getInstance();
  $prisma = Prisma::getInstance();
  
  $user = $auth->getPayload();
  
  function generateTitle()
  {
      global $chatGPTClient, $prisma, $user;
  
      $userMessagesResponse = $prisma->chat->findMany([
          'where' => [
              'userId' => $user->id
          ]
      ]);
  
      // No stored messages for this user yet.
      if (!$userMessagesResponse) return false;
  
      // Collect the user's messages as the conversation history.
      $conversationHistory = array_column($userMessagesResponse, 'userMessage');
  
      $chatGPTMessage = $chatGPTClient->sendMessage($conversationHistory, 'Generate a short title for this chat.');
      return ['title' => $chatGPTMessage];
  }
  
  ?>

Note: The $chatGPTClient variable is an instance of the ChatGPTClient class, which provides the methods for sending messages to the OpenAI API and processing the AI's response.

Example Usage: Formatting AI Response to HTML

<div class="chat chat-start">
      <div class="chat-bubble chat-bubble-neutral/20 overflow-x-auto">
          <?= $chatGPTClient->formatGPTResponseToHTML($chat->aiResponse) ?>
      </div>
  </div>

Note: As mentioned above, formatGPTResponseToHTML converts the AI's response into an HTML-friendly format. Its formatting support is not exhaustive, so extend the method if your responses need richer markup.
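The actual implementation lives in ChatGPTClient. Purely as an illustration of the kind of conversion such a method performs, a simplified formatter might look like this (the function name and formatting rules here are hypothetical, not Prisma PHP AI's API):

```php
<?php

// Hypothetical, simplified illustration of response-to-HTML formatting;
// the real logic lives in ChatGPTClient::formatGPTResponseToHTML.
function formatResponseToHTML(string $response): string
{
    // Escape HTML first so the model's output cannot inject markup.
    $html = htmlspecialchars($response, ENT_QUOTES, 'UTF-8');

    // **bold** -> <strong>bold</strong>
    $html = preg_replace('/\*\*(.+?)\*\*/s', '<strong>$1</strong>', $html);

    // `inline code` -> <code>inline code</code>
    $html = preg_replace('/`([^`]+)`/', '<code>$1</code>', $html);

    // Preserve line breaks inside the chat bubble.
    return nl2br($html);
}

echo formatResponseToHTML("Here is **bold** text and `code`.");
```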

Determining the Appropriate Model

Go to the /src/app/Lib/AI/ChatGPTClient.php file and look for the determineModel private method. This method is responsible for selecting the model to use based on the conversation history. It applies a simple heuristic based on the length of the conversation: the number of messages and an estimated token count. You can customize this method to suit your specific needs and improve the accuracy of the model selection process.

private function determineModel(array $conversationHistory)
  {
      // Example logic for model selection
      $messageCount = count($conversationHistory);
  
      // Approximate the token count with a word count; real tokenization
      // differs, but this is close enough for a coarse threshold.
      $totalTokens = array_reduce($conversationHistory, function ($carry, $item) {
          return $carry + str_word_count($item['content'] ?? '');
      }, 0);
  
      // If the conversation is long or complex, use a model with more tokens
      if ($totalTokens > 4000 || $messageCount > 10) {
          return 'gpt-3.5-turbo-16k'; // Use the model with a larger token limit
      }
  
      // Default to the standard model for shorter conversations
      return 'gpt-3.5-turbo';
  }

In that method, you can customize the logic to determine the appropriate model based on your specific requirements. For example, you can consider the length of the conversation history, the complexity of the conversation, the presence of specific keywords, or any other relevant information. By default, the code uses the gpt-3.5-turbo model. Feel free to modify the logic in the determineModel method to improve the accuracy of the model selection process. For more information about the available OpenAI models, refer to the OpenAI Chat Guide.
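As one concrete customization, a keyword check can be layered on top of the length heuristic. The sketch below is a standalone illustration; the keyword list and thresholds are assumptions for the example, not Prisma PHP AI defaults:

```php
<?php

// Illustrative variant of determineModel that also checks for keywords.
// The keyword list and thresholds are examples, not Prisma PHP AI defaults.
function determineModel(array $conversationHistory): string
{
    $complexKeywords = ['code', 'refactor', 'analyze', 'sql'];

    $messageCount = count($conversationHistory);
    $totalTokens  = 0;
    $hasKeyword   = false;

    foreach ($conversationHistory as $item) {
        $content      = $item['content'] ?? '';
        $totalTokens += str_word_count($content); // rough token estimate

        foreach ($complexKeywords as $keyword) {
            if (stripos($content, $keyword) !== false) {
                $hasKeyword = true;
            }
        }
    }

    // Long, large, or keyword-heavy conversations get the larger model.
    if ($totalTokens > 4000 || $messageCount > 10 || $hasKeyword) {
        return 'gpt-3.5-turbo-16k';
    }

    return 'gpt-3.5-turbo';
}
```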