In this guide, we'll walk through creating an AI-powered chatbot from scratch. We'll use Azure AI Studio, Neon Postgres as the backend database, React for the frontend interface, and Express for the backend API.
We'll deploy a GPT-4 model to Azure AI Studio, which we will then use to build a support chatbot that can answer questions, store conversations, and learn from interactions over time.
Prerequisites
Before we begin, make sure you have:
- An Azure account with an active subscription
- A Neon account and project
- Basic familiarity with SQL and JavaScript/TypeScript
- Node.js 18.x or later installed
Setting up Your Development Environment
If you haven't already, follow these steps to set up your development environment:
Create a Neon Project
- Navigate to the Neon Console
- Click "New Project"
- Select Azure as your cloud provider
- Choose East US 2 as your region
- Give your project a name (e.g., "chatbot-db")
- Click "Create Project"
Save your connection details - you'll need these to configure your chatbot's database connection.
Create the Database Schema
A standard chatbot needs to store conversations and track how users interact with it. We'll create a database schema in Neon Postgres that stores messages, tracks user data, and helps us understand how well the chatbot is performing.
Our schema will include 4 tables:
- `users`: Stores user information
- `conversations`: Manages chat sessions
- `messages`: Stores the messages between users and the bot
- `feedback`: Records user ratings and comments
Connect to your Neon database and execute the following SQL statements to create the tables:
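The exact schema is up to you; here's a minimal sketch of the four tables described above (the column names are assumptions you can adapt, chosen to match the code samples later in this guide):

```sql
-- Users who interact with the chatbot
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE,
    created_at TIMESTAMP DEFAULT NOW()
);

-- A conversation groups the messages of a single chat session
CREATE TABLE conversations (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    started_at TIMESTAMP DEFAULT NOW()
);

-- Individual messages exchanged between the user and the bot
CREATE TABLE messages (
    id SERIAL PRIMARY KEY,
    conversation_id INTEGER REFERENCES conversations(id),
    sender VARCHAR(10) NOT NULL CHECK (sender IN ('user', 'bot')),
    content TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

-- Optional ratings and comments on individual bot messages
CREATE TABLE feedback (
    id SERIAL PRIMARY KEY,
    message_id INTEGER REFERENCES messages(id),
    rating INTEGER CHECK (rating BETWEEN 1 AND 5),
    comment TEXT,
    created_at TIMESTAMP DEFAULT NOW()
);
```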
With our 4 tables in place, we have a schema that allows us to:
- Track user interactions and store user data
- Manage chat sessions and track when they started
- Store messages between users and the bot
- Collect feedback on messages to improve the chatbot
Set Up Azure AI Studio Project
With your Neon database ready, let's set up Azure AI Studio to deploy our own GPT-4 model.
To access Azure OpenAI Studio, you need to create an Azure OpenAI resource. Here's how you can do that:
- Go to the Azure OpenAI resources portal
- Click the "Create new Azure OpenAI resource" button
- Fill in the required fields like the subscription, resource group, region and name
- Click the "Next" button
- For the network settings, you can leave the defaults so that all networks can access the resource, or you can restrict access to specific networks
- Click "Next" and under "Review + create" click the "Create" button to create the resource
This will create a new Azure OpenAI resource for you. The deployment might take a few minutes to complete.
Once the deployment is complete, you can visit the Azure OpenAI portal again, and you should see your newly created resource listed with the type "OpenAI".
Deploy the Azure OpenAI Model
With the Azure OpenAI resource set up, we can now deploy the GPT-4 model. To deploy the Azure OpenAI model:
- Go to the Azure OpenAI portal again
- Click on your OpenAI resource that you created earlier
- Click on the "Model catalog" tab
- Find and click on the "gpt-4" model from the list
- Click the "Deploy" button
- Wait for deployment to complete - you'll receive an Endpoint URL and API key
There are other models available in the Azure OpenAI Studio, but for this guide, we'll use the GPT-4 model for our chatbot.
After deployment, click "Open in playground" to test the model. The playground is a web interface where you can:
- Test your model by chatting with it directly
- Add training data to help the model understand your specific needs
- Adjust settings like:
  - Maximum response length (how long answers can be)
  - Temperature (higher = more creative, lower = more focused)
  - Top P and Presence Penalty (control response variety)
Feel free to experiment with these settings to see how they affect the model's responses.
Setting Up Model Instructions
You can give the model instructions about how it should behave. Think of this like training a new colleague - you're telling them:
- What they should do
- What they shouldn't do
- How they should talk to users
- What information they can access
For example, you might write:
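The exact wording is up to you; a hypothetical example for a product support bot could look like this:

```
You are a friendly support assistant for Example Corp.
- Answer questions about our products, pricing, and documentation.
- Keep answers short and clear, and point to relevant documentation when possible.
- If you don't know the answer, say so and suggest contacting support.
- Do not answer questions unrelated to Example Corp, and never invent information.
- Never reveal internal details such as API keys or customer data.
```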
These instructions will be included with every message to the model. The model will follow these instructions for all conversations.
Testing Your Instructions
After setting up instructions for the model, you can test them in the playground, for example:
- Try different types of questions in the playground
- Check if the model follows your guidelines
- Adjust the instructions if needed
- Save the instructions when you're happy with the responses
Additionally, you can add training data to help the model understand your specific needs. To learn more about training data, check the Azure OpenAI Studio documentation.
Building the Backend
With the Azure OpenAI model deployed, we can now build the backend API that will interact with the model and store chat data in our Neon database.
But before we start building our backend, let's quickly look at how to get the API code from Azure OpenAI Studio. This will help us make sure that we're using the correct API format.
Getting the API Code from Azure OpenAI Studio
- In the Azure OpenAI Studio playground, click "View code"
- From the dropdown menu, select "JSON"
- Under "Key authentication", you'll see a sample request like this:
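The exact sample depends on your resource and deployment, but it will be a POST request to an endpoint of the form `https://<your-resource-name>.openai.azure.com/openai/deployments/<deployment-name>/chat/completions?api-version=<api-version>`, authenticated with an `api-key` header, and a JSON body roughly like this:

```json
{
  "messages": [
    {
      "role": "system",
      "content": "You are an AI assistant that helps people find information."
    },
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "temperature": 0.7,
  "top_p": 0.95,
  "max_tokens": 800
}
```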
This shows us the exact format we need to use when making API calls to Azure OpenAI.
Setting Up the Project
First, let's create a new Node.js project and install the dependencies that we'll need for our chatbot backend.
Create a new project folder and initialize a new Node.js project:
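For example (the folder name is just a suggestion):

```bash
mkdir chatbot-backend
cd chatbot-backend
npm init -y
```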
After that, install the required packages:
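```bash
npm install express pg dotenv cors axios
```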
The packages we're installing are:
- `express`: Web framework for building our API endpoints
- `pg`: PostgreSQL client for connecting to Neon
- `dotenv`: Environment variable management
- `cors`: Handles Cross-Origin Resource Sharing for our frontend
- `axios`: Makes HTTP requests to the Azure OpenAI API
Project Structure
Before we start, let's organize our project files in a way that makes our code easy to maintain and update. We'll use a standard Node.js project structure that separates our code into different directories based on functionality:
- `config`: Holds configuration files, including database connection settings
- `services`: Contains the core business logic for chat functionality and OpenAI integration
- `routes`: Manages API endpoints and request handling
- `utils`: Stores helper functions and shared utilities
- The `.env` file will store our sensitive configuration values like API keys
The project structure will look like this:
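Assuming the file names we'll use in the rest of this section, it will look roughly like this:

```
chatbot-backend/
├── src/
│   ├── config/
│   │   └── database.js
│   ├── services/
│   │   ├── openaiService.js
│   │   └── chatService.js
│   ├── routes/
│   │   └── chatRoutes.js
│   └── utils/
├── .env
├── package.json
└── server.js
```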
This structure will help us keep our code organized and makes it easier for other developers to understand and work with the project.
Environment Configuration
Before we start coding, let's set up our environment configuration.
Create a `.env` file in your project root with the following configuration:
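Here's a sketch with the variable names the code in this guide expects (the `neondb_owner` role and `neondb` database are Neon's defaults; use the values from your own Neon connection string):

```env
# Neon Postgres connection string (from the Neon Console)
DATABASE_URL=postgresql://neondb_owner:<your_password>@<your_host>/neondb?sslmode=require

# Azure OpenAI settings (from the Azure OpenAI Studio)
AZURE_OPENAI_ENDPOINT=https://<your-resource-name>.openai.azure.com
AZURE_OPENAI_API_KEY=<your-api-key>
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4

# Port for the Express server
PORT=3000
```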
You'll need to replace `<your_password>`, `<your_host>`, `<your-resource-name>`, and `<your-api-key>` with your actual values.
You can get your Azure OpenAI API key from the Azure OpenAI Studio portal under the Chat playground.
Database Configuration
Next, let's set up the database connection. We'll use the `pg` package to connect to our Neon Postgres database.
Create a `src/config/database.js` file with the following code:
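A minimal sketch, assuming the `DATABASE_URL` variable from the `.env` file above:

```javascript
// src/config/database.js
require('dotenv').config();
const { Pool } = require('pg');

// Connection pool for the Neon Postgres database.
// Neon requires TLS, which the ?sslmode=require parameter in the
// connection string takes care of.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

module.exports = pool;
```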
This sets up a connection to the Neon Postgres database using the `pg` package. We use the `DATABASE_URL` environment variable to connect to the database.
OpenAI Service
Next, let's create a service to interact with the Azure OpenAI API. This service will handle sending messages to the GPT-4 model that we deployed earlier.
Create a `src/services/openaiService.js` file with the following code:
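The exact code will depend on your setup; here's one possible sketch using `axios` and the environment variables from our `.env` file (the marketing-assistant system prompt and the `api-version` value are assumptions to adjust):

```javascript
// src/services/openaiService.js
const axios = require('axios');

// Base instructions included with every request; adjust to your use case
const SYSTEM_PROMPT =
  'You are a helpful marketing assistant. Answer questions clearly and concisely, and stay on topic.';

class OpenAIService {
  constructor() {
    this.apiKey = process.env.AZURE_OPENAI_API_KEY;
    this.endpoint = process.env.AZURE_OPENAI_ENDPOINT;
    this.deployment = process.env.AZURE_OPENAI_DEPLOYMENT_NAME;

    // Fail fast if any required Azure OpenAI setting is missing
    if (!this.apiKey || !this.endpoint || !this.deployment) {
      throw new Error(
        'Missing Azure OpenAI configuration. Check AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, and AZURE_OPENAI_DEPLOYMENT_NAME.'
      );
    }

    // Use an api-version supported by your resource
    this.url = `${this.endpoint}/openai/deployments/${this.deployment}/chat/completions?api-version=2024-02-01`;
  }

  // Sends the user's message (plus any previous history) to the deployed
  // GPT-4 model and returns the assistant's reply as a string
  async generateResponse(message, history = []) {
    const messages = [
      { role: 'system', content: SYSTEM_PROMPT },
      ...history, // [{ role: 'user' | 'assistant', content: '...' }, ...]
      { role: 'user', content: message },
    ];

    try {
      const response = await axios.post(
        this.url,
        { messages, max_tokens: 800, temperature: 0.7 },
        { headers: { 'api-key': this.apiKey, 'Content-Type': 'application/json' } }
      );
      return response.data.choices[0].message.content;
    } catch (error) {
      if (error.response && error.response.status === 401) {
        throw new Error('Authentication failed. Check your Azure OpenAI API key.');
      }
      throw new Error(`Azure OpenAI request failed: ${error.message}`);
    }
  }
}

module.exports = new OpenAIService();
```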
This creates a service that handles all communication with Azure OpenAI. It does two main things:
- Checks that we have all the required Azure OpenAI settings (API key, endpoint, and deployment name) when the service starts up
- Provides a `generateResponse` method that:
  - Takes a user's message and any previous conversation history
  - Sends it to our deployed GPT-4 model on Azure
  - Returns the model's response
The service includes the bot's base instructions (as a marketing assistant in this example) and error handling for common issues like authentication problems.
Feel free to adjust the instructions and settings to match your chatbot's needs.
Chat Service
Next, let's create a service to manage chat interactions. This service will handle user messages, conversation history, and saving messages to the database.
Create `src/services/chatService.js` with the following code:
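Here's a possible sketch, assuming the tables from the schema we created earlier and the `database` and `openaiService` modules above:

```javascript
// src/services/chatService.js
const pool = require('../config/database');
const openaiService = require('./openaiService');

class ChatService {
  // Finds an existing user by email or creates a new one
  async findOrCreateUser(email) {
    const existing = await pool.query('SELECT * FROM users WHERE email = $1', [email]);
    if (existing.rows.length > 0) {
      return existing.rows[0];
    }
    const created = await pool.query(
      'INSERT INTO users (email) VALUES ($1) RETURNING *',
      [email]
    );
    return created.rows[0];
  }

  // Starts a new conversation for a user and returns it
  async startConversation(email) {
    const user = await this.findOrCreateUser(email);
    const result = await pool.query(
      'INSERT INTO conversations (user_id) VALUES ($1) RETURNING *',
      [user.id]
    );
    return result.rows[0];
  }

  // Returns all messages in a conversation, oldest first
  async getHistory(conversationId) {
    const result = await pool.query(
      'SELECT sender, content, created_at FROM messages WHERE conversation_id = $1 ORDER BY created_at',
      [conversationId]
    );
    return result.rows;
  }

  // Saves the user's message, asks Azure OpenAI for a reply, saves the reply,
  // and returns it. Both inserts run in a single transaction so a failure
  // doesn't leave a half-written conversation behind.
  async processMessage(conversationId, content) {
    const history = (await this.getHistory(conversationId)).map((m) => ({
      role: m.sender === 'user' ? 'user' : 'assistant',
      content: m.content,
    }));

    const reply = await openaiService.generateResponse(content, history);

    const client = await pool.connect();
    try {
      await client.query('BEGIN');
      await client.query(
        'INSERT INTO messages (conversation_id, sender, content) VALUES ($1, $2, $3)',
        [conversationId, 'user', content]
      );
      const saved = await client.query(
        'INSERT INTO messages (conversation_id, sender, content) VALUES ($1, $2, $3) RETURNING *',
        [conversationId, 'bot', reply]
      );
      await client.query('COMMIT');
      return saved.rows[0];
    } catch (error) {
      await client.query('ROLLBACK');
      throw error;
    } finally {
      client.release();
    }
  }
}

module.exports = new ChatService();
```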
This chat service manages all our conversations and messages. There is a lot going on in this service, so let's break it down:
- Creates or finds users in our database
- Message Handling:
  - Saves messages from both users and the bot
  - Retrieves conversation history
- Conversation Flow:
  - Starts new conversations
  - Processes incoming messages
  - Gets responses from Azure OpenAI
  - Stores everything in our Neon database
We are also using database transactions to make sure that all related data (messages, user info, and conversations) is saved correctly, with rollback support if anything fails. This helps maintain data consistency in our chat application.
You can think of this service as the coordinator between our database, the Azure AI model, and the chat interface we'll build next.
Chat API Routes Implementation
With our services in place, let's create the API routes that will handle incoming requests from our chat interface.
Create a `src/routes/chatRoutes.js` file with the following:
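A sketch wiring the three routes described below to the chat service (the request and response shapes are assumptions that match the sketches above):

```javascript
// src/routes/chatRoutes.js
const express = require('express');
const chatService = require('../services/chatService');

const router = express.Router();

// POST /api/chat/start - create a new conversation for a user
router.post('/start', async (req, res, next) => {
  try {
    const conversation = await chatService.startConversation(req.body.email);
    res.json(conversation);
  } catch (error) {
    next(error);
  }
});

// POST /api/chat/message - save a message and return the bot's reply
router.post('/message', async (req, res, next) => {
  try {
    const { conversationId, content } = req.body;
    const reply = await chatService.processMessage(conversationId, content);
    res.json(reply);
  } catch (error) {
    next(error);
  }
});

// GET /api/chat/history/:conversationId - fetch past messages
router.get('/history/:conversationId', async (req, res, next) => {
  try {
    const messages = await chatService.getHistory(req.params.conversationId);
    res.json(messages);
  } catch (error) {
    next(error);
  }
});

module.exports = router;
```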
These are the API endpoints our chat interface will use to communicate with the backend. We set up three main routes:
- `/start`: Creates a new conversation for a user
- `/message`: Handles sending messages and getting responses from the bot
- `/history`: Retrieves past messages from a conversation
Each route connects to our chat service to perform its specific task. We'll use these routes to build our chat interface in the next section.
Server Setup
Finally, let's set up our Express server to run our chatbot API. We'll also add a health check endpoint and error handling middleware.
Create `server.js` in your project root with the following content:
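A minimal sketch that ties everything together:

```javascript
// server.js
require('dotenv').config();
const express = require('express');
const cors = require('cors');
const chatRoutes = require('./src/routes/chatRoutes');

const app = express();

app.use(cors());          // allow the React frontend to call the API
app.use(express.json());  // parse JSON request bodies

// Chat endpoints
app.use('/api/chat', chatRoutes);

// Simple health check endpoint
app.get('/health', (req, res) => {
  res.json({ status: 'ok' });
});

// Error handling middleware: log unhandled errors and return a 500
app.use((err, req, res, next) => {
  console.error(err);
  res.status(500).json({ error: 'Something went wrong' });
});

const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`Chatbot API listening on port ${port}`);
});
```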
The above is our main application file that brings everything together. It sets up an Express server with:
- CORS support to allow frontend access
- JSON parsing for API requests
- Our chat routes at `/api/chat`
- A health check endpoint to monitor the server
We also include an error handling middleware to catch any unhandled exceptions and log them to the console for easier debugging.
Running the Application
Starting the server is straightforward: just run `node server.js`. Once started, the server will:
- Connect to your Neon database
- Listen for chat requests
- Be ready to handle messages from the chat interface
You can now send requests to `http://localhost:3000/api/chat` (or whichever port you configured) to interact with your chatbot.
Creating the React Frontend
With our backend API ready, let's create a React frontend for our chatbot using Tailwind CSS for styling. We'll use TypeScript for type safety and Vite for faster development.
Create React Project
First, let's create a new React project using Vite:
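For example, with the React + TypeScript template (the project name `chatbot-ui` is just a suggestion):

```bash
npm create vite@latest chatbot-ui -- --template react-ts
```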
Then navigate to the project directory:
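```bash
cd chatbot-ui  # or whatever you named the project
```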
And install the dependencies for the project:
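```bash
npm install
```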
After that, let's install the necessary packages for our chatbot interface such as Tailwind CSS and Axios:
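For example, adding Tailwind CSS (with its PostCSS tooling) as dev dependencies and Axios as a regular dependency:

```bash
npm install -D tailwindcss postcss autoprefixer
npm install axios
```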
Also, let's install some additional utilities for our chatbot interface like clsx, Heroicons, and date-fns:
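```bash
npm install clsx @heroicons/react date-fns
```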
The `clsx` package is used to conditionally apply CSS classes, `@heroicons/react` provides SVG icons, and `date-fns` helps with date formatting. These packages are not required, but they will make our chat interface a bit more user-friendly.
Configure Tailwind CSS
With Tailwind CSS installed, let's set it up in our project. Start by initializing Tailwind CSS:
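With Tailwind CSS v3, the command is (the `-p` flag also creates a `postcss.config.js` file, which Vite needs to process Tailwind):

```bash
npx tailwindcss init -p
```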
This will create a `tailwind.config.js` file in your project root. Update the file with the following configuration:
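A sketch of what this could look like; the color values are placeholders to swap for your own palette:

```javascript
// tailwind.config.js
/** @type {import('tailwindcss').Config} */
export default {
  // Files Tailwind should scan for class names
  content: ['./index.html', './src/**/*.{js,ts,jsx,tsx}'],
  theme: {
    extend: {
      // Example custom colors for the chat interface
      colors: {
        primary: '#00E599',
        secondary: '#1A1A1A',
      },
    },
  },
  plugins: [],
};
```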
The above configuration extends the default Tailwind theme with custom colors for our chatbot interface and also specifies the content files to process.
After that, add the Tailwind directives to the `src/index.css` file:
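```css
@tailwind base;
@tailwind components;
@tailwind utilities;
```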
This will apply the Tailwind CSS styles to our project, so we can use them in our components.
Create Environment Configuration
Our chatbot interface will need to connect to the backend API to send and receive messages. Let's set up the API URL in our environment configuration.
Create a `.env` file in the project root:
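Assuming the backend from the previous section runs locally on port 3000:

```env
VITE_API_URL=http://localhost:3000/api
```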
Make sure to replace the `VITE_API_URL` value with the actual URL of your backend API. This will allow our chatbot interface to communicate with the backend application.
Project Structure
For our chat interface, let's organize our React components into a maintainable structure:
- `components/Chat`: Contains all chat-related components like message bubbles and input fields
- `components/Layout`: Holds reusable layout components
- `hooks`: Stores custom React hooks for managing chat functionality
- `types`: Defines TypeScript interfaces for our chat data
This structure will allow us to separate our code into logical pieces, so that it will be easier to find and update specific parts of the application.
The project structure will look like this:
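Using the file names we'll create in the steps below, it looks roughly like this:

```
chatbot-ui/
├── src/
│   ├── components/
│   │   ├── Chat/
│   │   │   ├── ChatBubble.tsx
│   │   │   ├── ChatInput.tsx
│   │   │   └── ChatInterface.tsx
│   │   └── Layout/
│   │       └── Container.tsx
│   ├── hooks/
│   │   └── useChat.ts
│   ├── types/
│   │   └── chat.ts
│   ├── App.tsx
│   ├── index.css
│   └── main.tsx
├── .env
└── package.json
```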
With our project structure in place, let's start building our chatbot interface. We will create the components for our chat interface starting with the types and basic components, then bringing it all together.
1. Define Message Types
First, let's define TypeScript types for our chat messages:
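For example, in a `src/types/chat.ts` file (the file name and fields are suggestions that match the components below):

```typescript
// src/types/chat.ts
export interface Message {
  sender: 'user' | 'bot'; // who sent the message
  content: string;        // the message text
  timestamp: string;      // ISO date string of when it was sent
}
```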
This defines a `Message` interface that tracks:
- Who sent the message (`sender`)
- Message content (`content`)
- When it was sent (`timestamp`)
We'll use this type to manage chat messages in our application.
2. Create the Layout Container
Next, let's create a container component which will provide a consistent spacing and width for our chat interface:
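A minimal sketch, as `src/components/Layout/Container.tsx` (the Tailwind classes are just one way to do it):

```tsx
// src/components/Layout/Container.tsx
import type { ReactNode } from 'react';

interface ContainerProps {
  children: ReactNode;
}

// Centers its children and limits the width of the chat interface
export default function Container({ children }: ContainerProps) {
  return (
    <div className="mx-auto flex min-h-screen max-w-2xl flex-col p-4">
      {children}
    </div>
  );
}
```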
The container is a simple component that wraps all our chat components in a centered, responsive layout.
3. Build the Message Bubble Component
Each chat message will be displayed as a bubble with different styles for user and bot messages:
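One possible implementation, as `src/components/Chat/ChatBubble.tsx` (the styling and the `primary` color are examples):

```tsx
// src/components/Chat/ChatBubble.tsx
import clsx from 'clsx';
import { format } from 'date-fns';
import type { Message } from '../../types/chat';

interface ChatBubbleProps {
  message: Message;
}

// Renders a single message, right-aligned for the user and
// left-aligned for the bot, with the time it was sent underneath
export default function ChatBubble({ message }: ChatBubbleProps) {
  const isUser = message.sender === 'user';

  return (
    <div className={clsx('flex', isUser ? 'justify-end' : 'justify-start')}>
      <div
        className={clsx(
          'max-w-xs rounded-lg px-4 py-2',
          isUser ? 'bg-primary text-white' : 'bg-gray-100 text-gray-900'
        )}
      >
        <p>{message.content}</p>
        <p className="mt-1 text-xs opacity-70">
          {format(new Date(message.timestamp), 'HH:mm')}
        </p>
      </div>
    </div>
  );
}
```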
The chat bubble component:
- Takes a `message` prop with sender and content
- Uses different styles for user vs bot messages
- Shows the message timestamp and aligns user messages to the right, bot messages to the left
We are going to use this component to render chat messages in the chat interface.
4. Create the Message Input Component
Next, let's build an input field for users to type messages and a submit button:
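A sketch as `src/components/Chat/ChatInput.tsx`, using the Heroicons send icon:

```tsx
// src/components/Chat/ChatInput.tsx
import { useState, type FormEvent } from 'react';
import { PaperAirplaneIcon } from '@heroicons/react/24/solid';

interface ChatInputProps {
  onSend: (content: string) => void;
  isLoading: boolean;
}

// Text input plus submit button; disabled while a reply is being fetched
export default function ChatInput({ onSend, isLoading }: ChatInputProps) {
  const [value, setValue] = useState('');

  const handleSubmit = (event: FormEvent) => {
    event.preventDefault();
    if (!value.trim()) return;
    onSend(value);
    setValue('');
  };

  return (
    <form onSubmit={handleSubmit} className="flex gap-2">
      <input
        type="text"
        value={value}
        onChange={(e) => setValue(e.target.value)}
        placeholder="Type a message..."
        className="flex-1 rounded-lg border px-4 py-2"
      />
      <button
        type="submit"
        disabled={isLoading}
        className="rounded-lg bg-primary px-4 py-2 text-white disabled:opacity-50"
      >
        <PaperAirplaneIcon className="h-5 w-5" />
      </button>
    </form>
  );
}
```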
This component includes:
- A text input field for messages
- A submit button with loading state
- Simple form handling with `preventDefault()`
The input field will allow users to type messages and submit them to the backend, which forwards them to the Azure OpenAI model we deployed earlier.
5. Create the Chat Hook
After building the basic components, let's create a custom hook to manage chat state and interactions. This hook will handle sending messages, loading states, and API calls to the backend:
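One possible implementation as `src/hooks/useChat.ts`, assuming the `/start` and `/message` endpoints from the backend section (the placeholder email used to start a conversation is an assumption):

```typescript
// src/hooks/useChat.ts
import { useState } from 'react';
import axios from 'axios';
import type { Message } from '../types/chat';

const API_URL = import.meta.env.VITE_API_URL;

// Manages the message list, loading state, and calls to the backend API
export function useChat() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [conversationId, setConversationId] = useState<number | null>(null);
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);

  const sendMessage = async (content: string) => {
    const userMessage: Message = {
      sender: 'user',
      content,
      timestamp: new Date().toISOString(),
    };
    setMessages((prev) => [...prev, userMessage]);
    setIsLoading(true);
    setError(null);

    try {
      // Start a conversation on the first message
      let id = conversationId;
      if (!id) {
        const { data } = await axios.post(`${API_URL}/chat/start`, {
          email: 'anonymous@example.com', // placeholder user identifier
        });
        id = data.id;
        setConversationId(id);
      }

      // The backend saves both messages and returns the bot's reply
      const { data } = await axios.post(`${API_URL}/chat/message`, {
        conversationId: id,
        content,
      });
      setMessages((prev) => [
        ...prev,
        { sender: 'bot', content: data.content, timestamp: data.created_at },
      ]);
    } catch {
      setError('Failed to send message. Please try again.');
    } finally {
      setIsLoading(false);
    }
  };

  return { messages, isLoading, error, sendMessage };
}
```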
The hook handles:
- Message state management
- API calls to the backend
- Error handling with user feedback
This hook will be used in the main chat interface component to manage chat interactions and state updates.
6. Build the Main Chat Interface
Finally, let's combine everything into the main chat interface component:
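For example, as `src/components/Chat/ChatInterface.tsx`:

```tsx
// src/components/Chat/ChatInterface.tsx
import ChatBubble from './ChatBubble';
import ChatInput from './ChatInput';
import { useChat } from '../../hooks/useChat';

// Combines the message list, error display, and input field
export default function ChatInterface() {
  const { messages, isLoading, error, sendMessage } = useChat();

  return (
    <div className="flex flex-1 flex-col gap-4">
      <div className="flex flex-1 flex-col gap-2 overflow-y-auto">
        {messages.map((message, index) => (
          <ChatBubble key={index} message={message} />
        ))}
      </div>
      {error && <p className="text-sm text-red-500">{error}</p>}
      <ChatInput onSend={sendMessage} isLoading={isLoading} />
    </div>
  );
}
```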
The main interface component:
- Uses our chat hook for state management
- Renders message history with `ChatBubble` components
- Handles message input with the `ChatInput` component
This component will display the chat interface with message bubbles, input field, and submit button for users to interact with the chatbot.
7. Update App Component
To finish up, we can update the main App component to use our chat interface and wrap it in a container for layout:
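A minimal `src/App.tsx` might look like this:

```tsx
// src/App.tsx
import Container from './components/Layout/Container';
import ChatInterface from './components/Chat/ChatInterface';

export default function App() {
  return (
    <Container>
      <ChatInterface />
    </Container>
  );
}
```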
This wraps our chat interface in the container component for proper layout and spacing.
You can now start the development server to see your chat interface:
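```bash
npm run dev
```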
Visit `http://localhost:5173` to test the chatbot interface. It will automatically connect to your backend API running on port 3000. Make sure your backend server is running before testing the chat interface.
Conclusion
In this guide, we've built a simple AI-powered chatbot widget that combines Azure AI Studio with Neon's serverless Postgres database. This implementation works well for documentation websites and help systems, where the chatbot can be embedded as a widget to provide immediate assistance to users.
When the Azure OpenAI model is trained on your specific documentation or knowledge base, the chatbot can provide accurate, relevant responses about your product or service. This creates a seamless experience for anonymous users who can get quick answers without searching through documentation.
Also, by capturing chat interactions, user queries, bot responses, and feedback in your database, you can analyze where users face challenges and identify areas for documentation improvement.
As a next step, you can further train your Azure OpenAI model with more specific data to improve its accuracy and relevance and extend the chatbot's functionality to handle more complex queries and tasks.
Additional Resources
Need help?
Join our Discord Server to ask questions or see what others are doing with Neon. Users on paid plans can open a support ticket from the console. For more details, see Getting Support.