Tech Wizard

How to Use Gemini API in NextJs


AI apps are evolving rapidly, which is great news for developers and users. I'm interested in using AI to generate blog article summaries to make it easier for readers to grasp the main points before diving into the full article.

This article walks through integrating the Gemini API into a Next.js app and the various functionalities it offers. You'll be surprised how straightforward incorporating the API into your app is. Before I begin, I'd like to give a shout-out to Avinash Prasad for providing a simple guide.

Step 1: Obtain the Gemini API Key

To begin, head over to Google AI Studio and create a free Gemini API key. This key lets you communicate with the various Gemini models. Save the key in a .env file as GEMINI_API_KEY.

//.env
GEMINI_API_KEY="YOUR_KEY_HERE"

Step 2: Install Google Generative AI Package

The Google AI JavaScript SDK enables you to use Google's generative AI models. Make sure you are in your project's root directory and run the following command in your terminal:

npm install @google/generative-ai

Step 3: Create An API Route to Handle Requests

In the api folder, create a route handler that communicates with Gemini. For this tutorial, I created a gemini folder containing a route.ts file, so requests to Gemini go to /api/gemini.

//api/gemini/route.ts
//I recommend using TypeScript for type safety
import { NextRequest, NextResponse } from "next/server";
import { GoogleGenerativeAI } from "@google/generative-ai";

export async function POST(req: NextRequest) {
  try {
    const apiKey = process.env.GEMINI_API_KEY;
    //guard against a missing key; this also tells TypeScript apiKey is a string below
    if (!apiKey) {
      throw new Error(
        "GEMINI_API_KEY is not defined in the environment variables."
      );
    }
    const genAI = new GoogleGenerativeAI(apiKey);
    const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" }); //you can choose other models such as gemini-1.5-pro
    const { message } = await req.json();
    const result = await model.generateContent(message);
    const response = result.response.text();
    return NextResponse.json({ message: response });
  } catch (error) {
    console.error(error);
    return NextResponse.json(
      { error: "Something went wrong" },
      { status: 500 }
    );
  }
}

Step 4: Create A Helper Function

Since this is a POST route, we need a function that takes the message body and fetches data from the route. I recommend creating this function in a lib file for reusability and type safety.

//lib/generate.ts
import { baseUrl } from ".";

type Data = {
  message: string;
};

export async function handleGenerate(data: Data) {
  try {
    const response = await fetch(`${baseUrl}/gemini`, {
      method: "POST",
      headers: {
        Accept: "application/json",
        "Content-Type": "application/json",
      },
      body: JSON.stringify(data),
    });
    if (!response.ok) {
      throw new Error(`HTTP error! Status: ${response.status}`);
    }
    //use a different name than the data parameter to avoid shadowing it
    const result = await response.json();
    return result;
  } catch (error) {
    console.error(error);
    return null;
  }
}

Step 5: Fetching Data

We can now import this function and call it whenever we need to contact Gemini, passing the message in the request body.

//example component chat.js
import { handleGenerate } from "@/lib/generate";

export default function Chat() {
  //the form action receives a FormData object; pull out the message field
  async function sendMessage(formData) {
    const message = formData.get("message");
    await handleGenerate({ message });
  }
  return (
    <form action={sendMessage}>
      <input type="text" name="message" id="message" placeholder="type your message here" />
      <button type="submit" className="add styling">Send</button>
    </form>
  );
}

This simple code uses a form action to send the message. However, we still need to handle the response, which means using useState to store what handleGenerate returns.

Step 6: Rendering Data

//example component chat.js
"use client";
import { useState } from "react";
import { handleGenerate } from "@/lib/generate";

export default function Chat() {
  const [message, setMessage] = useState("");
  const [response, setResponse] = useState("");

  async function handleSubmit() {
    const data = await handleGenerate({ message });
    if (data) setResponse(data.message);
    setMessage("");
  }

  //the reply is rendered as plain text; if Gemini returns HTML or markdown,
  //run it through an HTML/markdown parser before rendering
  return (
    <form action={handleSubmit}>
      <input
        type="text"
        name="message"
        value={message}
        onChange={(e) => setMessage(e.target.value)}
        placeholder="type your message here"
      />
      <button type="submit" className="add styling">
        Send
      </button>
      <div className="bg-transparent rounded mt-7 p-2 flex justify-center">
        {response}
      </div>
    </form>
  );
}

Advanced 

Depending on your use case, you might need to stream the incoming data to the client, for example if you are building a chatbot. You can stream the response by using the SDK's generateContentStream method and returning the chunks as a streaming response:

import { NextRequest, NextResponse } from "next/server";
import { GoogleGenerativeAI } from "@google/generative-ai";

export async function POST(req: NextRequest) {
  try {
    const apiKey = process.env.GEMINI_API_KEY;
    if (!apiKey) {
      throw new Error(
        "GEMINI_API_KEY is not defined in the environment variables."
      );
    }
    const genAI = new GoogleGenerativeAI(apiKey);
    const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
    const { message } = await req.json();
    const result = await model.generateContentStream(message);
    //pipe the chunks into a ReadableStream so the client receives them as they arrive,
    //instead of returning inside the loop (which would send only the first chunk)
    const encoder = new TextEncoder();
    const stream = new ReadableStream({
      async start(controller) {
        for await (const chunk of result.stream) {
          controller.enqueue(encoder.encode(chunk.text()));
        }
        controller.close();
      },
    });
    return new Response(stream, {
      headers: { "Content-Type": "text/plain; charset=utf-8" },
    });
  } catch (error) {
    console.error(error);
    return NextResponse.json(
      { error: "Something went wrong" },
      { status: 500 }
    );
  }
}
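On the client, the streamed route can be consumed incrementally with the Fetch API's stream reader. Below is a minimal sketch; readStream and its onChunk callback are illustrative helpers I'm assuming for this example, not part of the SDK:

```typescript
// Read a streamed Response body chunk by chunk, invoking onChunk for each
// decoded piece of text, and return the full accumulated reply.
export async function readStream(
  body: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void
): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let full = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text);
  }
  return full;
}
```

In the Chat component, you could then fetch /api/gemini and call readStream(res.body!, (t) => setResponse((prev) => prev + t)) to append text to the UI as it arrives.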

Conclusion

Integrating a Gemini model into your app is as simple as adding the code blocks above. However, note that the API has rate limits, and you might want to configure your model to cap the number of output tokens used per response.
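As a sketch, that token cap can be set through the model's generationConfig when creating it (maxOutputTokens and temperature are fields of the SDK's GenerationConfig; the values below are illustrative, not recommendations):

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

// Cap each reply's length and tune randomness; both fields are optional.
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  generationConfig: {
    maxOutputTokens: 256, // illustrative cap; adjust for your use case
    temperature: 0.7,
  },
});
```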


Comments (3)

the don: ✨ AI ✨

Tech Wizard: One thing to note is that DOMParser might give you errors saying it does not exist, since it is only available in the browser; you need to ensure the window object exists by checking typeof window !== "undefined". Alternatively, you can use an HTML parser library.

Tutor Juliet: Gemini is better since OpenAI charges for their API keys.