Gemini Search Tool (Grounding with Google Search)¶
Gemini models on this platform can use tools (the `tools` field) through either the native Google generateContent interface or the OpenAI-compatible Chat Completions interface.
This article focuses on how to enable the Google Search Grounding (web search) tool for Gemini under the new API platform, and compares it with the Google official REST examples.
Official Reference
🌐 Grounding with Google Search (Overview)¶
The Google Search tool `google_search` connects the Gemini model to live internet content, making answers:

- More factual and reliable: generated from live web content, reducing hallucinations.
- Up to date: able to answer questions about recent events and other time-sensitive topics.
- Backed by source citations: web links and citation snippets are returned in the `groundingMetadata` field, making it easy to render clickable citations on the frontend.
Gemini's `google_search` is very similar in design to OpenAI's web search tool (see gpt/search-tool and the OpenAI Web search docs): once the tool is enabled in the request, the model decides on its own whether a search is needed and attaches structured citation information to the response.
🧩 Basic REST Calling Example (curl)¶
The following example demonstrates how to enable the google_search tool on this platform and in the Google official interface, asking about the champion of Euro 2024.
Our Platform REST Example (Via New API Gateway)¶
curl "https://api-cs-al.naci-tech.com/v1beta/models/gemini-3-flash-preview:generateContent?key=$API_KEY" \
-H "Content-Type: application/json" \
-X POST \
-d '{
"contents": [
{
"parts": [
{
"text": "Who won the Euro 2024? Please provide the answer and attach 2-3 reference links."
}
]
}
],
"tools": [
{
"google_search": {}
}
]
}'
Google Official REST Example¶
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-3-flash-preview:generateContent?key=$GEMINI_API_KEY" \
-H "Content-Type: application/json" \
-X POST \
-d '{
"contents": [
{
"parts": [
{ "text": "Who won the Euro 2024? Please provide 2-3 reference links." }
]
}
],
"tools": [
{
"google_search": {}
}
]
}'
The request bodies of the two are fully compatible; the main differences are:

- Gateway domain and authentication: this platform uses `https://api-cs-al.naci-tech.com/...` with `$API_KEY`, while the official Google endpoint uses `https://generativelanguage.googleapis.com/...` with `$GEMINI_API_KEY`.
- Model name: it is recommended to use the latest Gemini 3 / 2.5 models that officially support `google_search` (see "Supported Models" below).
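Since the two gateways share the same path pattern, switching between them is only a matter of swapping the base URL and key. A small sketch (the helper name is ours, not part of any SDK; the URL pieces mirror the curl examples above):

```javascript
// Sketch: assemble a generateContent URL from base, model, and API key.
// Only the base URL and the key variable differ between the two gateways.
function endpoint(base, model, key) {
  return `${base}/v1beta/models/${model}:generateContent?key=${key}`;
}

// Same model, two bases:
const ours = endpoint(
  "https://api-cs-al.naci-tech.com",
  "gemini-3-flash-preview",
  process.env.API_KEY
);
const official = endpoint(
  "https://generativelanguage.googleapis.com",
  "gemini-3-flash-preview",
  process.env.GEMINI_API_KEY
);
```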
⚙️ How grounding works¶
Referring to Google's official documentation, when you enable the google_search tool in a request, the overall process is as follows:
1. User request: your application sends the user's question (e.g., "Who won the Euro 2024?") together with the `google_search` tool to the Gemini API.
2. Prompt analysis: the model first analyzes the question to decide whether calling Google Search would improve answer quality.
3. Automatic search: if search looks helpful, the model automatically constructs one or more search queries and calls Google Search.
4. Result processing: the model reads the search results, synthesizes information from multiple sources, and generates the final answer.
5. Grounded response: the candidates returned by the API contain the regular text answer plus a `groundingMetadata` field indicating which web content the answer is based on.
For developers, the experience is very simple: just enable the `google_search` tool in the request, and the model and Google Search handle the rest.
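On the client side, the steps above reduce to a single request shape. A minimal sketch (the helper name is illustrative; the body matches the curl examples earlier):

```javascript
// Sketch: the only addition compared to a plain generateContent request body
// is the `tools` array containing an empty google_search object. The model
// then decides per request whether to actually search.
function buildGroundedRequest(question) {
  return {
    contents: [{ parts: [{ text: question }] }],
    tools: [{ google_search: {} }],
  };
}

const body = buildGroundedRequest("Who won the Euro 2024?");
// JSON.stringify(body) can be sent as the POST body in the curl examples above.
```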
📦 Understanding the Response: groundingMetadata Structure¶
When Gemini uses Google Search for grounding, a candidate (candidates[i]) in the response will contain a groundingMetadata field. Below is a simplified example:
{
"candidates": [
{
"content": {
"parts": [
{
"text": "Spain defeated England 2:1 in the Euro 2024 final to win the championship."
}
],
"role": "model"
},
"groundingMetadata": {
"webSearchQueries": [
"UEFA Euro 2024 winner",
"who won euro 2024"
],
"groundingChunks": [
{ "web": { "uri": "https://www.uefa.com/...", "title": "UEFA.com" } },
{ "web": { "uri": "https://www.aljazeera.com/...", "title": "Al Jazeera" } }
],
"groundingSupports": [
{
"segment": {
"startIndex": 0,
"endIndex": 50,
"text": "Spain defeated England 2:1 in the Euro 2024 final"
},
"groundingChunkIndices": [0, 1]
}
]
}
}
]
}
The key fields have the same meanings as in the official documentation:
- `webSearchQueries`: the actual search queries issued by the model, useful for debugging and understanding its search intent.
- `groundingChunks`: each element represents an external web source (with `uri` and `title`), essentially a list of citation sources.
- `groundingSupports`: associates a text span in the model's answer (`segment.startIndex` to `segment.endIndex`) with one or more `groundingChunks`, i.e., which web content that span is based on.
With these fields, you can build fine-grained citation displays on the frontend (e.g., showing [1][2] at the end of a sentence), allowing users to click and jump directly to the corresponding source.
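As a small illustration, a helper (the name is ours, not from any SDK) that turns `groundingChunks` into a numbered source list of the kind you might render under an answer:

```javascript
// Sketch: flatten groundingChunks into display strings such as
// "[1] UEFA.com (https://...)". Assumes the candidate shape shown in the
// sample response above.
function listSources(candidate) {
  const chunks = candidate.groundingMetadata?.groundingChunks ?? [];
  return chunks.map(
    (c, i) => `[${i + 1}] ${c.web?.title ?? "unknown source"} (${c.web?.uri ?? ""})`
  );
}
```

Applied to the sample response above, this yields `[1] UEFA.com (https://www.uefa.com/...)` and `[2] Al Jazeera (https://www.aljazeera.com/...)`.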
🔗 Displaying Inline Citations on the Frontend¶
Google's official documentation provides Python / JavaScript examples demonstrating how to convert groundingSupports and groundingChunks into text with inline citations.
Here is pseudocode illustrating the common processing pattern (the logic is consistent with the official documentation, but simplified):

1. Extract from the response:
    - the original text: `text = candidate.content.parts[0].text`
    - the supports array: `supports = candidate.groundingMetadata.groundingSupports`
    - the citation sources: `chunks = candidate.groundingMetadata.groundingChunks`
2. Sort `supports` by `segment.endIndex` in descending order, so that inserting citations does not shift the indices of spans that come earlier in the text.
3. For each `support`:
    - read `support.groundingChunkIndices` and look up the corresponding `chunks[i].web.uri`;
    - build a citation string for each index, e.g. `[1](uri1)`, `[2](uri2)`;
    - insert the citation string into `text` right after the `segment.endIndex` position.
4. Return the resulting `text_with_citations` with Markdown links, ready to render on the frontend.
Pseudocode illustration (JavaScript style):
function addCitations(response) {
  let text = response.text;
  const supports = response.candidates[0]?.groundingMetadata?.groundingSupports ?? [];
  const chunks = response.candidates[0]?.groundingMetadata?.groundingChunks ?? [];

  // Process supports from the end of the text backwards, so inserted
  // citations do not shift the indices of earlier segments.
  const sortedSupports = [...supports].sort(
    (a, b) => (b.segment?.endIndex ?? 0) - (a.segment?.endIndex ?? 0)
  );

  for (const support of sortedSupports) {
    const endIndex = support.segment?.endIndex;
    if (endIndex == null || !support.groundingChunkIndices?.length) continue;

    // Turn each referenced chunk into a Markdown link like [1](uri).
    const citationLinks = support.groundingChunkIndices
      .map((i) => {
        const uri = chunks[i]?.web?.uri;
        return uri ? `[${i + 1}](${uri})` : null;
      })
      .filter(Boolean);

    if (citationLinks.length > 0) {
      const citationString = citationLinks.join(", ");
      text = text.slice(0, endIndex) + citationString + text.slice(endIndex);
    }
  }

  return text;
}
You can customize the UI display style on top of this (e.g., hover to display webpage titles, open links in new tabs, etc.).
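For instance, hover titles can be achieved by rendering each citation as an HTML anchor whose `title` attribute carries the page title. A sketch (the function name is illustrative):

```javascript
// Sketch: render one citation as an HTML link. Hovering shows the page title,
// and the link opens in a new tab. `index` is the zero-based chunk index.
function citationAnchor(index, chunk) {
  const uri = chunk.web?.uri ?? "#";
  const title = chunk.web?.title ?? "";
  return `<a href="${uri}" title="${title}" target="_blank" rel="noopener">[${index + 1}]</a>`;
}
```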
✅ Supported Models¶
According to Google's official documentation, the following models currently support Grounding with Google Search (the google_search tool):
| Model | Supports Google Search Grounding |
|---|---|
| `gemini-3.1-flash-image-preview` | ✔︎ |
| `gemini-3.1-pro-preview` | ✔︎ |
| `gemini-3-pro-image-preview` | ✔︎ |
| `gemini-3-flash-preview` | ✔︎ |
| `gemini-2.5-pro` | ✔︎ |
| `gemini-2.5-flash` | ✔︎ |
| `gemini-2.5-flash-lite` | ✔︎ |
| `gemini-2.0-flash` | ✔︎ |
Note: Older models used the `google_search_retrieval` tool; for current models, always use `google_search`.
When calling on this platform, it is recommended to prefer the latest generation of Gemini 3 / 2.5 models, for example:

- For fast, low-latency scenarios: `gemini-3-flash-preview`, `gemini-2.5-flash`
- For high-accuracy scenarios: `gemini-3.1-pro-preview`, `gemini-2.5-pro`