Browser Infrastructure for AI Agents
Agent Browser lets you execute agentic workflows on remote browsers that never get blocked. Infinitely scalable, headless or headful, and powered by the world’s most reliable proxy network.
Navigate any website like a human would
- Seamlessly access any public website using browser fingerprinting and CAPTCHA solving.
- Spin up unlimited parallel sessions from any geolocation without losing performance.
- Leverage headful or headless browsers to control context, cookies and tabs.
- Seamless integration through API or MCP, with no need for per-site configuration.
Make the Web AI-Ready
Bright Data Powers the World's Top Brands
Bright Data allows autonomous AI agents to navigate websites, find information, and perform actions automatically in a simple-to-integrate, consistent, and reliable environment.
Power your most complex workflows
Agent interaction
- Enable agentic task automations
- Fill forms, search, and more (see the sketch below)
- Quick start with low latency
- Ensure secure, isolated sessions
Stealth browsing
- Use geolocation proxies
- Human-like fingerprinting
- Automatically solve CAPTCHAs
- Manage cookies & sessions
AI-ready data pipeline
- Discover relevant data sources
- Real-time or batch collection
- Structured or unstructured output
- Integrate seamlessly via MCP
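To make the agent-interaction workflow above concrete, here is a minimal sketch of an agent filling out a search form through a remote browser. It assumes the Playwright-over-CDP connection pattern shown in the code samples further down this page; the target URL (https://example.com/search) and the form selector (input[name="q"]) are hypothetical placeholders, not a real site.

const pw = require('playwright');

// Connection string format from the samples below; replace the placeholders with your credentials.
const SBR_CDP = 'wss://brd-customer-CUSTOMER_ID-zone-ZONE_NAME:PASSWORD@brd.superproxy.io:9222';

async function searchExample() {
    const browser = await pw.chromium.connectOverCDP(SBR_CDP);
    try {
        const page = await browser.newPage();
        // Hypothetical target page and input selector, for illustration only.
        await page.goto('https://example.com/search');
        await page.fill('input[name="q"]', 'agent browser');
        await page.press('input[name="q"]', 'Enter');
        await page.waitForLoadState('domcontentloaded');
        // Collect whatever the agent needs from the results page.
        const headings = await page.$$eval('h3', els => els.map(el => el.textContent.trim()));
        console.log(headings);
    } finally {
        await browser.close();
    }
}

searchExample().catch(err => {
    console.error(err.stack || err);
    process.exit(1);
});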
Headless & Headful browsers for unlimited, cost-effective web access and navigation
Human-like fingerprints
Emulate real users' browsers to simulate a human experience
Stealth mode
Ethically bypass bot detection and solve CAPTCHAs
Low latency sessions
Sub-second connection and stable sessions ensuring smooth interaction
Set referral headers
Simulate traffic originating from popular or trusted websites
Manage cookies and sessions
Prevent potential blocks imposed by cookie-related factors
Automatic retries and IP rotation
Continually retry requests and rotate IPs in the background
Worldwide geo-coverage
Access localized content from any country, city, state or ASN
Browser automation support
Compatible with Playwright, Puppeteer and Selenium
Enterprise-grade security
Browser instances can integrate with enterprise VPN and single sign-on (SSO)
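Playwright (Node.js):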
const pw = require('playwright');

// Replace CUSTOMER_ID, ZONE_NAME, and PASSWORD with your Bright Data credentials.
const SBR_CDP = 'wss://brd-customer-CUSTOMER_ID-zone-ZONE_NAME:PASSWORD@brd.superproxy.io:9222';

async function main() {
    console.log('Connecting to Scraping Browser...');
    const browser = await pw.chromium.connectOverCDP(SBR_CDP);
    try {
        const page = await browser.newPage();
        console.log('Connected! Navigating to https://example.com...');
        await page.goto('https://example.com');
        console.log('Navigated! Scraping page content...');
        const html = await page.content();
        console.log(html);
    } finally {
        await browser.close();
    }
}

main().catch(err => {
    console.error(err.stack || err);
    process.exit(1);
});
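Playwright (Python):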
import asyncio

from playwright.async_api import async_playwright

# Replace CUSTOMER_ID, ZONE_NAME, and PASSWORD with your Bright Data credentials.
SBR_WS_CDP = 'wss://brd-customer-CUSTOMER_ID-zone-ZONE_NAME:PASSWORD@brd.superproxy.io:9222'

async def run(pw):
    print('Connecting to Scraping Browser...')
    browser = await pw.chromium.connect_over_cdp(SBR_WS_CDP)
    try:
        page = await browser.new_page()
        print('Connected! Navigating to https://example.com...')
        await page.goto('https://example.com')
        print('Navigated! Scraping page content...')
        html = await page.content()
        print(html)
    finally:
        await browser.close()

async def main():
    async with async_playwright() as playwright:
        await run(playwright)

if __name__ == '__main__':
    asyncio.run(main())
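Puppeteer (Node.js):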
const puppeteer = require('puppeteer-core');

// Replace CUSTOMER_ID, ZONE_NAME, and PASSWORD with your Bright Data credentials.
const SBR_WS_ENDPOINT = 'wss://brd-customer-CUSTOMER_ID-zone-ZONE_NAME:PASSWORD@brd.superproxy.io:9222';

async function main() {
    console.log('Connecting to Scraping Browser...');
    const browser = await puppeteer.connect({
        browserWSEndpoint: SBR_WS_ENDPOINT,
    });
    try {
        const page = await browser.newPage();
        console.log('Connected! Navigating to https://example.com...');
        await page.goto('https://example.com');
        console.log('Navigated! Scraping page content...');
        const html = await page.content();
        console.log(html);
    } finally {
        await browser.close();
    }
}

main().catch(err => {
    console.error(err.stack || err);
    process.exit(1);
});
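Selenium (Node.js):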
const { Builder, Browser } = require('selenium-webdriver');

// Replace CUSTOMER_ID, ZONE_NAME, and PASSWORD with your Bright Data credentials.
const SBR_WEBDRIVER = 'https://brd-customer-CUSTOMER_ID-zone-ZONE_NAME:PASSWORD@brd.superproxy.io:9515';

async function main() {
    console.log('Connecting to Scraping Browser...');
    const driver = await new Builder()
        .forBrowser(Browser.CHROME)
        .usingServer(SBR_WEBDRIVER)
        .build();
    try {
        console.log('Connected! Navigating to https://example.com...');
        await driver.get('https://example.com');
        console.log('Navigated! Scraping page content...');
        const html = await driver.getPageSource();
        console.log(html);
    } finally {
        await driver.quit();
    }
}

main().catch(err => {
    console.error(err.stack || err);
    process.exit(1);
});
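Selenium (Python):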
from selenium.webdriver import Remote, ChromeOptions
from selenium.webdriver.chromium.remote_connection import ChromiumRemoteConnection

# Replace CUSTOMER_ID, ZONE_NAME, and PASSWORD with your Bright Data credentials.
SBR_WEBDRIVER = 'https://brd-customer-CUSTOMER_ID-zone-ZONE_NAME:PASSWORD@brd.superproxy.io:9515'

def main():
    print('Connecting to Scraping Browser...')
    sbr_connection = ChromiumRemoteConnection(SBR_WEBDRIVER, 'goog', 'chrome')
    with Remote(sbr_connection, options=ChromeOptions()) as driver:
        print('Connected! Navigating to https://example.com...')
        driver.get('https://example.com')
        print('Navigated! Scraping page content...')
        html = driver.page_source
        print(html)

if __name__ == '__main__':
    main()
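Each snippet above connects to the same remote browser, over CDP for Playwright and Puppeteer or over WebDriver for Selenium; only the client library and connection string change, so existing automation scripts can be pointed at Agent Browser without rewriting them.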
Easily integrate your tech stack
- Run your Puppeteer, Selenium or Playwright scripts
- Automated proxy management and web unlocking
- Get data in unstructured or structured formats
Agent Browser
Scalable browser infrastructure with autonomous unlocking
FAQ
What is Agent Browser?
Agent Browser is a serverless browsing infrastructure that lets you deploy and control cloud browsers with built-in website unblocking capabilities. Agent Browser automatically manages all website unlocking operations under the hood, including CAPTCHA solving, browser fingerprinting, automatic retries, header and cookie management, JavaScript rendering, and more, so you can save time and resources.
When do I need to use an Agent Browser?
When building and running AI agents, developers use cloud browsers to search and retrieve information, navigate websites, take actions, and extract data, just as a human would, but autonomously and at scale.
Is Agent Browser a headless browser or a headful browser?
Agent Browser is a GUI browser (also known as a "headful" browser) that runs with a graphical user interface. However, a developer will experience Agent Browser as headless, interacting with it through an API or MCP, while the GUI browser itself runs on Bright Data’s infrastructure.
What’s the difference between headful & headless browsers for scraping?
In choosing an automated browser, developers can choose between a headless and a GUI/headful browser. The term “headless browser” refers to a web browser without a graphical user interface. When used with a proxy, headless browsers can scrape data, but they are easily detected by bot-protection software, making large-scale data scraping difficult. GUI browsers, like Agent Browser (aka "headful"), use a graphical user interface and are less likely to be detected by bot-protection software.
Is the Agent Browser compatible with browser automation frameworks?
Yes, Agent Browser is fully compatible with Puppeteer, Selenium and Playwright.
When should I use Agent Browser instead of other Bright Data proxy products?
Agent Browser is an automated browser optimized for autonomous AI agents, providing them with the power of Web Unlocker's automated unlocking capabilities for multi-step workflows. While Web Unlocker handles one-step requests, Agent Browser is best when an AI agent needs to interact with a website. It is also ideal for any data scraping project that requires browsers, scaling, and automated management of all website unblocking actions.