Commit 40a3806

Merge pull request #86 from firecrawl/updates-clean-browser-legacy-code
Updates clean browser legacy code
2 parents aafbd3e + d1eae86

4 files changed: 66 additions & 245 deletions

## README.md

60 additions & 239 deletions
````diff
@@ -11,7 +11,7 @@ npm install -g firecrawl-cli
 Or set up everything in one command (install CLI globally, authenticate, and add skills across all detected coding editors):
 
 ```bash
-npx -y firecrawl-cli@latest init -y --browser
+npx -y firecrawl-cli@1.13.0 init -y --browser
 ```
 
 - `-y` runs setup non-interactively
@@ -475,80 +475,53 @@ firecrawl agent abc123-def456-... --wait --poll-interval 10
 
 ---
 
-### `browser` - Browser sandbox sessions (Deprecated)
+### `interact` - Interact with scraped pages
 
-> **Deprecated:** Prefer `scrape` + `interact` instead. Interact lets you scrape a page and then click, fill forms, and navigate without managing sessions manually. See the `interact` command.
-
-Launch and control cloud browser sessions. By default, commands are sent to agent-browser (pre-installed in every sandbox). Use `--python` or `--node` to run Playwright code directly instead.
+Scrape a page, then interact with it in a live browser session using natural language or code. No manual session management required.
 
 ```bash
-# 1. Launch a session
-firecrawl browser launch --stream
-
-# 2. Execute agent-browser commands (default)
-firecrawl browser execute "open https://example.com"
-firecrawl browser execute "snapshot"
-firecrawl browser execute "click @e5"
-firecrawl browser execute "scrape"
+# 1. Scrape a page first
+firecrawl scrape https://example.com
 
-# 3. Execute Playwright Python or JavaScript
-firecrawl browser execute --python "await page.goto('https://example.com'); print(await page.title())"
-firecrawl browser execute --node "await page.goto('https://example.com'); await page.title()"
+# 2. Interact with it
+firecrawl interact "Click the pricing tab"
+firecrawl interact "Fill in the email field with test@example.com"
+firecrawl interact "Extract the pricing table"
 
-# 4. List sessions
-firecrawl browser list
+# 3. Code execution (Playwright)
+firecrawl interact -c "await page.title()"
+firecrawl interact -c "print(await page.title())" --python
 
-# 5. Close
-firecrawl browser close
+# 4. Stop the session
+firecrawl interact stop
 ```
 
-#### Launch Options
-
-| Option                       | Description                                 |
-| ---------------------------- | ------------------------------------------- |
-| `--ttl <seconds>`            | Total session TTL in seconds (default: 300) |
-| `--ttl-inactivity <seconds>` | Inactivity TTL in seconds                   |
-| `--stream`                   | Enable live view streaming                  |
-| `-o, --output <path>`        | Save output to file                         |
-| `--json`                     | Output as JSON format                       |
-
-#### Execute Options
+#### Interact Options
 
-| Option                | Description                                                        |
-| --------------------- | ------------------------------------------------------------------ |
-| `--python`            | Execute as Playwright Python code                                  |
-| `--node`              | Execute as Playwright JavaScript code                              |
-| `--bash`              | Execute bash commands in the sandbox (agent-browser pre-installed) |
-| `--session <id>`      | Target a specific session (auto-saved on launch)                   |
-| `-o, --output <path>` | Save output to file                                                |
-| `--json`              | Output as JSON format                                              |
+| Option                 | Description                                    |
+| ---------------------- | ---------------------------------------------- |
+| `-p, --prompt <text>`  | AI prompt (alternative to positional argument) |
+| `-c, --code <code>`    | Code to execute in the browser sandbox         |
+| `-s, --scrape-id <id>` | Scrape job ID (default: last scrape)           |
+| `--python`             | Execute code as Python/Playwright              |
+| `--node`               | Execute code as Node.js/Playwright (default)   |
+| `--bash`               | Execute code as Bash                           |
+| `--timeout <seconds>`  | Timeout in seconds (1-300, default: 30)        |
+| `-o, --output <path>`  | Save output to file                            |
+| `--json`               | Output as JSON format                          |
 
-By default (no flag), commands are sent to agent-browser. `--python`, `--node`, and `--bash` are mutually exclusive.
+#### Profiles
 
-#### Examples
+Use `--profile` on the scrape to persist browser state across scrapes:
 
 ```bash
-# agent-browser commands (default mode)
-firecrawl browser execute "open https://example.com"
-firecrawl browser execute "snapshot"
-firecrawl browser execute "click @e5"
-firecrawl browser execute "fill @e3 'search query'"
-firecrawl browser execute "scrape"
-
-# Playwright Python
-firecrawl browser execute --python "await page.goto('https://example.com'); print(await page.title())"
-
-# Playwright JavaScript
-firecrawl browser execute --node "await page.goto('https://example.com'); await page.title()"
-
-# Bash (arbitrary commands in the sandbox)
-firecrawl browser execute --bash "ls /tmp"
-
-# Launch with extended TTL
-firecrawl browser launch --ttl 900 --ttl-inactivity 120
+# Login and save state
+firecrawl scrape "https://app.example.com/login" --profile my-app
+firecrawl interact "Fill in email and click login"
 
-# JSON output
-firecrawl browser execute --json "snapshot"
+# Come back authenticated later
+firecrawl scrape "https://app.example.com/dashboard" --profile my-app
+firecrawl interact "Extract the dashboard data"
 ```
 
 ---
@@ -610,7 +583,7 @@ firecrawl --status
 ```
 
 ```
-🔥 firecrawl cli v1.4.0
+🔥 firecrawl cli v1.13.0
 
 ● Authenticated via stored credentials
 Concurrency: 0/100 jobs (parallel scrape limit)
@@ -688,83 +661,6 @@ firecrawl scrape https://example.com -o output.md
 
 ---
 
-## `download` - Bulk Site Download
-
-A convenience command that combines `map` + `scrape` to save a site as local files. Maps the site first to discover pages, then scrapes each one into nested directories under `.firecrawl/`. All [scrape options](#scrape-options) work with download. Run without flags for an interactive wizard that walks you through format, screenshot, and path selection.
-
-```bash
-# Interactive wizard (picks format, screenshots, paths for you)
-firecrawl download https://docs.firecrawl.dev
-
-# Download with screenshots
-firecrawl download https://docs.firecrawl.dev --screenshot --limit 20 -y
-
-# Full page screenshots
-firecrawl download https://docs.firecrawl.dev --full-page-screenshot --limit 20 -y
-
-# Multiple formats (each saved as its own file per page)
-firecrawl download https://docs.firecrawl.dev --format markdown,links --screenshot --limit 20 -y
-# Creates per page: index.md + links.txt + screenshot.png
-
-# Download as HTML
-firecrawl download https://docs.firecrawl.dev --html --limit 20 -y
-
-# Main content only
-firecrawl download https://docs.firecrawl.dev --only-main-content --limit 50 -y
-
-# Filter to specific paths
-firecrawl download https://docs.firecrawl.dev --include-paths "/features,/sdks"
-
-# Skip localized pages
-firecrawl download https://docs.firecrawl.dev --exclude-paths "/zh,/ja,/fr,/es,/pt-BR"
-
-# Include subdomains
-firecrawl download https://firecrawl.dev --allow-subdomains
-
-# Combine everything
-firecrawl download https://docs.firecrawl.dev \
-  --include-paths "/features,/sdks" \
-  --exclude-paths "/zh,/ja,/fr,/es,/pt-BR" \
-  --only-main-content \
-  --screenshot \
-  -y
-```
-
-#### Download Options
-
-| Option                    | Description                                    |
-| ------------------------- | ---------------------------------------------- |
-| `--limit <number>`        | Max pages to download                          |
-| `--search <query>`        | Filter pages by search query                   |
-| `--include-paths <paths>` | Only download matching paths (comma-separated) |
-| `--exclude-paths <paths>` | Skip matching paths (comma-separated)          |
-| `--allow-subdomains`      | Include subdomains when mapping                |
-| `-y, --yes`               | Skip confirmation prompt and wizard            |
-
-All [scrape options](#scrape-options) also work with download (formats, screenshots, tags, geo-targeting, etc.)
-
-#### Output Structure
-
-Each format is saved as its own file per page:
-
-```
-.firecrawl/
-  docs.firecrawl.dev/
-    features/
-      scrape/
-        index.md        # markdown content
-        links.txt       # one link per line
-        screenshot.png  # actual PNG image
-      crawl/
-        index.md
-        screenshot.png
-    sdks/
-      python/
-        index.md
-```
-
----
-
 ## Telemetry
 
 The CLI collects anonymous usage data during authentication to help improve the product:
@@ -782,122 +678,47 @@ export FIRECRAWL_NO_TELEMETRY=1
 
 ---
 
-## Experimental: AI Workflows
+## Experimental
 
-Launch pre-built AI workflows that combine Firecrawl's web capabilities with your coding agent. One command spins up an interactive session with the right system prompt, tools, and instructions -- like `ollama run` but for web research agents. All workflows spawn parallel subagents to divide the work and finish faster.
+Experimental commands live under `firecrawl experimental` (alias: `firecrawl x`).
 
-```bash
-# Claude Code (available now)
-firecrawl claude competitor-analysis
-firecrawl claude deep-research
-firecrawl claude lead-research
-firecrawl claude seo-audit
-firecrawl claude qa
+### `download` - Bulk Site Download
 
-# Codex and OpenCode -- coming soon
-firecrawl codex competitor-analysis
-firecrawl opencode competitor-analysis
-```
-
-See the full documentation: **[Experimental Workflows ->](src/commands/experimental/README.md)**
-
----
-
-## Testing Workflows Locally
-
-After building the CLI (`pnpm run build`), every workflow works with all three backends — just swap the command name:
-
-```bash
-# Help
-firecrawl claude --help
-firecrawl codex --help
-firecrawl opencode --help
-```
-
-### QA Testing
-
-```bash
-firecrawl claude qa https://myapp.com
-firecrawl codex qa https://myapp.com
-firecrawl opencode qa https://myapp.com
-```
-
-### Product Demo Walkthrough
-
-```bash
-firecrawl claude demo https://resend.com
-firecrawl codex demo https://neon.tech
-firecrawl opencode demo https://linear.app
-```
-
-### Competitor Analysis
+Combines `map` + `scrape` to save a site as local files under `.firecrawl/`.
 
 ```bash
-firecrawl claude competitor-analysis https://firecrawl.dev
-firecrawl codex competitor-analysis https://crawlee.dev
-firecrawl opencode competitor-analysis https://apify.com
+firecrawl x download https://docs.firecrawl.dev
+firecrawl x download https://docs.firecrawl.dev --screenshot --limit 20 -y
+firecrawl x download https://docs.firecrawl.dev --include-paths "/features,/sdks" -y
 ```
 
-### Deep Research
+### AI Workflows
 
-```bash
-firecrawl claude deep-research "RAG pipeline data ingestion tools"
-firecrawl codex deep-research "web scraping best practices 2025"
-firecrawl opencode deep-research "browser automation frameworks comparison"
-```
-
-### Other Workflows
+Launch pre-built AI workflows that combine Firecrawl with your coding agent. One command spins up an interactive session with the right system prompt, tools, and instructions.
 
 ```bash
-# Lead research
-firecrawl claude lead-research "Vercel"
-firecrawl codex lead-research "Stripe"
-
-# SEO audit
-firecrawl opencode seo-audit https://example.com
-
-# Knowledge base
-firecrawl claude knowledge-base https://docs.langchain.com
-
-# Research papers
-firecrawl codex research-papers "web scraping compliance HIPAA"
-
-# Shopping
-firecrawl claude shop "best mechanical keyboard for developers"
-```
+# Claude Code (available now)
+firecrawl x claude competitor-analysis https://firecrawl.dev
+firecrawl x claude deep-research "RAG pipeline data ingestion tools"
+firecrawl x claude lead-research "Vercel"
+firecrawl x claude seo-audit https://example.com
+firecrawl x claude qa https://myapp.com
+firecrawl x claude demo https://resend.com
+firecrawl x claude shop "best mechanical keyboard for developers"
 
-### Natural Language (no workflow name)
+# Natural language (no workflow name)
+firecrawl x claude "scrape the firecrawl docs and summarize"
 
-```bash
-firecrawl claude "scrape the firecrawl docs and summarize"
-firecrawl codex "find pricing for crawlee vs scrapy"
-firecrawl opencode "compare Firecrawl and Apify features"
+# Codex and OpenCode -- coming soon
+firecrawl x codex competitor-analysis https://crawlee.dev
+firecrawl x opencode deep-research "browser automation frameworks"
 ```
 
-Add `-y` to any command to auto-approve tool permissions (maps to `--dangerously-skip-permissions` for Claude, `--full-auto` for Codex).
-
-### Live View
-
-Use `firecrawl scrape <url>` + `firecrawl interact` to interact with pages. For advanced use cases requiring a raw CDP session, you can still use `firecrawl browser launch --json` to get a live view URL:
-
-```bash
-# Preferred: scrape + interact workflow
-firecrawl scrape https://myapp.com
-firecrawl interact --prompt "Click on the login button and fill in the form"
+Add `-y` to auto-approve tool permissions.
 
-# Advanced: Launch a browser session and grab the live view URL
-LIVE_URL=$(firecrawl browser launch --json | jq -r '.liveViewUrl')
-
-# Pass it to Claude Code
-claude --append-system-prompt "A cloud browser session is running. Live view: $LIVE_URL -- use \`firecrawl interact\` to interact with scraped pages." \
-  --dangerously-skip-permissions \
-  "QA test https://myapp.com"
-
-# Or use the built-in workflow commands
-firecrawl claude demo https://resend.com
-```
+See the full documentation: **[Experimental Workflows ->](src/commands/experimental/README.md)**
 
-### Prerequisites
+#### Prerequisites
 
 Each backend requires its CLI to be installed separately:
 
````
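The removed Live View example pipes `firecrawl browser launch --json` through `jq -r '.liveViewUrl'`. When `jq` is unavailable, the same field can be pulled with POSIX parameter expansion; a sketch over a stand-in payload (the real `--json` output is not shown in this commit and may carry more fields):

```bash
# Stand-in for `firecrawl browser launch --json` output; only liveViewUrl
# (the field named in the removed jq example) is assumed here.
json='{"liveViewUrl":"https://example.com/live/abc123"}'

# Drop everything through `"liveViewUrl":"`, then everything from the
# next double quote onward.
url=${json#*\"liveViewUrl\":\"}
url=${url%%\"*}
echo "$url"
```

With `jq` installed, the removed one-liner is sturdier, since it parses JSON instead of matching text.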
## package.json

1 addition & 1 deletion

````diff
@@ -1,6 +1,6 @@
 {
   "name": "firecrawl-cli",
-  "version": "1.12.2",
+  "version": "1.13.0",
   "description": "Command-line interface for Firecrawl. Scrape, crawl, and extract data from any website directly from your terminal.",
   "main": "dist/index.js",
   "bin": {
````
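The `version` bump is the entire `package.json` change. For release scripts that need to read that field without Node or `jq`, a `sed` capture over the manifest works; a minimal sketch using an inline copy of the two fields visible in this diff:

```bash
# Inline excerpt of the manifest fields shown in the diff above
manifest='{"name":"firecrawl-cli","version":"1.13.0"}'

# Capture the value of "version" with a sed backreference
version=$(printf '%s' "$manifest" | sed 's/.*"version":"\([^"]*\)".*/\1/')
echo "$version"
```

In a real checkout, `node -p "require('./package.json').version"` avoids the fragile pattern match.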

## skills/firecrawl-cli/rules/install.md

4 additions & 4 deletions

````diff
@@ -12,7 +12,7 @@ description: |
 ## Quick Setup (Recommended)
 
 ```bash
-npx -y firecrawl-cli@latest -y
+npx -y firecrawl-cli@1.13.0 -y
 ```
 
 This installs `firecrawl-cli` globally, authenticates via browser, and installs all skills.
@@ -36,7 +36,7 @@ firecrawl setup skills
 ## Manual Install
 
 ```bash
-npm install -g firecrawl-cli@latest
+npm install -g firecrawl-cli@1.13.0
 ```
 
 ## Verify
@@ -78,5 +78,5 @@ Ask the user how they'd like to authenticate:
 If `firecrawl` is not found after installation:
 
 1. Ensure npm global bin is in PATH
-2. Try: `npx firecrawl-cli@latest --version`
-3. Reinstall: `npm install -g firecrawl-cli@latest`
+2. Try: `npx firecrawl-cli@1.13.0 --version`
+3. Reinstall: `npm install -g firecrawl-cli@1.13.0`
````
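Troubleshooting step 1 above ("Ensure npm global bin is in PATH") can be checked mechanically. A POSIX-shell sketch in which `bin_dir` and `path_list` are hypothetical stand-ins; on a real machine they would come from `npm prefix -g` and `$PATH`:

```bash
# Hypothetical global bin dir; on a real install: bin_dir="$(npm prefix -g)/bin"
bin_dir=/opt/hypothetical/bin
# Stand-in for "$PATH" so the sketch is self-contained
path_list="/usr/bin:/opt/hypothetical/bin:/bin"

# Wrapping both sides in ':' matches whole entries only, not substrings
case ":$path_list:" in
  *":$bin_dir:"*) result="on PATH" ;;
  *)              result="missing: $bin_dir" ;;
esac
echo "$result"
```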
