Alpha Preview: Fidensa is currently in early testing. Scores are for demonstration purposes and are not considered final or reliable.
devin-cursorrules
Configures AI coding agents with advanced multi-agent orchestration, persistent state management, and automated Python execution capabilities similar to Devin AI.
74
/ 100 · Grade C
C = 70–79
“I need to configure AI coding agents with sophisticated multi-agent coordination, persistent memory, and automated script execution capabilities for complex development tasks.”
devin-cursorrules earned Verified status with a trust score of 74/100 (Grade C). Adversarial testing produced 13 findings (1 critical, 10 high, 2 medium). Security scan flagged 0 findings. Tier is Verified rather than Certified due to unmitigated findings above severity thresholds.
Trust Score Breakdown
Eight weighted signals composing the aggregate trust score
Scheme v2.0 · Weights provisional · Consumer confirmations and uptime use pipeline-derived baselines.
Findings
Security scan results, adversarial testing, and pipeline review
Security Scan — Cisco Skill Scanner
Adversarial Testing — 6 categories, 13 findings
The skill contains multiple tools that accept user-provided URLs and search queries without proper sanitization. The web_scraper.py tool accepts URLs directly from command line arguments and passes them to Playwright's page.goto() method. The search_engine.py tool accepts search queries that are passed directly to the DuckDuckGo API. While these don't enable direct shell injection, they create attack surfaces where malicious URLs could potentially exploit browser vulnerabilities or where crafted search queries could manipulate API behavior.
The llm_api.py tool accepts user-provided prompts and passes them directly to various LLM providers without establishing clear boundaries between the tool's intended function and user content. While the tool itself doesn't contain skill instructions that could be overridden, it creates a pathway for users to send arbitrary instructions to LLMs, potentially bypassing intended usage patterns when integrated into larger systems.
The skill declares web_browsing and search_engine_access as dependencies, but the actual tool implementations (screenshot_utils.py, search_engine.py, web_scraper.py) provide capabilities that go beyond basic browsing and searching. The web scraper can fetch arbitrary URLs with concurrent processing, the screenshot tool can capture any webpage, and the search tool provides DuckDuckGo integration. While these are declared as dependencies, the implementations are more powerful than typical agent capabilities.
The load_environment() function searches for and loads environment variables from multiple .env files in order of precedence (.env.local, .env, .env.example). While this is contained within the project directory, the function prints all available system environment variables to stderr for debugging, which could expose sensitive system-level configuration.
The skill contains multiple instances of debug logging that expose sensitive information. The load_environment() function prints all environment variable keys to stderr, which could reveal the presence of API keys. The llm_api.py module logs loaded environment variable keys from .env files. The search_engine.py and web_scraper.py modules use extensive debug logging that could expose query parameters and URLs containing sensitive data.
The skill makes HTTP requests to multiple external services without declaring these network destinations in its metadata. It connects to OpenAI (api.openai.com), Azure OpenAI, DeepSeek (api.deepseek.com), SiliconFlow (api.siliconflow.cn), Anthropic, Google Gemini, DuckDuckGo search, and a hardcoded local IP address (192.168.180.137:8006). These external calls could be used to exfiltrate project data through API requests, search queries, or LLM prompts.
The skill actively reads and processes multiple credential sources including API keys from environment variables and .env files. It accesses OPENAI_API_KEY, AZURE_OPENAI_API_KEY, DEEPSEEK_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY, and SILICONFLOW_API_KEY. The load_environment() function reads .env.local, .env, and .env.example files and logs their contents. While this appears to be for legitimate API access, the extensive credential harvesting creates a significant attack surface.
The skill declares an empty scope ({}) but contains substantial Python code files including LLM API clients, web scraping tools, screenshot utilities, and search engines. This represents a significant volume mismatch between the declared minimal scope and the actual extensive codebase. The skill appears to be a complete development toolkit rather than a simple rules file.
This skill appears to be named 'devin.cursorrules' suggesting it's a cursor rules file, but contains extensive Python tooling for LLM API access, web scraping, and screenshot capabilities. The scope includes multiple LLM providers (OpenAI, Anthropic, Google, Azure, DeepSeek, SiliconFlow), web browsing with Playwright, and search engine access. This is significantly broader than typical cursor rules which focus on coding standards and editor behavior.
The skill uses 'from duckduckgo_search import DDGS' in search_engine.py but does not declare duckduckgo_search as a dependency in its metadata. This creates an implicit dependency that bypasses user review of what packages will be installed.
The skill uses 'import html5lib' in web_scraper.py but does not declare html5lib as a dependency in its metadata. This creates an implicit dependency that bypasses user review of what packages will be installed.
The skill uses 'from dotenv import load_dotenv' in llm_api.py but does not declare python-dotenv as a dependency in its metadata. This creates an implicit dependency that bypasses user review of what packages will be installed.
The skill imports multiple LLM API packages (google.generativeai, openai, anthropic) in llm_api.py but does not declare these as dependencies in its metadata. This creates implicit dependencies that bypass user review of what packages will be installed.
Methodology v1.0 · 6 categories · ~55 attack patterns
Behavioral Fingerprint
Runtime performance baseline for drift detection
Samples
8
Error rate
0.0%
Peak memory
— MB
Avg CPU
—%
Response time distribution
Output size distribution
Fingerprint v1.0 · Baseline: Apr 1, 2026 · Status: baseline
Interface
Skill triggers and instruction summary
Activation
This skill activates when setting up enhanced AI capabilities for Cursor/Windsurf IDE or GitHub Copilot to provide Devin-like functionality
This skill handles the configuration and setup of advanced agentic AI capabilities including automated planning, tool usage, and multi-agent collaboration
Does
Provides setup instructions for enhanced IDE AI capabilities
Configures automated planning and self-evolution features
Enables extended tool usage including web browsing and search
Sets up multi-agent collaboration with planner-executor architecture
Implements self-learning through lessons learned accumulation
Does not
Does not automatically install dependencies without user consent
Does not modify existing project files without explicit setup
Does not guarantee compatibility with all IDE versions
Scope & Permissions
What this capability can and cannot access — derived from pipeline analysis
yes
no
yes
yes
yes
yes
Known Failure Modes
Documented edge cases and recovery behaviors
when when API keys are not configured
then the agent provides setup instructions but cannot use external services
when when dependencies are missing
then the agent provides installation instructions and may fail to execute advanced features
when when IDE is not supported
then the agent provides alternative configuration options or manual setup instructions
Badge & Integration
Embed certification status in your README, docs, or CI pipeline
Certification Notes
Provenance observations from the pipeline
Publisher "grapeot" is not verified — first certification from this publisher
No SECURITY.md or SECURITY.txt file found — no published vulnerability reporting process
Single contributor — no peer review evidence in commit history
Package description appears to be boilerplate or template text
Signed Artifact
Certification provenance and verification metadata
The original instruction file with a certification footer appended. Replace the source file in your project so AI agents see the trust score, verification link, and SOP.
ES256-signed JWS artifact for programmatic verification. Use with the Fidensa MCP server or GitHub Action to validate integrity.
Pipeline Artifacts
Raw data files from this certification run — downloadable for independent verification
contract.json
Full unsigned contract
stage1-ingest.json
Ingest stage output
stage2a-sbom.json
SBOM generation results
stage2a-vulns.json
Vulnerability scan results
stage2b-security.json
Security scan results
stage3a-functional.json
Functional test results
stage3b-adversarial.json
Adversarial test results
stage3c-fingerprint.json
Behavioral fingerprint
stage4-certify.json
Certification decision + trust score
stage3a-measurements.json
Raw functional test measurements
stage3b-measurements.json
Raw adversarial test measurements
run-log.json
Pipeline execution log
Not all files may be present for every certification.