Anthropic brings mad Skills to Claude • The Register

Paying Anthropic customers can now teach their Claude new tricks, which the company calls Skills.

Despite much talk about superintelligence, AI models can be clueless when it comes to interacting with specific applications. Sure, they can parse PDF text, but they probably don’t know how to fill in a PDF form.

Skills, or Agent Skills for jargon maximalists, offer a way to instill Claude with specific knowledge that it may not possess in its distillation of training data. Anthropic has already deployed a few of them within Claude to handle common tasks like creating spreadsheets or presentations. 

Now, paying Claude customers – not the freeloader tier – can create Skills that suit their specific needs. Just make sure to subtract the labor that went into creating them when tabulating time saved by AI.

A Skill consists of a directory with a SKILL.md file – a mix of YAML and Markdown – and possibly other resources such as text, scripts, and data. This turducken of instructions and executable code can be stored locally (~/.claude/skills/) or uploaded to the cloud for use with the Claude API.
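As a rough illustration only – the directory layout, field names, and helper files below are assumptions for the sake of example, not Anthropic's published schema – a Skill might look something like this:

```
~/.claude/skills/expense-report/
├── SKILL.md          # YAML frontmatter plus Markdown instructions
├── fill_form.py      # optional helper script Claude can run
└── template.pdf      # optional bundled resource

# SKILL.md (hypothetical example)
---
name: expense-report
description: Fill in the company expense-report PDF from a CSV of receipts.
---
## Instructions
1. Read receipts.csv and validate the totals.
2. Run fill_form.py to write the values into template.pdf.
3. Return the completed PDF to the user.
```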

When a session starts, metadata from the available Skills is appended to Claude's system prompt. Then, when Claude is asked to carry out a relevant task, it launches the appropriate Skill by invoking the Bash tool to read the SKILL.md file. Once that's done, the model should have the information it needs to perform the desired task, such as interacting with content in a third-party application like Box or creating a PowerPoint presentation.

The AI model will do so through a process Anthropic calls progressive disclosure.

“Progressive disclosure is the core design principle that makes Agent Skills flexible and scalable,” the company explains in its engineering blog post on the subject. “Like a well-organized manual that starts with a table of contents, then specific chapters, and finally a detailed appendix, skills let Claude load information only as needed.”

The result is that Claude doesn’t process tokens unnecessarily for Skills that won’t be used, which helps keep operating costs down.
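A back-of-the-envelope way to picture progressive disclosure – a conceptual sketch, not Anthropic's actual implementation, with invented function names – is to keep only each Skill's frontmatter in the prompt and read the full instructions on demand:

```python
from pathlib import Path

SKILLS_DIR = Path.home() / ".claude" / "skills"

def skill_metadata(skill_md: Path) -> str:
    """Return only the YAML frontmatter (name/description) from a SKILL.md."""
    text = skill_md.read_text()
    # Frontmatter sits between the first two '---' markers.
    parts = text.split("---")
    return parts[1].strip() if len(parts) >= 3 else ""

def build_system_prompt_preamble() -> str:
    """Cheap step: only a few lines per Skill go into the context up front."""
    entries = [skill_metadata(p) for p in SKILLS_DIR.glob("*/SKILL.md")]
    return "Available skills:\n\n" + "\n\n".join(e for e in entries if e)

def load_skill(name: str) -> str:
    """Expensive step: the full instructions are read only when a task needs them."""
    return (SKILLS_DIR / name / "SKILL.md").read_text()
```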

Skills also provide a way to revert to programmatic code execution when a large language model would be the wrong tool for the job. As an example, Anthropic cites the inefficiency of sorting a list via token generation – a coded sorting algorithm will complete the task much faster and at less cost, and will produce the same output every time.
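The contrast is easy to see with a toy script of the kind a Skill might bundle – a hypothetical example, with an invented file name and column – which costs no output tokens and returns the same result on every run:

```python
import csv

# Deterministic, cheap, and repeatable: sort expense rows by amount in code
# rather than asking the model to emit the sorted list token by token.
with open("receipts.csv", newline="") as f:
    rows = list(csv.DictReader(f))

rows.sort(key=lambda r: float(r["amount"]), reverse=True)

with open("receipts_sorted.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```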

Writing the relevant files by hand can be fiddly, but Skills are less burdensome to produce with Claude's help. Claude includes a "skill-creator" that builds new Skills through interactive chatbot banter. There's also a Claude Skills Cookbook for those who want to understand the creation process.

Because Skills can bundle executable code, Anthropic warns that they present some risk (not unlike giving Claude access to Bash). Malicious Skills, the company cautions, may introduce vulnerabilities or allow the exfiltration of data.

“We recommend installing skills only from trusted sources,” the company says. “When installing a skill from a less-trusted source, thoroughly audit it before use. Start by reading the contents of the files bundled in the skill to understand what it does, paying particular attention to code dependencies and bundled resources like images or scripts. Similarly, pay attention to instructions or code within the skill that instruct Claude to connect to potentially untrusted external network sources.”

With that in mind, we note that Anthropic says it hopes to enable AI agents to create their own Skills. ®
