feat: Import 35+ skills, merge duplicates, add openclaw installer

Major updates:
- Added 35+ new skills from awesome-opencode-skills and antigravity repos
- Merged SEO skills into seo-master
- Merged architecture skills into architecture
- Merged security skills into security-auditor and security-coder
- Merged testing skills into testing-master and testing-patterns
- Merged pentesting skills into pentesting
- Renamed website-creator to thai-frontend-dev
- Replaced skill-creator with github version
- Removed Chutes references (use MiniMax API instead)
- Added install-openclaw-skills.sh for cross-platform installation
- Updated .env.example with MiniMax API credentials
Author: Kunthawat Greethong
Date: 2026-03-26 11:37:39 +07:00
parent 48595100a1
commit 7edf5bc4d0
469 changed files with 131580 additions and 417 deletions


@@ -0,0 +1,236 @@
---
name: security-auditor
description: |
Master security auditor combining vulnerability scanning, web security testing,
DevSecOps, and compliance frameworks. Use when auditing security,
performing vulnerability assessments, or testing for OWASP Top 10.
---
# Security Auditor
Comprehensive security skill combining: vulnerability scanning, web security testing, DevSecOps, OWASP Top 10, and compliance frameworks.
---
## Quick Reference
| Task | Use Section |
|------|-------------|
| Scan for vulnerabilities | **Vulnerability Scanning** |
| Test web application | **Web Security Testing** |
| Audit security controls | **Security Audit** |
| Check compliance | **Compliance Frameworks** |
| Review authentication | **Auth Security** |
| DevSecOps integration | **Security Automation** |
---
## Vulnerability Scanning
**Core Principles:**
| Principle | Application |
|-----------|-------------|
| **Assume Breach** | Design as if attacker already inside |
| **Zero Trust** | Never trust, always verify |
| **Defense in Depth** | Multiple layers, no single point |
| **Least Privilege** | Minimum required access only |
| **Fail Secure** | On error, deny access |
### OWASP Top 10 (2021)
1. **A01** - Broken Access Control
2. **A02** - Cryptographic Failures
3. **A03** - Injection
4. **A04** - Insecure Design
5. **A05** - Security Misconfiguration
6. **A06** - Vulnerable Components
7. **A07** - Auth Failures
8. **A08** - Software & Data Integrity Failures
9. **A09** - Logging Failures
10. **A10** - SSRF
### Scanning Process
1. **Reconnaissance** - Map attack surface
2. **Enumeration** - Identify vulnerabilities
3. **Exploitation** - Verify findings
4. **Documentation** - Report findings
5. **Remediation** - Suggest fixes
---
## Web Security Testing
### OWASP Top 10 Testing Checklist
#### A01 - Broken Access Control
- [ ] Horizontal/vertical privilege escalation
- [ ] IDOR (Insecure Direct Object Reference)
- [ ] CORS misconfiguration
- [ ] JWT token manipulation
- [ ] Missing function-level access control
#### A02 - Cryptographic Failures
- [ ] Sensitive data exposure (PII, credentials)
- [ ] Weak encryption algorithms
- [ ] Default/hardcoded credentials
- [ ] Insufficient key rotation
- [ ] Client-side encryption only
#### A03 - Injection
- [ ] SQL injection (error-based, blind, time-based)
- [ ] NoSQL injection
- [ ] Command injection
- [ ] LDAP injection
- [ ] XPath injection
- [ ] ORM injection
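A parameterized query is the primary defense against the injection class above. A minimal sketch using Python's built-in `sqlite3` (the same principle applies to any driver or ORM):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"

# Vulnerable: attacker-controlled input is concatenated into the SQL string
vulnerable = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query treats the input as data, never as SQL
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # the injected OR clause matches every row
print(safe)        # no rows: no user is literally named "alice' OR '1'='1"
```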
#### A04 - Insecure Design
- [ ] Business logic flaws
- [ ] Rate limiting bypass
- [ ] Workflow bypasses
- [ ] Race conditions
- [ ] Mass assignment
#### A05 - Security Misconfiguration
- [ ] Default credentials
- [ ] Unnecessary features enabled
- [ ] Error handling (stack traces)
- [ ] Cloud misconfigurations
- [ ] Missing security headers
- [ ] XML External Entities (XXE)
#### A06 - Vulnerable Components
- [ ] Outdated dependencies
- [ ] Unpatched vulnerabilities
- [ ] License compliance
- [ ] Component integrity
#### A07 - Auth Failures
- [ ] Weak password policies
- [ ] Credential stuffing
- [ ] Session fixation/hijacking
- [ ] Missing MFA
- [ ] Password reset flaws
#### A08 - Software & Data Integrity Failures
- [ ] Insecure deserialization
- [ ] Supply chain attacks
- [ ] Unsigned or unverified updates
- [ ] CI/CD pipeline compromise
#### A09 - Logging Failures
- [ ] Insufficient logging
- [ ] Missing alerts
- [ ] Undetected breaches
- [ ] Audit trail gaps
#### A10 - SSRF
- [ ] URL validation bypass
- [ ] Cloud metadata access
- [ ] Internal port scanning
---
## Security Audit
### Audit Checklist
1. **Scope Definition** - Assets, systems, boundaries
2. **Threat Modeling** - Attack vectors, likelihood, impact
3. **Control Review** - Technical and administrative controls
4. **Vulnerability Assessment** - Automated + manual testing
5. **Risk Prioritization** - CVSS scoring, business impact
6. **Remediation Planning** - Short-term and long-term fixes
7. **Report Generation** - Executive summary, technical details
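The risk-prioritization step above is typically driven by CVSS. A small sketch mapping the standard CVSS v3.1 severity bands to priority buckets (the P0-P3 labels and sample findings are illustrative; business impact can raise a finding above its raw score):

```python
def cvss_to_priority(score: float) -> str:
    """Map a CVSS v3 base score to a remediation priority bucket."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score >= 9.0:
        return "P0"   # Critical (9.0-10.0)
    if score >= 7.0:
        return "P1"   # High (7.0-8.9)
    if score >= 4.0:
        return "P2"   # Medium (4.0-6.9)
    return "P3"       # Low (0.1-3.9) / informational

# Hypothetical findings, sorted highest risk first
findings = [("SQL injection", 9.8), ("Reflected XSS", 6.1), ("Missing HSTS", 3.1)]
for name, score in sorted(findings, key=lambda f: f[1], reverse=True):
    print(f"{cvss_to_priority(score)}  {name} (CVSS {score})")
```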
### Security Headers Checklist
```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 0
Content-Security-Policy: default-src 'self'
Referrer-Policy: strict-origin-when-cross-origin
Permissions-Policy: geolocation=(), microphone=(), camera=()
```
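A quick way to audit these is to diff a response's header set against the checklist. A minimal sketch (in practice the `headers` dict would come from, e.g., a `requests` response):

```python
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Content-Security-Policy",
    "Referrer-Policy",
    "Permissions-Policy",
}

def missing_security_headers(headers: dict) -> set:
    """Return required headers absent from a response (case-insensitive)."""
    present = {h.lower() for h in headers}
    return {h for h in REQUIRED_HEADERS if h.lower() not in present}

# Example: a response that only sets two of the recommended headers
resp_headers = {
    "content-security-policy": "default-src 'self'",
    "x-frame-options": "SAMEORIGIN",
}
print(sorted(missing_security_headers(resp_headers)))
```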
---
## DevSecOps & Security Automation
### CI/CD Security Pipeline
1. **SAST** - Static Application Security Testing
2. **DAST** - Dynamic Application Security Testing
3. **SCA** - Software Composition Analysis
4. **Secrets Scanning** - Detect credentials in code
5. **Container Scanning** - Image vulnerability scanning
6. **Infrastructure Scanning** - Cloud configuration
### Tools
- **SAST:** SonarQube, Semgrep, Bandit
- **DAST:** OWASP ZAP, Burp Suite, Nuclei
- **SCA:** Snyk, Dependabot, Renovate
- **Secrets:** GitLeaks, TruffleHog
- **Containers:** Trivy, Clair, Anchore
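As a sketch of what the secrets scanners above do, here are a few toy regex detectors. Real tools such as GitLeaks and TruffleHog ship hundreds of tuned rules plus entropy analysis, so treat this as illustration only:

```python
import re

# Illustrative detector patterns, loosely modeled on common scanner rules
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list:
    """Return (rule_name, match) pairs for every suspected secret."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

# AWS's documented example key, as it might appear in committed code
sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"  # committed by mistake'
print(scan_text(sample))
```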
---
## Authentication & Authorization Security
### Auth Patterns to Audit
- [ ] Password hashing (bcrypt, Argon2)
- [ ] MFA implementation
- [ ] Session management
- [ ] Token handling (JWT, OAuth)
- [ ] Password reset flows
- [ ] Account lockout policies
### OAuth 2.0 Security
- [ ] Authorization code flow (not implicit)
- [ ] PKCE for public clients
- [ ] State parameter validation
- [ ] Redirect URI validation
- [ ] Token expiration and rotation
- [ ] Scope minimization
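PKCE (RFC 7636) can be sketched in a few lines of standard-library Python; the client sends the `code_challenge` on the authorization request and the `code_verifier` on the token exchange:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple:
    """Generate an RFC 7636 code_verifier and S256 code_challenge."""
    # 32 random bytes -> 43-char base64url verifier (spec allows 43-128 chars)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # challenge = BASE64URL(SHA256(ASCII(verifier))), unpadded
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier), len(challenge))  # 43 43
```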
---
## Compliance Frameworks
| Framework | Focus |
|-----------|-------|
| **SOC 2** | Security, Availability, Confidentiality |
| **ISO 27001** | Information Security |
| **GDPR** | EU Data Protection |
| **HIPAA** | Healthcare Data |
| **PCI DSS** | Payment Card Data |
| **NIST** | Cybersecurity Framework |
---
## Remediation Priorities
### Critical (P0)
- RCE, SQL injection, authentication bypass
- **Fix within 24-48 hours**
### High (P1)
- XSS, CSRF, IDOR
- **Fix within 1-2 weeks**
### Medium (P2)
- Security misconfiguration, weak crypto
- **Fix within 1 month**
### Low (P3)
- Missing headers, info disclosure
- **Fix within 3 months**
---
## Best Practices
1. **Shift Left** - Security testing early in the SDLC
2. **Defense in Depth** - Multiple security layers; no single control is enough
3. **Zero Trust** - Never trust, always verify
4. **Least Privilege** - Minimum required permissions
5. **Secure Defaults** - Safe out of the box
6. **Fail Securely** - Errors should deny access


@@ -0,0 +1,516 @@
---
name: aws-compliance-checker
description: "Automated compliance checking against CIS, PCI-DSS, HIPAA, and SOC 2 benchmarks"
category: security
risk: safe
source: community
tags: "[aws, compliance, audit, cis, pci-dss, hipaa, kiro-cli]"
date_added: "2026-02-27"
---
# AWS Compliance Checker
Automated compliance validation against industry standards including CIS AWS Foundations, PCI-DSS, HIPAA, and SOC 2.
## When to Use
Use this skill when you need to validate AWS compliance against industry standards, prepare for audits, or maintain continuous compliance monitoring.
## Supported Frameworks
**CIS AWS Foundations Benchmark**
- Identity and Access Management
- Logging and Monitoring
- Networking
- Data Protection
**PCI-DSS (Payment Card Industry)**
- Network security
- Access controls
- Encryption
- Monitoring and logging
**HIPAA (Healthcare)**
- Access controls
- Audit controls
- Data encryption
- Transmission security
**SOC 2**
- Security
- Availability
- Confidentiality
- Privacy
## CIS AWS Foundations Checks
### Identity & Access Management (1.x)
```bash
#!/bin/bash
# cis-iam-checks.sh
echo "=== CIS IAM Compliance Checks ==="
# 1.1: Root account usage
echo "1.1: Checking root account usage..."
aws iam generate-credential-report >/dev/null   # report Content is base64-encoded
root_usage=$(aws iam get-credential-report --query 'Content' --output text | \
base64 -d | awk -F, 'NR==2 {print $5,$11}')
echo "  Root password last used: $root_usage"
# 1.2: MFA on root account
echo "1.2: Checking root MFA..."
root_mfa=$(aws iam get-account-summary \
--query 'SummaryMap.AccountMFAEnabled' --output text)
echo " Root MFA enabled: $root_mfa"
# 1.3: Unused credentials
echo "1.3: Checking for unused credentials (>90 days)..."
aws iam get-credential-report --query 'Content' --output text | base64 -d | \
awk -F, 'NR>1 {
if ($5 != "N/A" && $5 != "no_information") {
cmd = "date -d \"" $5 "\" +%s"
cmd | getline last_used
close(cmd)
now = systime()
days = (now - last_used) / 86400
if (days > 90) print " ⚠️ " $1 ": " int(days) " days inactive"
}
}'
# 1.4: Access keys rotated
echo "1.4: Checking access key age..."
aws iam list-users --query 'Users[*].UserName' --output text | \
while read user; do
aws iam list-access-keys --user-name "$user" \
--query 'AccessKeyMetadata[*].[AccessKeyId,CreateDate]' \
--output text | \
while read key_id create_date; do
age_days=$(( ($(date +%s) - $(date -d "$create_date" +%s)) / 86400 ))
if [ $age_days -gt 90 ]; then
echo " ⚠️ $user: Key $key_id is $age_days days old"
fi
done
done
# 1.5-1.11: Password policy
echo "1.5-1.11: Checking password policy..."
policy=$(aws iam get-account-password-policy 2>&1)
if echo "$policy" | grep -q "NoSuchEntity"; then
echo " ❌ No password policy configured"
else
echo " ✓ Password policy exists"
echo "$policy" | jq '.PasswordPolicy | {
MinimumPasswordLength,
RequireSymbols,
RequireNumbers,
RequireUppercaseCharacters,
RequireLowercaseCharacters,
MaxPasswordAge,
PasswordReusePrevention
}'
fi
# 1.12-1.14: MFA for IAM users
echo "1.12-1.14: Checking IAM user MFA..."
aws iam get-credential-report --query 'Content' --output text | base64 -d | \
awk -F, 'NR>1 && $4=="true" && $8=="false" {print "  ⚠️ " $1 ": No MFA"}'
```
### Logging (2.x)
```bash
#!/bin/bash
# cis-logging-checks.sh
echo "=== CIS Logging Compliance Checks ==="
# 2.1: CloudTrail enabled
echo "2.1: Checking CloudTrail..."
trails=$(aws cloudtrail describe-trails \
--query 'trailList[*].[Name,IsMultiRegionTrail,LogFileValidationEnabled]' \
--output text)
if [ -z "$trails" ]; then
echo " ❌ No CloudTrail configured"
else
echo "$trails" | while read name multi_region validation; do
echo " Trail: $name"
echo " Multi-region: $multi_region"
echo " Log validation: $validation"
# Check if logging
status=$(aws cloudtrail get-trail-status --name "$name" \
--query 'IsLogging' --output text)
echo " Is logging: $status"
done
fi
# 2.2: CloudTrail log file validation
echo "2.2: Checking log file validation..."
aws cloudtrail describe-trails \
--query 'trailList[?LogFileValidationEnabled==`false`].Name' \
--output text | \
while read trail; do
echo " ⚠️ $trail: Log validation disabled"
done
# 2.3: S3 bucket for CloudTrail
echo "2.3: Checking CloudTrail S3 bucket access..."
aws cloudtrail describe-trails \
--query 'trailList[*].S3BucketName' --output text | \
while read bucket; do
public=$(aws s3api get-bucket-acl --bucket "$bucket" 2>&1 | \
grep -c "AllUsers")
if [ "$public" -gt 0 ]; then
    echo "  ⚠️ $bucket: Publicly accessible"
else
    echo "  ✓ $bucket: Not public"
fi
done
# 2.4: CloudTrail integrated with CloudWatch Logs
echo "2.4: Checking CloudWatch Logs integration..."
aws cloudtrail describe-trails \
--query 'trailList[*].[Name,CloudWatchLogsLogGroupArn]' \
--output text | \
while read name log_group; do
if [ "$log_group" = "None" ]; then
    echo "  ⚠️ $name: Not integrated with CloudWatch Logs"
else
    echo "  ✓ $name: Integrated with CloudWatch"
fi
done
# 2.5: AWS Config enabled
echo "2.5: Checking AWS Config..."
recorders=$(aws configservice describe-configuration-recorders \
--query 'ConfigurationRecorders[*].name' --output text)
if [ -z "$recorders" ]; then
echo " ❌ AWS Config not enabled"
else
echo " ✓ AWS Config enabled: $recorders"
fi
# 2.6: S3 bucket logging
echo "2.6: Checking S3 bucket logging..."
aws s3api list-buckets --query 'Buckets[*].Name' --output text | \
while read bucket; do
logging=$(aws s3api get-bucket-logging --bucket "$bucket" 2>&1)
if ! echo "$logging" | grep -q "LoggingEnabled"; then
echo " ⚠️ $bucket: Access logging disabled"
fi
done
# 2.7: VPC Flow Logs
echo "2.7: Checking VPC Flow Logs..."
aws ec2 describe-vpcs --query 'Vpcs[*].VpcId' --output text | \
while read vpc; do
flow_logs=$(aws ec2 describe-flow-logs \
--filter "Name=resource-id,Values=$vpc" \
--query 'FlowLogs[*].FlowLogId' --output text)
if [ -z "$flow_logs" ]; then
    echo "  ⚠️ $vpc: No flow logs enabled"
else
    echo "  ✓ $vpc: Flow logs enabled"
fi
done
```
### Monitoring (3.x)
```bash
#!/bin/bash
# cis-monitoring-checks.sh
echo "=== CIS Monitoring Compliance Checks ==="
# Check for required CloudWatch metric filters and alarms
required_filters=(
"unauthorized-api-calls"
"no-mfa-console-signin"
"root-usage"
"iam-changes"
"cloudtrail-changes"
"console-signin-failures"
"cmk-changes"
"s3-bucket-policy-changes"
"aws-config-changes"
"security-group-changes"
"nacl-changes"
"network-gateway-changes"
"route-table-changes"
"vpc-changes"
)
log_group=$(aws cloudtrail describe-trails \
--query 'trailList[0].CloudWatchLogsLogGroupArn' \
--output text | cut -d: -f7)
if [ -z "$log_group" ] || [ "$log_group" = "None" ]; then
echo " ❌ CloudTrail not integrated with CloudWatch Logs"
else
echo "Checking metric filters for log group: $log_group"
existing_filters=$(aws logs describe-metric-filters \
--log-group-name "$log_group" \
--query 'metricFilters[*].filterName' --output text)
for filter in "${required_filters[@]}"; do
if echo "$existing_filters" | grep -q "$filter"; then
    echo "  ✓ $filter: Configured"
else
    echo "  ⚠️ $filter: Missing"
fi
done
fi
```
### Networking (4.x)
```bash
#!/bin/bash
# cis-networking-checks.sh
echo "=== CIS Networking Compliance Checks ==="
# 4.1: No security groups allow 0.0.0.0/0 ingress to port 22
echo "4.1: Checking SSH access (port 22)..."
aws ec2 describe-security-groups \
--query 'SecurityGroups[*].[GroupId,GroupName,IpPermissions]' \
--output json | \
jq -r '.[] | select(.[2][]? |
select(.FromPort == 22 and .IpRanges[]?.CidrIp == "0.0.0.0/0")) |
" ⚠️ \(.[0]): \(.[1]) allows SSH from 0.0.0.0/0"'
# 4.2: No security groups allow 0.0.0.0/0 ingress to port 3389
echo "4.2: Checking RDP access (port 3389)..."
aws ec2 describe-security-groups \
--query 'SecurityGroups[*].[GroupId,GroupName,IpPermissions]' \
--output json | \
jq -r '.[] | select(.[2][]? |
select(.FromPort == 3389 and .IpRanges[]?.CidrIp == "0.0.0.0/0")) |
" ⚠️ \(.[0]): \(.[1]) allows RDP from 0.0.0.0/0"'
# 4.3: Default security group restricts all traffic
echo "4.3: Checking default security groups..."
aws ec2 describe-security-groups \
--filters Name=group-name,Values=default \
--query 'SecurityGroups[*].[GroupId,IpPermissions,IpPermissionsEgress]' \
--output json | \
jq -r '.[] | select((.[1] | length) > 0 or (.[2] | length) > 1) |
" ⚠️ \(.[0]): Default SG has rules"'
```
## PCI-DSS Compliance Checks
```python
#!/usr/bin/env python3
# pci-dss-checker.py
import boto3
def check_pci_compliance():
    """Check PCI-DSS requirements"""
    ec2 = boto3.client('ec2')
    rds = boto3.client('rds')
    s3 = boto3.client('s3')
    issues = []
    # Requirement 1: Network security
    sgs = ec2.describe_security_groups()
    for sg in sgs['SecurityGroups']:
        for perm in sg.get('IpPermissions', []):
            for ip_range in perm.get('IpRanges', []):
                if ip_range.get('CidrIp') == '0.0.0.0/0':
                    issues.append(f"PCI 1.2: {sg['GroupId']} open to internet")
    # Requirement 2: Secure configurations
    # Check for default passwords, etc.
    # Requirement 3: Protect cardholder data
    volumes = ec2.describe_volumes()
    for vol in volumes['Volumes']:
        if not vol['Encrypted']:
            issues.append(f"PCI 3.4: Volume {vol['VolumeId']} not encrypted")
    # Requirement 4: Encrypt transmission
    # Check for SSL/TLS on load balancers
    # Requirement 8: Access controls
    iam = boto3.client('iam')
    users = iam.list_users()
    for user in users['Users']:
        mfa = iam.list_mfa_devices(UserName=user['UserName'])
        if not mfa['MFADevices']:
            issues.append(f"PCI 8.3: {user['UserName']} no MFA")
    # Requirement 10: Logging
    cloudtrail = boto3.client('cloudtrail')
    trails = cloudtrail.describe_trails()
    if not trails['trailList']:
        issues.append("PCI 10.1: No CloudTrail enabled")
    return issues

if __name__ == "__main__":
    print("PCI-DSS Compliance Check")
    print("=" * 50)
    issues = check_pci_compliance()
    if not issues:
        print("✓ No PCI-DSS issues found")
    else:
        print(f"Found {len(issues)} issues:\n")
        for issue in issues:
            print(f"  ⚠️ {issue}")
```
## HIPAA Compliance Checks
```bash
#!/bin/bash
# hipaa-checker.sh
echo "=== HIPAA Compliance Checks ==="
# Access Controls (164.308(a)(3))
echo "Access Controls:"
aws iam get-credential-report --query 'Content' --output text | base64 -d | \
awk -F, 'NR>1 && $4=="true" && $8=="false" {print "  ⚠️ " $1 ": No MFA (164.312(a)(2)(i))"}'
# Audit Controls (164.312(b))
echo ""
echo "Audit Controls:"
trails=$(aws cloudtrail describe-trails --query 'trailList[*].Name' --output text)
if [ -z "$trails" ]; then
echo " ❌ No CloudTrail (164.312(b))"
else
echo " ✓ CloudTrail enabled"
fi
# Encryption (164.312(a)(2)(iv))
echo ""
echo "Encryption at Rest:"
aws ec2 describe-volumes \
--query 'Volumes[?Encrypted==`false`].VolumeId' \
--output text | \
while read vol; do
echo " ⚠️ $vol: Not encrypted (164.312(a)(2)(iv))"
done
aws rds describe-db-instances \
--query 'DBInstances[?StorageEncrypted==`false`].DBInstanceIdentifier' \
--output text | \
while read db; do
echo " ⚠️ $db: Not encrypted (164.312(a)(2)(iv))"
done
# Transmission Security (164.312(e)(1))
echo ""
echo "Transmission Security:"
echo " Check: All data in transit uses TLS 1.2+"
```
## Automated Compliance Reporting
```python
#!/usr/bin/env python3
# compliance-report.py
import boto3
import json
from datetime import datetime
def generate_compliance_report(framework='cis'):
    """Generate comprehensive compliance report"""
    report = {
        'framework': framework,
        'generated': datetime.now().isoformat(),
        'checks': [],
        'summary': {
            'total': 0,
            'passed': 0,
            'failed': 0,
            'score': 0
        }
    }
    # Run all checks based on framework
    if framework == 'cis':
        checks = run_cis_checks()
    elif framework == 'pci':
        checks = run_pci_checks()
    elif framework == 'hipaa':
        checks = run_hipaa_checks()
    else:
        raise ValueError(f"Unknown framework: {framework}")
    report['checks'] = checks
    report['summary']['total'] = len(checks)
    report['summary']['passed'] = sum(1 for c in checks if c['status'] == 'PASS')
    report['summary']['failed'] = report['summary']['total'] - report['summary']['passed']
    if report['summary']['total'] > 0:  # avoid division by zero when no checks ran
        report['summary']['score'] = (report['summary']['passed'] / report['summary']['total']) * 100
    return report

def run_cis_checks():
    # Implement CIS checks
    return []

def run_pci_checks():
    # Implement PCI checks
    return []

def run_hipaa_checks():
    # Implement HIPAA checks
    return []

if __name__ == "__main__":
    import sys
    framework = sys.argv[1] if len(sys.argv) > 1 else 'cis'
    report = generate_compliance_report(framework)
    print(f"\n{framework.upper()} Compliance Report")
    print("=" * 50)
    print(f"Score: {report['summary']['score']:.1f}%")
    print(f"Passed: {report['summary']['passed']}/{report['summary']['total']}")
    print(f"Failed: {report['summary']['failed']}/{report['summary']['total']}")
    # Save to file
    with open(f'compliance-{framework}-{datetime.now().strftime("%Y%m%d")}.json', 'w') as f:
        json.dump(report, f, indent=2)
```
## Example Prompts
- "Run CIS AWS Foundations compliance check"
- "Generate a PCI-DSS compliance report"
- "Check HIPAA compliance for my AWS account"
- "Audit against SOC 2 requirements"
- "Create a compliance dashboard"
## Best Practices
- Run compliance checks weekly
- Automate with Lambda/EventBridge
- Track compliance trends over time
- Document exceptions with justification
- Integrate with AWS Security Hub
- Use AWS Config Rules for continuous monitoring
## Kiro CLI Integration
```bash
kiro-cli chat "Use aws-compliance-checker to run CIS benchmark"
kiro-cli chat "Generate PCI-DSS report with aws-compliance-checker"
```
## Additional Resources
- [CIS AWS Foundations Benchmark](https://www.cisecurity.org/benchmark/amazon_web_services)
- [AWS Security Hub](https://aws.amazon.com/security-hub/)
- [AWS Compliance Programs](https://aws.amazon.com/compliance/programs/)


@@ -0,0 +1,397 @@
---
name: aws-iam-best-practices
description: "IAM policy review, hardening, and least privilege implementation"
category: security
risk: safe
source: community
tags: "[aws, iam, security, access-control, kiro-cli, least-privilege]"
date_added: "2026-02-27"
---
# AWS IAM Best Practices
Review and harden IAM policies following AWS security best practices and least privilege principles.
## When to Use
Use this skill when you need to review IAM policies, implement least privilege access, or harden IAM security.
## Core Principles
**Least Privilege**
- Grant minimum permissions needed
- Use managed policies when possible
- Avoid wildcard (*) permissions
- Regular access reviews
**Defense in Depth**
- Enable MFA for all users
- Use IAM roles instead of access keys
- Implement service control policies (SCPs)
- Enable CloudTrail for audit
**Separation of Duties**
- Separate admin and user roles
- Use different roles for different environments
- Implement approval workflows
- Regular permission audits
## IAM Security Checks
### Find Overly Permissive Policies
```bash
# List policies with full admin access
aws iam list-policies --scope Local \
--query 'Policies[*].[PolicyName,Arn]' --output table | \
grep -i admin
# Find policies with wildcard actions
aws iam list-policies --scope Local --query 'Policies[*].Arn' --output text | \
while read arn; do
version=$(aws iam get-policy --policy-arn "$arn" \
--query 'Policy.DefaultVersionId' --output text)
doc=$(aws iam get-policy-version --policy-arn "$arn" \
--version-id "$version" --query 'PolicyVersion.Document')
if echo "$doc" | grep -q '"Action": "\*"'; then
echo "Wildcard action in: $arn"
fi
done
# Find inline policies (should use managed policies)
aws iam list-users --query 'Users[*].UserName' --output text | \
while read user; do
policies=$(aws iam list-user-policies --user-name "$user" \
--query 'PolicyNames' --output text)
if [ -n "$policies" ]; then
echo "Inline policies on user $user: $policies"
fi
done
```
### MFA Enforcement
```bash
# List users without MFA
aws iam get-credential-report --query 'Content' --output text | base64 -d | \
awk -F, 'NR>1 && $4=="true" && $8=="false" {print $1}'
# Check if MFA is required in policies
aws iam list-policies --scope Local --query 'Policies[*].Arn' --output text | \
while read arn; do
version=$(aws iam get-policy --policy-arn "$arn" \
--query 'Policy.DefaultVersionId' --output text)
doc=$(aws iam get-policy-version --policy-arn "$arn" \
--version-id "$version" --query 'PolicyVersion.Document')
if echo "$doc" | grep -q "aws:MultiFactorAuthPresent"; then
echo "MFA enforced in: $arn"
fi
done
# Enable MFA for a user (returns QR code)
aws iam create-virtual-mfa-device \
--virtual-mfa-device-name user-mfa \
--outfile /tmp/qr.png \
--bootstrap-method QRCodePNG
```
### Access Key Management
```bash
# Find old access keys (>90 days)
aws iam list-users --query 'Users[*].UserName' --output text | \
while read user; do
aws iam list-access-keys --user-name "$user" \
--query 'AccessKeyMetadata[*].[AccessKeyId,CreateDate,Status]' \
--output text | \
while read key_id create_date status; do
age_days=$(( ($(date +%s) - $(date -d "$create_date" +%s)) / 86400 ))
if [ $age_days -gt 90 ]; then
echo "$user: Key $key_id is $age_days days old"
fi
done
done
# Rotate access key
OLD_KEY="AKIAIOSFODNN7EXAMPLE"
USER="myuser"
# Create new key
NEW_KEY=$(aws iam create-access-key --user-name "$USER")
echo "New key created. Update applications, then run:"
echo "aws iam delete-access-key --user-name $USER --access-key-id $OLD_KEY"
# Deactivate old key (test first)
aws iam update-access-key \
--user-name "$USER" \
--access-key-id "$OLD_KEY" \
--status Inactive
```
### Role and Policy Analysis
```bash
# List unused roles (no activity in 90 days)
aws iam list-roles --query 'Roles[*].[RoleName,RoleLastUsed.LastUsedDate]' \
--output text | \
while read role last_used; do
if [ "$last_used" = "None" ]; then
echo "Never used: $role"
fi
done
# Find roles with trust relationships to external accounts
aws iam list-roles --query 'Roles[*].RoleName' --output text | \
while read role; do
trust=$(aws iam get-role --role-name "$role" \
--query 'Role.AssumeRolePolicyDocument')
if echo "$trust" | grep -q '"AWS":'; then
echo "External trust: $role"
fi
done
# Analyze policy permissions
aws iam simulate-principal-policy \
--policy-source-arn arn:aws:iam::123456789012:user/myuser \
--action-names s3:GetObject s3:PutObject \
--resource-arns arn:aws:s3:::mybucket/*
```
## IAM Policy Templates
### Least Privilege S3 Access
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-bucket/user-data/${aws:username}/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {
        "StringLike": {
          "s3:prefix": "user-data/${aws:username}/*"
        }
      }
    }
  ]
}
```
### MFA-Required Policy
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "false"
        }
      }
    }
  ]
}
```
### Time-Based Access
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "DateGreaterThan": {
          "aws:CurrentTime": "2026-01-01T00:00:00Z"
        },
        "DateLessThan": {
          "aws:CurrentTime": "2026-12-31T23:59:59Z"
        }
      }
    }
  ]
}
```
### IP-Restricted Access
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "203.0.113.0/24",
            "198.51.100.0/24"
          ]
        }
      }
    }
  ]
}
```
## IAM Hardening Checklist
**User Management**
- [ ] Enable MFA for all users
- [ ] Remove unused IAM users
- [ ] Rotate access keys every 90 days
- [ ] Use IAM roles instead of long-term credentials
- [ ] Implement password policy (length, complexity, rotation)
**Policy Management**
- [ ] Replace inline policies with managed policies
- [ ] Remove wildcard (*) permissions
- [ ] Implement least privilege
- [ ] Use policy conditions (MFA, IP, time)
- [ ] Regular policy reviews
**Role Management**
- [ ] Use roles for EC2 instances
- [ ] Implement cross-account roles properly
- [ ] Review trust relationships
- [ ] Remove unused roles
- [ ] Use session tags for fine-grained access
**Monitoring**
- [ ] Enable CloudTrail for IAM events
- [ ] Set up CloudWatch alarms for IAM changes
- [ ] Use AWS IAM Access Analyzer
- [ ] Regular access reviews
- [ ] Monitor for privilege escalation
## Automated IAM Hardening
```python
#!/usr/bin/env python3
# iam-hardening.py
import boto3
from datetime import datetime, timedelta
iam = boto3.client('iam')
def enforce_mfa():
    """Identify users without MFA"""
    users = iam.list_users()['Users']
    no_mfa = []
    for user in users:
        mfa_devices = iam.list_mfa_devices(
            UserName=user['UserName']
        )['MFADevices']
        if not mfa_devices:
            no_mfa.append(user['UserName'])
    return no_mfa

def rotate_old_keys():
    """Find access keys older than 90 days"""
    users = iam.list_users()['Users']
    old_keys = []
    for user in users:
        keys = iam.list_access_keys(
            UserName=user['UserName']
        )['AccessKeyMetadata']
        for key in keys:
            age = datetime.now(key['CreateDate'].tzinfo) - key['CreateDate']
            if age.days > 90:
                old_keys.append({
                    'user': user['UserName'],
                    'key_id': key['AccessKeyId'],
                    'age_days': age.days
                })
    return old_keys

def find_overpermissive_policies():
    """Find policies with wildcard actions"""
    policies = iam.list_policies(Scope='Local')['Policies']
    overpermissive = []
    for policy in policies:
        version = iam.get_policy_version(
            PolicyArn=policy['Arn'],
            VersionId=policy['DefaultVersionId']
        )
        doc = version['PolicyVersion']['Document']
        for statement in doc.get('Statement', []):
            action = statement.get('Action')
            # "Action" may be a string or a list of strings
            if action == '*' or (isinstance(action, list) and '*' in action):
                overpermissive.append(policy['PolicyName'])
                break
    return overpermissive

if __name__ == "__main__":
    print("IAM Hardening Report")
    print("=" * 50)
    print("\nUsers without MFA:")
    for user in enforce_mfa():
        print(f"  - {user}")
    print("\nOld access keys (>90 days):")
    for key in rotate_old_keys():
        print(f"  - {key['user']}: {key['age_days']} days")
    print("\nOverpermissive policies:")
    for policy in find_overpermissive_policies():
        print(f"  - {policy}")
```
## Example Prompts
- "Review my IAM policies for security issues"
- "Find users without MFA enabled"
- "Create a least privilege policy for S3 access"
- "Identify overly permissive IAM roles"
- "Generate an IAM hardening report"
## Best Practices
- Use AWS managed policies when possible
- Implement policy versioning
- Test policies in non-production first
- Document policy purposes
- Regular access reviews (quarterly)
- Use IAM Access Analyzer
- Implement SCPs for organization-wide controls
## Kiro CLI Integration
```bash
kiro-cli chat "Use aws-iam-best-practices to review my IAM setup"
kiro-cli chat "Create a least privilege policy with aws-iam-best-practices"
```
## Additional Resources
- [IAM Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)
- [IAM Policy Simulator](https://policysim.aws.amazon.com/)
- [IAM Access Analyzer](https://aws.amazon.com/iam/features/analyze-access/)


@@ -0,0 +1,465 @@
---
name: aws-secrets-rotation
description: "Automate AWS secrets rotation for RDS, API keys, and credentials"
category: security
risk: safe
source: community
tags: "[aws, secrets-manager, security, automation, kiro-cli, credentials]"
date_added: "2026-02-27"
---
# AWS Secrets Rotation
Automate rotation of secrets, credentials, and API keys using AWS Secrets Manager and Lambda.
## When to Use
Use this skill when you need to implement automated secrets rotation, manage credentials securely, or comply with security policies requiring regular key rotation.
## Supported Secret Types
**AWS Services**
- RDS database credentials
- DocumentDB credentials
- Redshift credentials
- ElastiCache credentials
**Third-Party Services**
- API keys
- OAuth tokens
- SSH keys
- Custom credentials
## Secrets Manager Setup
### Create a Secret
```bash
# Create RDS secret
aws secretsmanager create-secret \
--name prod/db/mysql \
--description "Production MySQL credentials" \
--secret-string '{
"username": "admin",
"password": "CHANGE_ME",
"engine": "mysql",
"host": "mydb.cluster-abc.us-east-1.rds.amazonaws.com",
"port": 3306,
"dbname": "myapp"
}'
# Create API key secret
aws secretsmanager create-secret \
--name prod/api/stripe \
--secret-string '{
"api_key": "sk_live_xxxxx",
"webhook_secret": "whsec_xxxxx"
}'
# Create secret from file
aws secretsmanager create-secret \
--name prod/ssh/private-key \
--secret-binary fileb://~/.ssh/id_rsa
```
### Retrieve Secrets
```bash
# Get secret value
aws secretsmanager get-secret-value \
--secret-id prod/db/mysql \
--query 'SecretString' --output text
# Get specific field
aws secretsmanager get-secret-value \
--secret-id prod/db/mysql \
--query 'SecretString' --output text | \
jq -r '.password'
# Get binary secret
aws secretsmanager get-secret-value \
--secret-id prod/ssh/private-key \
--query 'SecretBinary' --output text | \
base64 -d > private-key.pem
```
## Automatic Rotation Setup
### Enable RDS Rotation
```bash
# Enable automatic rotation (30 days)
aws secretsmanager rotate-secret \
--secret-id prod/db/mysql \
--rotation-lambda-arn arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRDSMySQLRotation \
--rotation-rules AutomaticallyAfterDays=30
# Rotate immediately
aws secretsmanager rotate-secret \
--secret-id prod/db/mysql
# Check rotation status
aws secretsmanager describe-secret \
--secret-id prod/db/mysql \
--query 'RotationEnabled'
```
### Lambda Rotation Function
```python
# lambda_rotation.py
import boto3
import json

secrets_client = boto3.client('secretsmanager')
rds_client = boto3.client('rds')

def lambda_handler(event, context):
    """Rotate an RDS MySQL password through the four Secrets Manager steps."""
    secret_arn = event['SecretId']
    token = event['ClientRequestToken']
    step = event['Step']

    if step == "createSecret":
        # Copy the current secret and stage a new password as AWSPENDING
        current = secrets_client.get_secret_value(
            SecretId=secret_arn, VersionStage='AWSCURRENT')
        secret = json.loads(current['SecretString'])
        secret['password'] = generate_password()
        secrets_client.put_secret_value(
            SecretId=secret_arn,
            ClientRequestToken=token,
            SecretString=json.dumps(secret),
            VersionStages=['AWSPENDING']
        )

    elif step == "setSecret":
        # Apply the PENDING password (not the current one) to the database
        pending = secrets_client.get_secret_value(
            SecretId=secret_arn, VersionId=token, VersionStage='AWSPENDING')
        secret = json.loads(pending['SecretString'])
        rds_client.modify_db_instance(
            DBInstanceIdentifier=secret['dbInstanceIdentifier'],
            MasterUserPassword=secret['password'],
            ApplyImmediately=True
        )

    elif step == "testSecret":
        # Verify the PENDING credentials actually work
        import pymysql
        pending = secrets_client.get_secret_value(
            SecretId=secret_arn, VersionId=token, VersionStage='AWSPENDING')
        secret = json.loads(pending['SecretString'])
        conn = pymysql.connect(
            host=secret['host'],
            user=secret['username'],
            password=secret['password'],
            database=secret['dbname']
        )
        conn.close()

    elif step == "finishSecret":
        # Promote AWSPENDING to AWSCURRENT
        metadata = secrets_client.describe_secret(SecretId=secret_arn)
        current_version = next(
            version for version, stages in metadata['VersionIdsToStages'].items()
            if 'AWSCURRENT' in stages
        )
        secrets_client.update_secret_version_stage(
            SecretId=secret_arn,
            VersionStage='AWSCURRENT',
            MoveToVersionId=token,
            RemoveFromVersionId=current_version
        )

    return {'statusCode': 200}

def generate_password(length=32):
    import secrets
    import string
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()"
    return ''.join(secrets.choice(alphabet) for _ in range(length))
```
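For each of the four steps, Secrets Manager invokes the function with an event of this shape; the `Step` value cycles through `createSecret`, `setSecret`, `testSecret`, and `finishSecret` (ARN and token values below are illustrative):

```json
{
  "SecretId": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db/mysql-AbCdEf",
  "ClientRequestToken": "123e4567-e89b-12d3-a456-426614174000",
  "Step": "createSecret"
}
```

The `ClientRequestToken` is also the version ID of the staged `AWSPENDING` secret, which is why the handler uses it to look up the pending version.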
### Custom Rotation for API Keys
```python
# api_key_rotation.py
# Illustrative third-party key rotation. The key-management endpoints
# below are placeholders (Stripe does not expose a public API for
# creating or revoking secret keys) -- swap in your provider's actual
# key-management API.
import boto3
import requests
import json

secrets_client = boto3.client('secretsmanager')

def rotate_stripe_key(secret_arn, token, step):
    """Rotate a third-party API key through the Secrets Manager steps."""
    if step == "createSecret":
        current = secrets_client.get_secret_value(
            SecretId=secret_arn, VersionStage='AWSCURRENT')
        secret = json.loads(current['SecretString'])
        # Create a new key using the still-valid old key
        response = requests.post(
            'https://api.example.com/v1/api_keys',  # placeholder endpoint
            auth=(secret['api_key'], ''),
            data={'name': f'rotated-{token[:8]}'}
        )
        secret['api_key'] = response.json()['secret']
        secrets_client.put_secret_value(
            SecretId=secret_arn,
            ClientRequestToken=token,
            SecretString=json.dumps(secret),
            VersionStages=['AWSPENDING']
        )
    elif step == "testSecret":
        # Validate the PENDING key, not the current one
        pending = secrets_client.get_secret_value(
            SecretId=secret_arn, VersionId=token, VersionStage='AWSPENDING')
        secret = json.loads(pending['SecretString'])
        response = requests.get(
            'https://api.example.com/v1/balance',  # placeholder endpoint
            auth=(secret['api_key'], '')
        )
        if response.status_code != 200:
            raise Exception("New key failed validation")
    elif step == "finishSecret":
        current = secrets_client.get_secret_value(
            SecretId=secret_arn, VersionStage='AWSCURRENT')
        old_key = json.loads(current['SecretString'])['api_key']
        pending = secrets_client.get_secret_value(
            SecretId=secret_arn, VersionId=token, VersionStage='AWSPENDING')
        new_key = json.loads(pending['SecretString'])['api_key']
        # Promote the new key first, then revoke the old one with it
        secrets_client.update_secret_version_stage(
            SecretId=secret_arn,
            VersionStage='AWSCURRENT',
            MoveToVersionId=token,
            RemoveFromVersionId=current['VersionId']
        )
        requests.delete(
            f'https://api.example.com/v1/api_keys/{old_key}',  # placeholder
            auth=(new_key, '')
        )
```
## Rotation Monitoring
### CloudWatch Alarms
```bash
# Create alarm for rotation failures
aws cloudwatch put-metric-alarm \
--alarm-name secrets-rotation-failures \
--alarm-description "Alert on secrets rotation failures" \
--metric-name RotationFailed \
--namespace AWS/SecretsManager \
--statistic Sum \
--period 300 \
--evaluation-periods 1 \
--threshold 1 \
--comparison-operator GreaterThanThreshold \
--alarm-actions arn:aws:sns:us-east-1:123456789012:alerts
```
### Rotation Audit Script
```bash
#!/bin/bash
# audit-rotations.sh
echo "Secrets Rotation Audit"
echo "====================="
aws secretsmanager list-secrets --query 'SecretList[*].[Name,RotationEnabled,LastRotatedDate]' \
--output text | \
while read name enabled last_rotated; do
echo ""
echo "Secret: $name"
echo " Rotation Enabled: $enabled"
echo " Last Rotated: $last_rotated"
  if [ "$enabled" = "True" ]; then
    # Check rotation schedule
    rules=$(aws secretsmanager describe-secret --secret-id "$name" \
      --query 'RotationRules.AutomaticallyAfterDays' --output text)
    echo "  Rotation Schedule: Every $rules days"
    # Calculate days since last rotation (GNU date; use gdate on macOS)
    if [ "$last_rotated" != "None" ] && [ "$rules" != "None" ]; then
      days_ago=$(( ($(date +%s) - $(date -d "$last_rotated" +%s)) / 86400 ))
      echo "  Days Since Rotation: $days_ago"
      if [ "$days_ago" -gt "$rules" ]; then
        echo "  ⚠️  OVERDUE for rotation!"
      fi
    fi
  fi
done
```
## Application Integration
### Python SDK
```python
import boto3
import json
import pymysql

def get_secret(secret_name):
    """Retrieve and decode a JSON secret from Secrets Manager."""
    client = boto3.client('secretsmanager')
    try:
        response = client.get_secret_value(SecretId=secret_name)
        return json.loads(response['SecretString'])
    except Exception as e:
        print(f"Error retrieving secret: {e}")
        raise

# Usage
db_creds = get_secret('prod/db/mysql')
connection = pymysql.connect(
    host=db_creds['host'],
    user=db_creds['username'],
    password=db_creds['password'],
    database=db_creds['dbname']
)
```
### Node.js SDK
```javascript
// Uses AWS SDK for JavaScript v2; for v3 use @aws-sdk/client-secrets-manager
const AWS = require('aws-sdk');
const mysql = require('mysql2/promise');
const secretsManager = new AWS.SecretsManager();

async function getSecret(secretName) {
  try {
    const data = await secretsManager.getSecretValue({
      SecretId: secretName
    }).promise();
    return JSON.parse(data.SecretString);
  } catch (err) {
    console.error('Error retrieving secret:', err);
    throw err;
  }
}

// Usage (inside an async function)
const dbCreds = await getSecret('prod/db/mysql');
const connection = await mysql.createConnection({
  host: dbCreds.host,
  user: dbCreds.username,
  password: dbCreds.password,
  database: dbCreds.dbname
});
```
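Fetching the secret on every request adds latency and API cost. A small TTL cache keeps the value warm between rotations; the sketch below is a generic decorator (AWS also ships the `aws-secretsmanager-caching` library for exactly this). Keep the TTL shorter than your rotation window so applications pick up rotated credentials promptly.

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds=300):
    """Cache a function's return value per argument tuple for ttl_seconds.

    Minimal sketch for wrapping get_secret(); not a boto3 feature.
    """
    def decorator(fn):
        cache = {}
        @wraps(fn)
        def wrapper(*args):
            now = time.time()
            if args in cache:
                value, fetched_at = cache[args]
                if now - fetched_at < ttl_seconds:
                    return value
            # Miss or expired: fetch fresh and remember when we did
            value = fn(*args)
            cache[args] = (value, now)
            return value
        return wrapper
    return decorator

# Usage (wrapping the get_secret helper from the Python example above):
# @ttl_cache(ttl_seconds=300)
# def get_secret_cached(name):
#     return get_secret(name)
```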
## Rotation Best Practices
**Planning**
- [ ] Identify all secrets requiring rotation
- [ ] Define rotation schedules (30, 60, 90 days)
- [ ] Test rotation in non-production first
- [ ] Document rotation procedures
- [ ] Plan for emergency rotation
**Implementation**
- [ ] Use AWS managed rotation when possible
- [ ] Implement proper error handling
- [ ] Add CloudWatch monitoring
- [ ] Test application compatibility
- [ ] Implement gradual rollout
**Operations**
- [ ] Monitor rotation success/failure
- [ ] Set up alerts for failures
- [ ] Regular rotation audits
- [ ] Document troubleshooting steps
- [ ] Maintain rotation runbooks
## Emergency Rotation
```bash
# Immediate rotation (compromise detected)
aws secretsmanager rotate-secret \
--secret-id prod/db/mysql \
--rotate-immediately
# Force rotation even if recently rotated
aws secretsmanager rotate-secret \
--secret-id prod/api/stripe \
--rotation-lambda-arn arn:aws:lambda:us-east-1:123456789012:function:RotateStripeKey \
--rotate-immediately
# Verify rotation completed
aws secretsmanager describe-secret \
--secret-id prod/db/mysql \
--query 'LastRotatedDate'
```
## Compliance Tracking
```python
#!/usr/bin/env python3
# compliance-report.py
import boto3
from datetime import datetime, timedelta
client = boto3.client('secretsmanager')
def generate_compliance_report():
    # Paginate: list_secrets returns at most 100 secrets per call
    paginator = client.get_paginator('list_secrets')
    secrets = [s for page in paginator.paginate() for s in page['SecretList']]
compliant = []
non_compliant = []
for secret in secrets:
name = secret['Name']
rotation_enabled = secret.get('RotationEnabled', False)
last_rotated = secret.get('LastRotatedDate')
if not rotation_enabled:
non_compliant.append({
'name': name,
'issue': 'Rotation not enabled'
})
continue
if last_rotated:
days_ago = (datetime.now(last_rotated.tzinfo) - last_rotated).days
if days_ago > 90:
non_compliant.append({
'name': name,
'issue': f'Not rotated in {days_ago} days'
})
else:
compliant.append(name)
else:
non_compliant.append({
'name': name,
'issue': 'Never rotated'
})
print(f"Compliant Secrets: {len(compliant)}")
print(f"Non-Compliant Secrets: {len(non_compliant)}")
print("\nNon-Compliant Details:")
for item in non_compliant:
print(f" - {item['name']}: {item['issue']}")
if __name__ == "__main__":
generate_compliance_report()
```
## Example Prompts
- "Set up automatic rotation for my RDS credentials"
- "Create a Lambda function to rotate API keys"
- "Audit all secrets for rotation compliance"
- "Implement emergency rotation for compromised credentials"
- "Generate a secrets rotation report"
## Kiro CLI Integration
```bash
kiro-cli chat "Use aws-secrets-rotation to set up RDS credential rotation"
kiro-cli chat "Create a rotation audit report with aws-secrets-rotation"
```
## Additional Resources
- [AWS Secrets Manager Rotation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html)
- [Rotation Lambda Templates](https://github.com/aws-samples/aws-secrets-manager-rotation-lambdas)
- [Best Practices for Secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/best-practices.html)

---
name: aws-security-audit
description: "Comprehensive AWS security posture assessment using AWS CLI and security best practices"
category: security
risk: safe
source: community
tags: "[aws, security, audit, compliance, kiro-cli, security-assessment]"
date_added: "2026-02-27"
---
# AWS Security Audit
Perform comprehensive security assessments of AWS environments to identify vulnerabilities and misconfigurations.
## When to Use
Use this skill when you need to audit AWS security posture, identify vulnerabilities, or prepare for compliance assessments.
## Audit Categories
**Identity & Access Management**
- Overly permissive IAM policies
- Unused IAM users and roles
- MFA enforcement gaps
- Root account usage
- Access key rotation
**Network Security**
- Open security groups (0.0.0.0/0)
- Public S3 buckets
- Unencrypted data in transit
- VPC flow logs disabled
- Network ACL misconfigurations
**Data Protection**
- Unencrypted EBS volumes
- Unencrypted RDS instances
- S3 bucket encryption disabled
- Backup policies missing
- KMS key rotation disabled
**Logging & Monitoring**
- CloudTrail disabled
- CloudWatch alarms missing
- VPC Flow Logs disabled
- S3 access logging disabled
- Config recording disabled
## Security Audit Commands
### IAM Security Checks
```bash
# List users without MFA (credential report is base64-encoded CSV; mfa_active is column 8)
aws iam get-credential-report --query 'Content' --output text | \
base64 -d | \
awk -F, '$8=="false" && $1!="<root_account>" {print $1}'
# Find unused IAM users (no activity in 90 days)
aws iam list-users --query 'Users[*].[UserName]' --output text | \
while read user; do
last_used=$(aws iam get-user --user-name "$user" \
--query 'User.PasswordLastUsed' --output text)
echo "$user: $last_used"
done
# Find users, groups, and roles with the AWS managed AdministratorAccess policy attached
aws iam list-entities-for-policy \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# Find access keys older than 90 days
aws iam list-users --query 'Users[*].UserName' --output text | \
while read user; do
aws iam list-access-keys --user-name "$user" \
--query 'AccessKeyMetadata[*].[AccessKeyId,CreateDate]' \
--output text
done
# Check root account access keys
aws iam get-account-summary \
--query 'SummaryMap.AccountAccessKeysPresent'
```
### Network Security Checks
```bash
# Find security groups open to the world
aws ec2 describe-security-groups \
--query 'SecurityGroups[?IpPermissions[?IpRanges[?CidrIp==`0.0.0.0/0`]]].[GroupId,GroupName]' \
--output table
# List public S3 buckets
aws s3api list-buckets --query 'Buckets[*].Name' --output text | \
while read bucket; do
acl=$(aws s3api get-bucket-acl --bucket "$bucket" 2>/dev/null)
if echo "$acl" | grep -q "AllUsers"; then
echo "PUBLIC: $bucket"
fi
done
# Check VPC Flow Logs status
aws ec2 describe-vpcs --query 'Vpcs[*].VpcId' --output text | \
while read vpc; do
flow_logs=$(aws ec2 describe-flow-logs \
--filter "Name=resource-id,Values=$vpc" \
--query 'FlowLogs[*].FlowLogId' --output text)
if [ -z "$flow_logs" ]; then
echo "No flow logs: $vpc"
fi
done
# Find RDS instances without encryption
aws rds describe-db-instances \
--query 'DBInstances[?StorageEncrypted==`false`].[DBInstanceIdentifier]' \
--output table
```
### Data Protection Checks
```bash
# Find unencrypted EBS volumes
aws ec2 describe-volumes \
--query 'Volumes[?Encrypted==`false`].[VolumeId,Size,State]' \
--output table
# Check S3 bucket encryption
aws s3api list-buckets --query 'Buckets[*].Name' --output text | \
while read bucket; do
encryption=$(aws s3api get-bucket-encryption \
--bucket "$bucket" 2>&1)
if echo "$encryption" | grep -q "ServerSideEncryptionConfigurationNotFoundError"; then
echo "No encryption: $bucket"
fi
done
# Find RDS snapshots that are public
aws rds describe-db-snapshots \
--query 'DBSnapshots[*].[DBSnapshotIdentifier]' --output text | \
while read snapshot; do
attrs=$(aws rds describe-db-snapshot-attributes \
--db-snapshot-identifier "$snapshot" \
--query 'DBSnapshotAttributesResult.DBSnapshotAttributes[?AttributeName==`restore`].AttributeValues' \
--output text)
if echo "$attrs" | grep -q "all"; then
echo "PUBLIC SNAPSHOT: $snapshot"
fi
done
# Check KMS key rotation
aws kms list-keys --query 'Keys[*].KeyId' --output text | \
while read key; do
rotation=$(aws kms get-key-rotation-status --key-id "$key" \
--query 'KeyRotationEnabled' --output text 2>/dev/null)
if [ "$rotation" = "False" ]; then
echo "Rotation disabled: $key"
fi
done
```
### Logging & Monitoring Checks
```bash
# Check CloudTrail status
aws cloudtrail describe-trails \
--query 'trailList[*].[Name,IsMultiRegionTrail,LogFileValidationEnabled]' \
--output table
# Verify CloudTrail is logging
aws cloudtrail get-trail-status --name my-trail \
--query 'IsLogging'
# Check if AWS Config is enabled
aws configservice describe-configuration-recorders \
--query 'ConfigurationRecorders[*].[name,roleARN]' \
--output table
# List S3 buckets without access logging
aws s3api list-buckets --query 'Buckets[*].Name' --output text | \
while read bucket; do
logging=$(aws s3api get-bucket-logging --bucket "$bucket" 2>&1)
if ! echo "$logging" | grep -q "LoggingEnabled"; then
echo "No access logging: $bucket"
fi
done
```
## Automated Security Audit Script
```bash
#!/bin/bash
# comprehensive-security-audit.sh
echo "=== AWS Security Audit Report ==="
echo "Generated: $(date)"
echo ""
# IAM Checks
echo "## IAM Security"
echo "Users without MFA:"
aws iam get-credential-report --query 'Content' --output text | \
base64 -d | \
awk -F, '$8=="false" && $1!="<root_account>" {print "  - " $1}'
echo ""
echo "Root account access keys:"
aws iam get-account-summary \
--query 'SummaryMap.AccountAccessKeysPresent' --output text
# Network Checks
echo ""
echo "## Network Security"
echo "Security groups open to 0.0.0.0/0:"
aws ec2 describe-security-groups \
--query 'SecurityGroups[?IpPermissions[?IpRanges[?CidrIp==`0.0.0.0/0`]]].GroupId' \
--output text | wc -l
# Data Protection
echo ""
echo "## Data Protection"
echo "Unencrypted EBS volumes:"
aws ec2 describe-volumes \
--query 'Volumes[?Encrypted==`false`].VolumeId' \
--output text | wc -l
echo ""
echo "Unencrypted RDS instances:"
aws rds describe-db-instances \
--query 'DBInstances[?StorageEncrypted==`false`].DBInstanceIdentifier' \
--output text | wc -l
# Logging
echo ""
echo "## Logging & Monitoring"
echo "CloudTrail status:"
aws cloudtrail describe-trails \
--query 'trailList[*].[Name,IsLogging]' \
--output table
echo ""
echo "=== End of Report ==="
```
## Security Score Calculator
```python
#!/usr/bin/env python3
# security-score.py
import base64
import csv
import io
import boto3

def calculate_security_score():
    iam = boto3.client('iam')
    ec2 = boto3.client('ec2')
    score = 100
    issues = []
    # Check MFA via the credential report (base64-encoded CSV)
    try:
        report = iam.get_credential_report()
        rows = csv.DictReader(io.StringIO(report['Content'].decode('utf-8')))
        users_without_mfa = sum(
            1 for row in rows
            if row['user'] != '<root_account>' and row['mfa_active'] == 'false'
        )
        if users_without_mfa > 0:
            score -= 10
            issues.append(f"{users_without_mfa} users without MFA")
    except iam.exceptions.CredentialReportNotPresentException:
        issues.append("Credential report missing (run generate-credential-report first)")
    # Check open security groups (count each group once)
    sgs = ec2.describe_security_groups()
    open_sgs = 0
    for sg in sgs['SecurityGroups']:
        for perm in sg.get('IpPermissions', []):
            if any(r.get('CidrIp') == '0.0.0.0/0' for r in perm.get('IpRanges', [])):
                open_sgs += 1
                break
    if open_sgs > 0:
        score -= 15
        issues.append(f"{open_sgs} security groups open to internet")
    # Check unencrypted volumes
    volumes = ec2.describe_volumes()
    unencrypted = sum(1 for v in volumes['Volumes'] if not v['Encrypted'])
    if unencrypted > 0:
        score -= 20
        issues.append(f"{unencrypted} unencrypted EBS volumes")
    print(f"Security Score: {score}/100")
    print("\nIssues Found:")
    for issue in issues:
        print(f"  - {issue}")
    return score

if __name__ == "__main__":
    calculate_security_score()
```
## Compliance Mapping
**CIS AWS Foundations Benchmark**
- 1.1: Root account usage
- 1.2-1.14: IAM policies and MFA
- 2.1-2.9: Logging (CloudTrail, Config, VPC Flow Logs)
- 4.1-4.3: Monitoring and alerting
**PCI-DSS**
- Requirement 1: Network security controls
- Requirement 2: Secure configurations
- Requirement 8: Access controls and MFA
- Requirement 10: Logging and monitoring
**HIPAA**
- Access controls (IAM)
- Audit controls (CloudTrail)
- Encryption (EBS, RDS, S3)
- Transmission security (TLS/SSL)
## Remediation Priorities
**Critical (Fix Immediately)**
- Root account access keys
- Public RDS snapshots
- Security groups open to 0.0.0.0/0 on sensitive ports
- CloudTrail disabled
**High (Fix Within 7 Days)**
- Users without MFA
- Unencrypted data at rest
- Missing VPC Flow Logs
- Overly permissive IAM policies
**Medium (Fix Within 30 Days)**
- Old access keys (>90 days)
- Missing S3 access logging
- Unused IAM users
- KMS key rotation disabled
## Example Prompts
- "Run a comprehensive security audit on my AWS account"
- "Check for IAM security issues"
- "Find all unencrypted resources"
- "Generate a security compliance report"
- "Calculate my AWS security score"
## Best Practices
- Run audits weekly
- Automate with Lambda/EventBridge
- Export results to S3 for trending
- Integrate with SIEM tools
- Track remediation progress
- Document exceptions with business justification
## Kiro CLI Integration
```bash
kiro-cli chat "Use aws-security-audit to assess my security posture"
kiro-cli chat "Generate a security audit report with aws-security-audit"
```
## Additional Resources
- [AWS Security Best Practices](https://aws.amazon.com/security/best-practices/)
- [CIS AWS Foundations Benchmark](https://www.cisecurity.org/benchmark/amazon_web_services)
- [AWS Security Hub](https://aws.amazon.com/security-hub/)

# Security Checklists
> Quick reference checklists for security audits. Use alongside vulnerability-scanner principles.
---
## OWASP Top 10 Audit Checklist
### A01: Broken Access Control
- [ ] Authorization on all protected routes
- [ ] Deny by default
- [ ] Rate limiting implemented
- [ ] CORS properly configured
### A02: Cryptographic Failures
- [ ] Passwords hashed (bcrypt/argon2, cost 12+)
- [ ] Sensitive data encrypted at rest
- [ ] TLS 1.2+ for all connections
- [ ] No secrets in code/logs
### A03: Injection
- [ ] Parameterized queries
- [ ] Input validation on all user data
- [ ] Output encoding for XSS
- [ ] No eval() or dynamic code execution
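The first two A03 items come down to parameterized queries: the driver binds the value, so user input can never alter the SQL structure. A minimal sketch using Python's built-in `sqlite3` (the `?` placeholder style varies by driver, but the principle is the same):

```python
import sqlite3

def find_user(conn, username):
    # Placeholder (?) binding: the driver escapes `username`,
    # so input like "x' OR '1'='1" is treated as a literal string.
    cur = conn.execute(
        "SELECT id, username FROM users WHERE username = ?",
        (username,),
    )
    return cur.fetchone()
```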
### A04: Insecure Design
- [ ] Threat modeling done
- [ ] Security requirements defined
- [ ] Business logic validated
### A05: Security Misconfiguration
- [ ] Unnecessary features disabled
- [ ] Error messages sanitized
- [ ] Security headers configured
- [ ] Default credentials changed
### A06: Vulnerable Components
- [ ] Dependencies up to date
- [ ] No known vulnerabilities
- [ ] Unused dependencies removed
### A07: Authentication Failures
- [ ] MFA available
- [ ] Session invalidation on logout
- [ ] Session timeout implemented
- [ ] Brute force protection
### A08: Integrity Failures
- [ ] Dependency integrity verified
- [ ] CI/CD pipeline secured
- [ ] Update mechanism secured
### A09: Logging Failures
- [ ] Security events logged
- [ ] Logs protected
- [ ] No sensitive data in logs
- [ ] Alerting configured
### A10: SSRF
- [ ] URL validation implemented
- [ ] Allow-list for external calls
- [ ] Network segmentation
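The SSRF items above can be sketched as a URL guard: scheme check, host allow-list, and a resolution check that rejects private ranges. `ALLOWED_HOSTS` is a hypothetical allow-list for illustration; adapt it to your environment.

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.partner.example.com"}  # hypothetical allow-list

def is_safe_url(url):
    """Return True only for http(s) URLs to allow-listed hosts that
    resolve exclusively to public addresses (SSRF guard sketch)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host is None or host not in ALLOWED_HOSTS:
        return False
    try:
        # Reject hosts resolving to loopback/private/link-local ranges
        for info in socket.getaddrinfo(host, None):
            ip = ipaddress.ip_address(info[4][0])
            if ip.is_private or ip.is_loopback or ip.is_link_local:
                return False
    except socket.gaierror:
        return False
    return True
```

Note that resolving separately from the actual request leaves a DNS-rebinding window; production guards should pin the resolved IP for the outbound connection.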
---
## Authentication Checklist
- [ ] Strong password policy
- [ ] Account lockout
- [ ] Secure password reset
- [ ] Session management
- [ ] Token expiration
- [ ] Logout invalidation
---
## API Security Checklist
- [ ] Authentication required
- [ ] Authorization per endpoint
- [ ] Input validation
- [ ] Rate limiting
- [ ] Output sanitization
- [ ] Error handling
---
## Data Protection Checklist
- [ ] Encryption at rest
- [ ] Encryption in transit
- [ ] Key management
- [ ] Data minimization
- [ ] Secure deletion
---
## Security Headers
| Header | Purpose |
|--------|---------|
| **Content-Security-Policy** | Mitigates XSS |
| **X-Content-Type-Options** | Prevents MIME sniffing |
| **X-Frame-Options** | Prevents clickjacking |
| **Strict-Transport-Security** | Forces HTTPS |
| **Referrer-Policy** | Controls referrer leakage |
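A baseline for these headers can be kept as data and merged into each response without clobbering values the application set explicitly. The values below are common starting points, not drop-in policy; in particular, tune the CSP to your actual assets.

```python
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "Referrer-Policy": "no-referrer",
}

def apply_security_headers(response_headers):
    """Merge the baseline in, keeping any header the app already set."""
    merged = dict(SECURITY_HEADERS)
    merged.update(response_headers)
    return merged
```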
---
## Quick Audit Commands
| Check | What to Look For |
|-------|------------------|
| Secrets in code | password, api_key, secret |
| Dangerous patterns | eval, innerHTML, SQL concat |
| Dependency issues | npm audit, snyk |
---
> **Usage:** Copy relevant checklists into your PLAN.md or security report.

#!/usr/bin/env python3
"""
Skill: vulnerability-scanner
Script: security_scan.py
Purpose: Validate that security principles from SKILL.md are applied correctly
Usage: python security_scan.py <project_path> [--scan-type all|deps|secrets|patterns|config]
Output: JSON with validation findings
This script verifies:
1. Dependencies - Supply chain security (OWASP A03)
2. Secrets - No hardcoded credentials (OWASP A04)
3. Code Patterns - Dangerous patterns identified (OWASP A05)
4. Configuration - Security settings validated (OWASP A02)
"""
import subprocess
import json
import os
import sys
import re
import argparse
from pathlib import Path
from typing import Dict, List, Any
from datetime import datetime
# Fix Windows console encoding for Unicode output
try:
sys.stdout.reconfigure(encoding='utf-8', errors='replace')
sys.stderr.reconfigure(encoding='utf-8', errors='replace')
except AttributeError:
pass # Python < 3.7
# ============================================================================
# CONFIGURATION
# ============================================================================
SECRET_PATTERNS = [
# API Keys & Tokens
(r'api[_-]?key\s*[=:]\s*["\'][^"\']{10,}["\']', "API Key", "high"),
(r'token\s*[=:]\s*["\'][^"\']{10,}["\']', "Token", "high"),
(r'bearer\s+[a-zA-Z0-9\-_.]+', "Bearer Token", "critical"),
# Cloud Credentials
(r'AKIA[0-9A-Z]{16}', "AWS Access Key", "critical"),
(r'aws[_-]?secret[_-]?access[_-]?key\s*[=:]\s*["\'][^"\']+["\']', "AWS Secret", "critical"),
(r'AZURE[_-]?[A-Z_]+\s*[=:]\s*["\'][^"\']+["\']', "Azure Credential", "critical"),
(r'GOOGLE[_-]?[A-Z_]+\s*[=:]\s*["\'][^"\']+["\']', "GCP Credential", "critical"),
# Database & Connections
(r'password\s*[=:]\s*["\'][^"\']{4,}["\']', "Password", "high"),
(r'(mongodb|postgres|mysql|redis):\/\/[^\s"\']+', "Database Connection String", "critical"),
# Private Keys
(r'-----BEGIN\s+(RSA|PRIVATE|EC)\s+KEY-----', "Private Key", "critical"),
(r'ssh-rsa\s+[A-Za-z0-9+/]+', "SSH Key", "critical"),
# JWT
(r'eyJ[A-Za-z0-9-_]+\.eyJ[A-Za-z0-9-_]+\.[A-Za-z0-9-_]+', "JWT Token", "high"),
]
DANGEROUS_PATTERNS = [
# Injection risks
(r'eval\s*\(', "eval() usage", "critical", "Code Injection risk"),
(r'exec\s*\(', "exec() usage", "critical", "Code Injection risk"),
(r'new\s+Function\s*\(', "Function constructor", "high", "Code Injection risk"),
(r'child_process\.exec\s*\(', "child_process.exec", "high", "Command Injection risk"),
(r'subprocess\.call\s*\([^)]*shell\s*=\s*True', "subprocess with shell=True", "high", "Command Injection risk"),
# XSS risks
(r'dangerouslySetInnerHTML', "dangerouslySetInnerHTML", "high", "XSS risk"),
(r'\.innerHTML\s*=', "innerHTML assignment", "medium", "XSS risk"),
(r'document\.write\s*\(', "document.write", "medium", "XSS risk"),
# SQL Injection indicators
(r'["\'][^"\']*\+\s*[a-zA-Z_]+\s*\+\s*["\'].*(?:SELECT|INSERT|UPDATE|DELETE)', "SQL String Concat", "critical", "SQL Injection risk"),
(r'f"[^"]*(?:SELECT|INSERT|UPDATE|DELETE)[^"]*\{', "SQL f-string", "critical", "SQL Injection risk"),
# Insecure configurations
(r'verify\s*=\s*False', "SSL Verify Disabled", "high", "MITM risk"),
(r'--insecure', "Insecure flag", "medium", "Security disabled"),
(r'disable[_-]?ssl', "SSL Disabled", "high", "MITM risk"),
# Unsafe deserialization
(r'pickle\.loads?\s*\(', "pickle usage", "high", "Deserialization risk"),
(r'yaml\.load\s*\([^)]*\)(?!\s*,\s*Loader)', "Unsafe YAML load", "high", "Deserialization risk"),
]
SKIP_DIRS = {'node_modules', '.git', 'dist', 'build', '__pycache__', '.venv', 'venv', '.next'}
CODE_EXTENSIONS = {'.js', '.ts', '.jsx', '.tsx', '.py', '.go', '.java', '.rb', '.php'}
CONFIG_EXTENSIONS = {'.json', '.yaml', '.yml', '.toml', '.env', '.env.local', '.env.development'}
# ============================================================================
# SCANNING FUNCTIONS
# ============================================================================
def scan_dependencies(project_path: str) -> Dict[str, Any]:
"""
Validate supply chain security (OWASP A03).
Checks: npm audit, lock file presence, dependency age.
"""
results = {"tool": "dependency_scanner", "findings": [], "status": "[OK] Secure"}
# Check for lock files
    lock_files = {
        "npm": ["package-lock.json", "npm-shrinkwrap.json"],
        "yarn": ["yarn.lock"],
        "pnpm": ["pnpm-lock.yaml"],
        # A bare requirements.txt does not pin transitive deps,
        # so only real lock files count for pip
        "pip": ["Pipfile.lock", "poetry.lock"],
    }
found_locks = []
missing_locks = []
for manager, files in lock_files.items():
pkg_file = "package.json" if manager in ["npm", "yarn", "pnpm"] else "setup.py"
pkg_path = Path(project_path) / pkg_file
if pkg_path.exists() or (manager == "pip" and (Path(project_path) / "requirements.txt").exists()):
has_lock = any((Path(project_path) / f).exists() for f in files)
if has_lock:
found_locks.append(manager)
else:
missing_locks.append(manager)
results["findings"].append({
"type": "Missing Lock File",
"severity": "high",
"message": f"{manager}: No lock file found. Supply chain integrity at risk."
})
# Run npm audit if applicable
if (Path(project_path) / "package.json").exists():
try:
result = subprocess.run(
["npm", "audit", "--json"],
cwd=project_path,
capture_output=True,
text=True,
timeout=60
)
try:
audit_data = json.loads(result.stdout)
vulnerabilities = audit_data.get("vulnerabilities", {})
severity_count = {"critical": 0, "high": 0, "moderate": 0, "low": 0}
for vuln in vulnerabilities.values():
sev = vuln.get("severity", "low").lower()
if sev in severity_count:
severity_count[sev] += 1
if severity_count["critical"] > 0:
results["status"] = "[!!] Critical vulnerabilities"
results["findings"].append({
"type": "npm audit",
"severity": "critical",
"message": f"{severity_count['critical']} critical vulnerabilities in dependencies"
})
elif severity_count["high"] > 0:
results["status"] = "[!] High vulnerabilities"
results["findings"].append({
"type": "npm audit",
"severity": "high",
"message": f"{severity_count['high']} high severity vulnerabilities"
})
results["npm_audit"] = severity_count
except json.JSONDecodeError:
pass
except (FileNotFoundError, subprocess.TimeoutExpired):
pass
if not results["findings"]:
results["status"] = "[OK] Supply chain checks passed"
return results
def scan_secrets(project_path: str) -> Dict[str, Any]:
"""
Validate no hardcoded secrets (OWASP A04).
Checks: API keys, tokens, passwords, cloud credentials.
"""
results = {
"tool": "secret_scanner",
"findings": [],
"status": "[OK] No secrets detected",
"scanned_files": 0,
"by_severity": {"critical": 0, "high": 0, "medium": 0}
}
for root, dirs, files in os.walk(project_path):
dirs[:] = [d for d in dirs if d not in SKIP_DIRS]
for file in files:
ext = Path(file).suffix.lower()
if ext not in CODE_EXTENSIONS and ext not in CONFIG_EXTENSIONS:
continue
filepath = Path(root) / file
results["scanned_files"] += 1
try:
with open(filepath, 'r', encoding='utf-8', errors='ignore') as f:
content = f.read()
for pattern, secret_type, severity in SECRET_PATTERNS:
matches = re.findall(pattern, content, re.IGNORECASE)
if matches:
results["findings"].append({
"file": str(filepath.relative_to(project_path)),
"type": secret_type,
"severity": severity,
"count": len(matches)
})
results["by_severity"][severity] += len(matches)
except Exception:
pass
if results["by_severity"]["critical"] > 0:
results["status"] = "[!!] CRITICAL: Secrets exposed!"
elif results["by_severity"]["high"] > 0:
results["status"] = "[!] HIGH: Secrets found"
elif sum(results["by_severity"].values()) > 0:
results["status"] = "[?] Potential secrets detected"
# Limit findings for output
results["findings"] = results["findings"][:15]
return results
def scan_code_patterns(project_path: str) -> Dict[str, Any]:
"""
Validate dangerous code patterns (OWASP A05).
Checks: Injection risks, XSS, unsafe deserialization.
"""
results = {
"tool": "pattern_scanner",
"findings": [],
"status": "[OK] No dangerous patterns",
"scanned_files": 0,
"by_category": {}
}
for root, dirs, files in os.walk(project_path):
dirs[:] = [d for d in dirs if d not in SKIP_DIRS]
for file in files:
ext = Path(file).suffix.lower()
if ext not in CODE_EXTENSIONS:
continue
filepath = Path(root) / file
results["scanned_files"] += 1
try:
with open(filepath, 'r', encoding='utf-8', errors='ignore') as f:
lines = f.readlines()
for line_num, line in enumerate(lines, 1):
for pattern, name, severity, category in DANGEROUS_PATTERNS:
if re.search(pattern, line, re.IGNORECASE):
results["findings"].append({
"file": str(filepath.relative_to(project_path)),
"line": line_num,
"pattern": name,
"severity": severity,
"category": category,
"snippet": line.strip()[:80]
})
results["by_category"][category] = results["by_category"].get(category, 0) + 1
except Exception:
pass
critical_count = sum(1 for f in results["findings"] if f["severity"] == "critical")
high_count = sum(1 for f in results["findings"] if f["severity"] == "high")
if critical_count > 0:
results["status"] = f"[!!] CRITICAL: {critical_count} dangerous patterns"
elif high_count > 0:
results["status"] = f"[!] HIGH: {high_count} risky patterns"
elif results["findings"]:
results["status"] = "[?] Some patterns need review"
# Limit findings
results["findings"] = results["findings"][:20]
return results
def scan_configuration(project_path: str) -> Dict[str, Any]:
"""
Validate security configuration (OWASP A02).
Checks: Security headers, CORS, debug modes.
"""
results = {
"tool": "config_scanner",
"findings": [],
"status": "[OK] Configuration secure",
"checks": {}
}
# Check common config files for issues
config_issues = [
(r'"DEBUG"\s*:\s*true', "Debug mode enabled", "high"),
(r'debug\s*=\s*True', "Debug mode enabled", "high"),
(r'NODE_ENV.*development', "Development mode in config", "medium"),
(r'"CORS_ALLOW_ALL".*true', "CORS allow all origins", "high"),
(r'"Access-Control-Allow-Origin".*\*', "CORS wildcard", "high"),
(r'allowCredentials.*true.*origin.*\*', "Dangerous CORS combo", "critical"),
]
for root, dirs, files in os.walk(project_path):
dirs[:] = [d for d in dirs if d not in SKIP_DIRS]
for file in files:
ext = Path(file).suffix.lower()
if ext not in CONFIG_EXTENSIONS and file not in ['next.config.js', 'webpack.config.js', '.eslintrc.js']:
continue
filepath = Path(root) / file
try:
with open(filepath, 'r', encoding='utf-8', errors='ignore') as f:
content = f.read()
for pattern, issue, severity in config_issues:
if re.search(pattern, content, re.IGNORECASE):
results["findings"].append({
"file": str(filepath.relative_to(project_path)),
"issue": issue,
"severity": severity
})
except Exception:
pass
# Check for security header configurations
header_files = ["next.config.js", "next.config.mjs", "middleware.ts", "nginx.conf"]
for hf in header_files:
hf_path = Path(project_path) / hf
if hf_path.exists():
results["checks"]["security_headers_config"] = True
break
else:
results["checks"]["security_headers_config"] = False
results["findings"].append({
"issue": "No security headers configuration found",
"severity": "medium",
"recommendation": "Configure CSP, HSTS, X-Frame-Options headers"
})
if any(f["severity"] == "critical" for f in results["findings"]):
results["status"] = "[!!] CRITICAL: Configuration issues"
elif any(f["severity"] == "high" for f in results["findings"]):
results["status"] = "[!] HIGH: Configuration review needed"
elif results["findings"]:
results["status"] = "[?] Minor configuration issues"
return results
# ============================================================================
# MAIN
# ============================================================================
def run_full_scan(project_path: str, scan_type: str = "all") -> Dict[str, Any]:
"""Execute security validation scans."""
report = {
"project": project_path,
"timestamp": datetime.now().isoformat(),
"scan_type": scan_type,
"scans": {},
"summary": {
"total_findings": 0,
"critical": 0,
"high": 0,
"overall_status": "[OK] SECURE"
}
}
scanners = {
"deps": ("dependencies", scan_dependencies),
"secrets": ("secrets", scan_secrets),
"patterns": ("code_patterns", scan_code_patterns),
"config": ("configuration", scan_configuration),
}
    for key, (name, scanner) in scanners.items():
        if scan_type in ("all", key):
result = scanner(project_path)
report["scans"][name] = result
findings_count = len(result.get("findings", []))
report["summary"]["total_findings"] += findings_count
for finding in result.get("findings", []):
sev = finding.get("severity", "low")
if sev == "critical":
report["summary"]["critical"] += 1
elif sev == "high":
report["summary"]["high"] += 1
# Determine overall status
if report["summary"]["critical"] > 0:
report["summary"]["overall_status"] = "[!!] CRITICAL ISSUES FOUND"
elif report["summary"]["high"] > 0:
report["summary"]["overall_status"] = "[!] HIGH RISK ISSUES"
elif report["summary"]["total_findings"] > 0:
report["summary"]["overall_status"] = "[?] REVIEW RECOMMENDED"
return report
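# Illustrative shape of the report returned by run_full_scan (values are
# examples only, not real output):
# {
#   "project": ".",
#   "timestamp": "2026-03-26T11:37:39",
#   "scan_type": "all",
#   "scans": {"dependencies": {...}, "secrets": {...}, ...},
#   "summary": {"total_findings": 3, "critical": 0, "high": 1,
#               "overall_status": "[!] HIGH RISK ISSUES"}
# }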
def main():
parser = argparse.ArgumentParser(
description="Validate security principles from vulnerability-scanner skill"
)
parser.add_argument("project_path", nargs="?", default=".", help="Project directory to scan")
parser.add_argument("--scan-type", choices=["all", "deps", "secrets", "patterns", "config"],
default="all", help="Type of scan to run")
parser.add_argument("--output", choices=["json", "summary"], default="json",
help="Output format")
args = parser.parse_args()
if not os.path.isdir(args.project_path):
print(json.dumps({"error": f"Directory not found: {args.project_path}"}))
sys.exit(1)
result = run_full_scan(args.project_path, args.scan_type)
if args.output == "summary":
print(f"\n{'='*60}")
print(f"Security Scan: {result['project']}")
print(f"{'='*60}")
print(f"Status: {result['summary']['overall_status']}")
print(f"Total Findings: {result['summary']['total_findings']}")
print(f" Critical: {result['summary']['critical']}")
print(f" High: {result['summary']['high']}")
print(f"{'='*60}\n")
for scan_name, scan_result in result['scans'].items():
print(f"\n{scan_name.upper()}: {scan_result['status']}")
            # Show at most the first five findings per scan to keep output readable
            for finding in scan_result.get('findings', [])[:5]:
print(f" - {finding}")
else:
print(json.dumps(result, indent=2))
if __name__ == "__main__":
main()
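# Example invocations (the filename security_scan.py is illustrative, not
# part of the skill -- save the script under any name you prefer):
#
#   python security_scan.py . --output summary
#   python security_scan.py ./my-app --scan-type secrets
#   python security_scan.py ./my-app --scan-type config --output json
#
# The default JSON output can be piped into standard tooling, for instance:
#   python security_scan.py . | python -c "import json,sys; print(json.load(sys.stdin)['summary'])"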