Auto-fix all RuboCop offenses with ChatGPT / OpenAI / a local LLM
Install
=======

```bash
gem install rubofix
```
Usage
=====

- Get an OpenAI API key and export it as `RUBOFIX_API_KEY` (for non-OpenAI endpoints see "Options" below); a minimal export sketch follows this list
- Break something for RuboCop (remove a `# rubocop:disable` comment or remove an `Enabled: false` entry from `.rubocop.yml`)
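On a typical Unix shell that could look like the following; the key value is a placeholder, and `bundle exec rubocop` is just the usual way to list the offenses rubofix will see:

```bash
# make the key available to rubofix in the current shell (placeholder value)
export RUBOFIX_API_KEY=sk-your-key-here

# after re-enabling a cop in .rubocop.yml, list the current offenses
bundle exec rubocop
```

Then let rubofix fix a limited number of offenses and review the result: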
```bash
MAX=2 rubofix
# Fixing MAX=2 of 35 warnings with MODEL=gpt-4o-mini ...
# Fixing Rakefile:89:19: W: Lint/AssignmentInCondition: Use == if you meant to do a comparison ... with:
#   unless (template = args[:template])
# Fixing Rakefile:103:22: W: Lint/AssignmentInCondition: Use == if you meant to do a comparison ... with:
#   next unless (spec = e["spec"])

git diff
# Rakefile
# -unless template = args[:template]
# +unless (template = args[:template])

git commit -am 'fixing rubocop warnings'
```
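Re-running RuboCop afterwards is an easy sanity check that the reported offenses are actually gone (a plain RuboCop invocation, not a rubofix feature):

```bash
# confirm the fixed cop no longer reports offenses in the touched file
bundle exec rubocop --only Lint/AssignmentInCondition Rakefile
```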
Options
=======

- only fix the given files: `rubofix file1.rb file2.rb`
- `DEBUG=1` show prompt and answers
- `CONTEXT=10` feed 10 lines of context around the offense to the model
- `RUBOFIX_URL=` defaults to `https://api.openai.com`
- `RUBOFIX_API_KEY=` api key to authenticate with (see the combined example below)
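The environment options combine with a file list in one invocation; a minimal sketch, with a hypothetical file name:

```bash
# verbose run: print prompts and answers, send 10 lines of context per offense,
# stop after 2 fixes, and only touch the given file
DEBUG=1 CONTEXT=10 MAX=2 rubofix app/models/user.rb
```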
Development
===========

- `rake` to run unit tests
- `rake integration` to run integration tests, they need an api key set
- `rake bump:<major|minor|patch>` to create a new version
- `rake release` to release a new version
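Put together, cutting a patch release might look like this (assuming `RUBOFIX_API_KEY` is exported so the integration tests can run):

```bash
# run unit and integration tests, then bump the version and publish it
rake && rake integration && rake bump:patch && rake release
```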
TODO
====

- custom api endpoints
- local LLM support
- read the RuboCop todo file (`.rubocop_todo.yml`) and autofix everything in there
- retry on api failures
- parallel execution
- colored output
- try different temperatures to get better results
- try sending the output back to the LLM with a "check this makes sense" prompt to fix bugs
- produce diffs and then apply them, so multiple offenses in one file can be fixed without line numbers shifting (see the sketch after this list)
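A rough sketch of that last idea, reusing the `Rakefile` example from Usage; `sed` stands in for the model's suggested replacement, and none of this is implemented in rubofix yet:

```bash
# produce the fix as a unified diff instead of editing the file in place
sed 's/unless template = args\[:template\]/unless (template = args[:template])/' Rakefile > /tmp/Rakefile.fixed
diff -u Rakefile /tmp/Rakefile.fixed > /tmp/fix.patch

# apply the collected diffs later; patch locates hunks by surrounding context,
# so earlier fixes in the same file do not invalidate the remaining ones
patch Rakefile < /tmp/fix.patch
```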
Author
======

Michael Grosser
michael@grosser.it
License: MIT