for file in $(find . -type f -iname '*.json'); do
    sed -i 's/"verified":"10"/"verified":"11"/' "$file";
done
Find can actually do the sed itself if you don't want to use a subshell and a shell loop.
find . -type f -iname '*.json' -exec sed -i 's/"verified":"10"/"verified":"11"/' '{}' ';'
Or use -print0 | xargs -0 sed -i to get a single sed process working across multiple files. Add a -P 8 to xargs to get 8 parallel processes.
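Spelled out as one pipeline (a sketch; assumes GNU find, xargs, and sed, plus xargs' -r flag so nothing runs when no files match):

```shell
# -print0 / -0 keep filenames with spaces intact;
# -P 8 runs up to 8 sed processes in parallel, each given a batch of files.
find . -type f -iname '*.json' -print0 \
  | xargs -0 -r -P 8 sed -i 's/"verified":"10"/"verified":"11"/'
```
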
Today I learned that xargs supports parallelization natively! That's gonna make some of my scripts much simpler.
find /path -name '*.json' -exec sed -i 's/from/to/g' {} \; -print
This, unless you want to mess with jq
Yeah, jq doesn't edit files in place, right? You'd have to use temp files or something? jq is so good at handling JSON, I wish there was a way of using it to edit files.
You really want to do it that way anyway: process the files to a new set of files. That way, when you screw it up, going back is just deleting the new files, fixing, and rerunning.
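For the record, the usual jq idiom is exactly that: write to a new file, then move it over the original (a sketch; assumes the module.json layout quoted in this thread):

```shell
# jq writes to stdout, so go through a temp file and replace the original
jq '.compatibility.minimum = "11" | .compatibility.verified = "11"' module.json > module.json.tmp \
  && mv module.json.tmp module.json
```

If you have moreutils installed, sponge does the temp-file dance for you: `jq '...' module.json | sponge module.json`.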
I also agree sed and some regex are your best bet.
I recommend working out the regex with regex101.com; I'm down to help you if you post some examples.
Additionally, there is a CLI tool, I think jq or something like that, for processing JSON on the command line.
I have Foundry too; let me see if I can find the files that need to be updated.
Here's the GitHub link to one of the batches of files I'm working with.
This line, "compatibility":{"minimum":"9","verified":"10"}, needs to say "11" in all the files.
I have made a Python script and ran it on a clone of your git repo to confirm it works. Simply run it at the root directory of wherever the files are; it will walk through, find each module.json, and do the replace.
#!/usr/bin/env python3
import re
import os
import fileinput

pattern = re.compile(r'(?P<pre>.+)\"compatibility\":{\"minimum\":\"(?P<min>\d+)\",\"verified\":\"(?P<ver>\d+)\"},(?P<post>.+)')

def make11(match):
    if match.groupdict().get('min', None) and match.groupdict().get('ver', None):
        return f"{match.groupdict()['pre']}\"compatibility\":{{\"minimum\":\"11\",\"verified\":\"11\"}},{match.groupdict()['post']}"
    return match.group(0)  # leave the line untouched if the groups didn't capture

for root, dirs, files in os.walk("."):
    for file in files:
        if file == "module.json":
            for line in fileinput.input(f"{root}/{file}", inplace=True):
                print(re.sub(pattern, make11, line), end="")  # end="" because line already carries its newline
edit: lemmy is fucking with the formatting and removing the fucking regex group names, which will bork it. I've tried fixing it, dm me if you want me to send a downloadable link to the script
If using Python, why not just use the json module? Simpler and easier to maintain without all those regex.
Still +1 on sed, if one is on Linux.
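A sketch of that json-module approach (the helper name is hypothetical; it assumes the compatibility block quoted in this thread, and accepts that json.dump will reformat the files):

```python
#!/usr/bin/env python3
import json
import os

def bump_compatibility(top="."):
    """Set compatibility minimum/verified to "11" in every module.json under top."""
    for root, _dirs, files in os.walk(top):
        if "module.json" not in files:
            continue
        path = os.path.join(root, "module.json")
        with open(path) as f:
            data = json.load(f)
        compat = data.get("compatibility")
        # only rewrite files that actually have the field we're updating
        if isinstance(compat, dict) and "verified" in compat:
            compat["minimum"] = "11"
            compat["verified"] = "11"
            with open(path, "w") as f:
                json.dump(data, f, indent=2)

if __name__ == "__main__":
    bump_compatibility()
```

Parsing the JSON properly sidesteps the whitespace and escaping pitfalls of the regex version, at the cost of losing the original formatting.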