Hi, I like this project idea and it could become a nice little collection of chaos monkey programs. I think that to be scalable, it needs to be better structured. I was thinking of something like:
Adding individual READMEs in the different directories would make the main README less bloated. The main README could explain why the project exists (what problem it solves), give warnings, and describe how to use the tools, e.g. do you run all the programs at the same time? Do we create a random execution script which would run divide-by-zero or segfault etc. at random? You mention creating releases, which is very specific. I think something like this could be interesting:
# Chaos Marmosets
This project contains small programs that behave badly. They can be used for
[chaos engineering](https://en.wikipedia.org/wiki/Chaos_engineering) to test system behavior and
infrastructure setup for those cases.
## Prerequisites

### Linux

- GCC
- Make
- Python 3
### macOS

- Xcode Command Line Tools
- Python 3
### Windows

- MinGW or Visual Studio
- Python 3
## Installation

### Linux and macOS

1. Clone the repository:
```sh
git clone https://github.com/yourusername/chaos-marmosets.git
cd chaos-marmosets
```
2. Build the C programs:
```sh
make
```
3. (Optional) Create a release tarball:
```sh
make dist
```

### Windows
1. Clone the repository:
```sh
git clone https://github.com/bdrung/chaos-marmosets.git
cd chaos-marmosets
```
etc.
For execution, we should be able to specify what we want to run: one program, two programs, or all at the same time. I am thinking of something like this:
```sh
#!/bin/bash
# Show how to use the program
usage() {
    echo "Usage: $0 --run=<program1>[,<program2>,<program3>,...|all]"
    exit 1
}

# Check if --run is provided, else show how to use
if [[ $# -ne 1 || $1 != --run=* ]]; then
    usage
fi

# Parse --run
programs=${1#--run=}

# Execute the programs from ./bin
run_program() {
    case $1 in
        divide-by-zero)
            ./bin/divide-by-zero &
            ;;
        divide-by-zero-python)
            python3 src/divide-by-zero/main.py &
            ;;
        leak-memory)
            ./bin/leak-memory &
            ;;
        seg-fault)
            ./bin/seg-fault &
            ;;
        *)
            echo "Unknown program: $1"
            usage
            ;;
    esac
}

# Run all programs based on the flag
if [[ $programs == "all" ]]; then
    run_program divide-by-zero
    run_program divide-by-zero-python
    run_program leak-memory
    run_program seg-fault
else
    IFS=',' read -r -a program_array <<< "$programs"
    for program in "${program_array[@]}"; do
        run_program "$program"
    done
fi

# Wait until complete
wait
```
This script is just an example and would need some improvement. The same can easily be done in PowerShell for Windows users.
That covers the structure. Finally, if you're open to collaborating on this project, I would be interested in adding new features like CPU exhaustion, disk I/O stress, network floods and packet-loss scenarios, descriptor leaks, etc.
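As an illustration of one of those ideas, a CPU-exhaustion program could start out as little more than one busy loop per core. This is just a sketch; it assumes `nproc` and `timeout` from GNU coreutils are available:

```sh
#!/bin/bash
# cpu-exhaustion: keep every core busy for a configurable number of seconds.
duration=${1:-30}
for ((i = 0; i < $(nproc); i++)); do
    # Each busy loop is killed by timeout once the duration expires.
    timeout "$duration" sh -c 'while :; do :; done' &
done
wait
```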
Let me know what you think!