fix: some cleanup

Asger Gitz-Johansen 2024-08-26 20:53:28 +02:00
parent 484efc200a
commit fd0eb59aae
5 changed files with 20 additions and 30 deletions

TODO.md
View File

@@ -8,7 +8,7 @@
- [x] Fourth things fourth, implement a prototype that reads a space-separated file and populates a struct.
- [x] Fifth things fifth, implement a prototype that spawns a new thread that executes a shell command.
- [x] Sixth things sixth, daemonize it!
- [ ] Seventh things seventh, package the sucker (arch, debian, alpine, docker)
- [x] Seventh things seventh, package the sucker (arch, debian, alpine, docker)
- [x] archlinux
- https://wiki.archlinux.org/title/Creating_packages
- [x] debian
@@ -16,12 +16,12 @@
- just use docker.
- [-] ~~alpine~~ later.
- [-] ~~docker~~ later.
- [ ] Eighth things eighth, try it out! - maybe even write the python webhook extension.
- [ ] Port this document to gitea issue tracking
- [x] Eighth things eighth, try it out! - maybe even write the python webhook extension.
- [x] Port this document to gitea issue tracking
- [x] enable PATH-able programs and argv in the command section
- [x] custom environment variable passing. Something like `-e MY_TOKEN` ala docker-style (see the sketch after this list)
- [x] address sanitizers please.
- [ ] Ninth things ninth, fix bugs, see below
- [ ] Ninth things ninth, fix bugs, see https://git.gtz.dk/agj/sci/projects/1
- [ ] Tenth things tenth, write manpages, choose license
- [ ] Eleventh things eleventh, polish
- [ ] Twelfth things last, release!
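The `-e MY_TOKEN` item above is docker-style pass-through: the daemon looks the variable up in its own environment and hands the pipeline a ready-made `KEY=value` string. A minimal sketch of that mechanism - the function name and error handling here are illustrative only, not sci's actual API:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: resolve a docker-style "-e MY_TOKEN" request by copying the value
 * from the daemon's own environment into a heap-allocated "KEY=value" string,
 * ready to be appended to the child's environment. Illustrative only. */
char* passthrough_env(const char* key) {
    const char* value = getenv(key);
    if(value == NULL) {
        fprintf(stderr, "warning: %s is not set, not forwarding it\n", key);
        return NULL;
    }
    size_t n = strlen(key) + 1 + strlen(value) + 1; /* KEY '=' value '\0' */
    char* kv = malloc(n);
    if(kv != NULL)
        snprintf(kv, n, "%s=%s", key, value);
    return kv; /* caller frees */
}
```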
@@ -71,25 +71,6 @@ alpine linux is using OpenRC (cool), which complicates things a little bit, but
generally really well written. Otherwise, I am sure that both wiki.gentoo and wiki.archlinux have great pages too
docker is super easy, just make a dockerfile - only concern is the trigger files.
#### Bugs / Missing Features
- [x] command output is being inherited. It should be piped into some random log-file
- [ ] pretty sure that `ctrl+c` / SIGINT is not graceful yet.
- [ ] missing license (heavily considering GPLv3)
- [ ] pipeline scripts should be executed in a unique `/tmp` dir
- [ ] Some way for third parties to see which pipelines are currently running and their status.
- Could be as simple as looking in the logs directory.
- How to mark a run as failed / success / warn?
- Third parties may need to extract artifacts.
or maybe the scripts themselves would upload artifacts?
- [ ] I am deliberately not using `Restart=on-failure` in the `scid.service` file because we are using `Type=exec`
and not `Type=notify` (yet) - which would require an `sd_notify` call of `READY=1` (see `man systemd.service` and the sketch after this list)
- [ ] Custom environment variables passed to the pipelines on invocation should be possible.
- [ ] Listener threads should be killed and restarted (worker pool should just chug along) when the pipeline config file
has changed during runtime. Should be possible to disable with `--no-hot-reload-config` - i.e. on by default. See the inotify sketch below.
- [x] ~~`docker stop` is very slow. I am probably not handling signals properly yet.~~ native docker is abandoned
- [x] It seems that `-v 4` is segfaulting when running release builds, maybe the logger just can't find the source file?
Nope. I just wrote some bad code (inverted NULL check).
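On the `Type=notify` point above: the only thing the daemon has to do to make `Type=notify` (and therefore `Restart=on-failure`) viable is report readiness once its listeners are up. A minimal sketch, assuming the libsystemd dependency is acceptable - `sd_notify` itself is the real libsystemd call, the wrapper and its placement are illustrative:

```c
#include <stdio.h>
#include <systemd/sd-daemon.h>  /* link with -lsystemd */

/* Sketch: call once the listener threads and worker pool are up, so a
 * Type=notify unit knows the daemon is ready. */
void notify_ready(void) {
    int r = sd_notify(0, "READY=1"); /* 0 = keep NOTIFY_SOCKET in the env */
    if(r < 0)
        fprintf(stderr, "sd_notify failed: %d\n", r);
    /* r == 0 just means we are not running under systemd; that is fine. */
}
```

With that call in place the unit could switch from `Type=exec` to `Type=notify` and safely add `Restart=on-failure`.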
### Note Regarding `inotify` usage
From the manpage:
```

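For the hot-reload item above (and the `inotify` note this section quotes), a single watch on the pipeline config file is enough to learn that the listener threads should be restarted. A minimal sketch - the path handling and return convention are illustrative placeholders, only the inotify calls are the real kernel API:

```c
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

/* Sketch: block until the pipeline config file is modified or replaced, so the
 * caller can tear the listener threads down and start them again. */
int wait_for_config_change(const char* config_path) {
    int fd = inotify_init1(0);
    if(fd == -1) {
        perror("inotify_init1");
        return -1;
    }
    /* IN_CLOSE_WRITE covers in-place edits; editors that replace the file show
     * up as IN_MOVE_SELF / IN_DELETE_SELF, after which the watch must be
     * re-added on the new inode. */
    if(inotify_add_watch(fd, config_path,
                         IN_CLOSE_WRITE | IN_MOVE_SELF | IN_DELETE_SELF) == -1) {
        perror("inotify_add_watch");
        close(fd);
        return -1;
    }
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof buf); /* blocks until an event arrives */
    close(fd);
    return n > 0 ? 0 : -1;
}
```

The `--no-hot-reload-config` flag would then simply skip spawning the thread that sits in this call.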
View File

@@ -41,6 +41,7 @@ void per_line(const char* file, line_handler handler);
char* join(const char* a, const char* b);
char* join3(const char* a, const char* b, const char* c);
char* join4(const char* a, const char* b, const char* c, const char* d);
const char* skip_arg(const char* cp);
char* skip_spaces(const char* str);
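The newly declared `skip_arg` is the natural counterpart to `skip_spaces`: one steps over a token, the other over the whitespace between tokens. A hedged sketch of how the pair could walk a space-separated command line - only the two prototypes come from this header, the counting loop is illustrative:

```c
const char* skip_arg(const char* cp);  /* from this header: step over one token */
char* skip_spaces(const char* str);    /* from this header: step over whitespace */

/* Sketch: count the whitespace-separated tokens in a command line, e.g. as a
 * first pass before allocating an argv array. */
int count_args(const char* cmdline) {
    int count = 0;
    const char* cp = skip_spaces(cmdline);
    while(*cp != '\0') {
        count++;
        cp = skip_spaces(skip_arg(cp));
    }
    return count;
}
```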

View File

@@ -8,12 +8,13 @@
#include <linux/limits.h>
#include <spawn.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>
#include <uuid/uuid.h>
const char* log_dir = "./"; // NOTE: must end with a /
const char* log_dir = ".";
const strlist_node* shared_environment = NULL;
void set_shared_environment(const strlist_node* root) {
@@ -42,15 +43,13 @@ optional_int open_logfile(const char* const pipeline_id) {
optional_int result;
result.has_value = false;
result.value = 0;
char* log_file = join(pipeline_id, ".log");
char* log_filepath = join(log_dir, log_file);
char* log_filepath = join4(log_dir, "/", pipeline_id, ".log");
int fd = open(log_filepath, O_WRONLY | O_CREAT | O_TRUNC, 0644);
if (fd != -1) {
result.has_value = true;
result.value = fd;
} else
perror("open");
free(log_file);
free(log_filepath);
return result;
}
@@ -126,7 +125,8 @@ void executor(void* data) {
log_info("{%s} (%s) exited with status %d", pipeline_id, e->name, status);
char buf[32];
sprintf(buf, "exited with status %d", status);
write(fd.value, buf, strnlen(buf, 32));
if(write(fd.value, buf, strnlen(buf, 32)) == -1)
perror("write");
end:
argv_free(argv);
close(fd.value);
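The new check catches a failed `write`, but a short write (unlikely for a 32-byte buffer on a regular file, yet permitted by POSIX) would still pass silently. A hedged sketch of the usual write-all loop, not part of this commit:

```c
#include <errno.h>
#include <unistd.h>

/* Sketch: retry until the whole buffer is written or a real error occurs. */
int write_all(int fd, const char* buf, size_t len) {
    size_t done = 0;
    while(done < len) {
        ssize_t n = write(fd, buf + done, len - done);
        if(n == -1) {
            if(errno == EINTR)
                continue; /* interrupted by a signal: just retry */
            return -1;
        }
        done += (size_t)n;
    }
    return 0;
}
```

The call site would then read `if(write_all(fd.value, buf, strnlen(buf, 32)) == -1) perror("write");` with the same behaviour on genuine errors.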

View File

@@ -48,8 +48,6 @@ and each pipeline will have an associated pipeline trigger file that can be
By default, pipeline triggers are placed in /tmp/sci but this can be overridden with the
.OP -x.
.SH EXAMPLES
A simple example configuration file could look something like the following:

View File

@@ -72,6 +72,16 @@ char* join3(const char* a, const char* b, const char* c) {
return result;
}
char* join4(const char* a, const char* b, const char* c, const char* d) {
size_t alen = strlen(a);
size_t blen = strlen(b);
size_t clen = strlen(c);
size_t dlen = strlen(d);
char* result = malloc(alen + blen + clen + dlen + 1);
sprintf(result, "%s%s%s%s", a, b, c, d);
return result;
}
const char* skip_arg(const char* cp) {
while(*cp && !isspace(*cp))
cp++;