# operating system -*- mode: org; coding: utf-8; -*-

#+SETUPFILE: https://fniessen.github.io/org-html-themes/org/theme-readtheorg.setup
#+LATEX_CLASS_OPTIONS: [a4paper, 11pt, twoside]
#+LATEX_HEADER: \usepackage[utf8]{inputenc}
#+STARTUP: latexpreview
#+OPTIONS: toc:t tex:t num:t
#+OPTIONS: ^:{}

#+EXPORT_FILE_NAME: index

#+title: operating system
#+subtitle: WiSe 2023/2024
#+AUTHOR: olma
#+AUTHOR: klygin
#+AUTHOR: khemiri
#+AUTHOR: fdrees


#+begin_src shell
git clone git@gitlab.informatik.uni-bremen.de:fdrees/operating-systems.git
#+end_src

* GitLab Page

[[https://fdrees.glpages.informatik.uni-bremen.de/operating-systems]]


* GitLab Repo

[[https://gitlab.informatik.uni-bremen.de/fdrees/operating-systems]]



* UPPAAL

** Explain the concept of "clocks" in UPPAAL.
- can be used to check time invariants
- the only write access is a reset to 0
- otherwise they increase monotonically and linearly

Clocks are real-valued variables that advance continuously, all at the same rate as time passes; they are not incremented per transition.
The only write operation is a reset to 0, which can be attached to a transition.
They can be compared with constants and other clocks in guards and invariants.
They are used to model time.


** Erklären Sie das Konzept von „Channels“ in UPPAAL.

Channels are used to model synchronization/communication between processes: a transition labeled ~c!~ (send) can only be taken together with a transition labeled ~c?~ (receive) on the same channel.


** How can a typical safety property be checked in UPPAAL?
States can be checked with queries in the Verifier tab.
For example, one can write a query that checks whether two processes are ever in the same state (e.g. the critical section) at the same time.

** Under which conditions does a property of the following form hold:

*** A[]phi
On all execution sequences it holds (~A~), that in all states of the execution sequence (~[]~) it is the case that ~phi~ is true.

*** A<>phi
On all execution sequences it holds (~A~) that eventually there exists a state in the sequence (~<>~) such that ~phi~ is true.

*** E<>phi
There exists an execution sequence (~E~) such that eventually there is a state in that sequence (~<>~) in which ~phi~ is true.

*** E[]phi
There exists an execution sequence (~E~), such that in all states of the execution sequence (~[]~) it is the case that ~phi~ is true.

*** phi-->psi
Whenever ~phi~ holds, ~psi~ will eventually hold on every path (leads-to).
(assuming ~phi~ and ~psi~ are states here)

** What is the formula ~not E<> (P(1).cs and P(2).cs)~ equivalent to?
It is equivalent to
~A[] !(P(1).cs and P(2).cs)~
So it checks that there is no execution sequence in which P(1) and P(2) are in the critical section (~cs~) at the same time.

** Model Peterson's algorithm in UPPAAL.
#+CAPTION: Reference solution for the Peterson algorithm in UPPAAL
#+NAME: fig:uppaal_peterson
[[./uppaal_peterson.png]]

** Which UPPAAL query checks mutual exclusion for Peterson's algorithm?
~A[] !(P(0).CS && P(1).CS)~

** How can deadlock freedom of Peterson's algorithm be checked?
~A[] not deadlock~

** How can fairness for both processes of Peterson's algorithm be checked?
There is no path on which a process starves even though it wants to enter the
critical section.
~P(1).WAIT --> P(1).CS~
(and analogously for P(0)). The leads-to operator ~-->~ already quantifies over all paths, so no enclosing ~A[]~ is needed.


* Active-Wait Protocols

** What are the advantages of an active-wait protocol compared to semaphores?
Active-wait protocols require no context switch between user and kernel mode, which makes them very efficient for short wait times.
They are also easier to implement (no syscalls, essentially just a ~while~ loop).


** What are the disadvantages of an active-wait protocol?

Active waiting wastes CPU time: the waiting process stays runnable and burns its time slice without doing useful work.
On a single-core system it can lead to priority inversion, and with an unfair protocol individual processes can starve.
Busy waiting is therefore only reasonable when the expected wait is very short.


** What is priority inversion (de: "Prioritäten-Inversion")?
Priority inversion can occur with active waiting on a single-core system (and more generally with any mutual-exclusion mechanism).
A high-priority process wants to enter a critical section that is still held by a low-priority process.
The scheduler nevertheless prefers the high-priority process, which then busy-waits and thereby keeps the low-priority process from running and releasing the critical section: the high-priority process slows itself down.
It would have been faster to let the low-priority process run, so that it can leave the critical section sooner.


** How does Peterson's algorithm work?

Peterson's algorithm is an active-wait protocol that provides mutual exclusion.
It uses two flags to indicate that a process is interested in entering the critical section.
It also uses a turn variable to indicate which process is allowed to enter the critical section.

(mutual exclusion):
- only one process can be in the critical section at a time
- if process 0 is in the critical section, then process 1 is not in the critical section

#+NAME: Peterson's algorithm in C
#+BEGIN_SRC c
int turn = 0;
int interested[2] = {0, 0};

// To enter a critical section, call the following function.
// (pid is in range 0..1)
void enterCritical(int pid) {
    // int other = 1 - pid;

    interested[pid] = 1;
    turn = pid;

    while (turn == pid && interested[1 - pid] == 1) {
        // busy wait
    }
}

// To leave a critical section, call the following function.
void leaveCritical(int pid) {
    interested[pid] = 0;
}
#+END_SRC


** What happens if you swap the assignments in Peterson's algorithm?

If the assignments are swapped, then the algorithm does not provide mutual exclusion anymore.
This is because the turn variable is set before the interested flag is set.

#+NAME: Error in Peterson's algorithm (swapped assignments)
#+BEGIN_SRC c
// WRONG: this does not provide mutual exclusion

void enterCritical(int pid) {
    int other = 1 - pid;

    // WRONG: turn is set before interested[pid] is set
    turn = pid;
    interested[pid] = 1;

    while (turn == pid && interested[other] == 1) {
        // busy wait
    }
}
#+END_SRC

This is because the active wait has two conditions. One process can set ~turn = pid~ and then be suspended.
Afterwards the other process can set both ~turn~ and ~interested~. It can continue into the critical section, because ~interested[other]~ has not yet been set by the first process.
Now the first process continues and sets its ~interested~ bit. It too can proceed into the critical section, because ~turn == pid~ no longer holds for it, the other process having overwritten ~turn~.


** How does Fischer's protocol work?

Fischer's protocol is an active-wait protocol that provides mutual exclusion using a single shared variable and timing assumptions.
It can be implemented and used in practice for applications with arbitrarily many processes.
It is the best-known protocol that uses timers for synchronization.

It works as follows (with a shared variable ~lock~, initially 0):
- A process that wants to enter the critical section waits until ~lock == 0~.
- It then writes its own id into ~lock~; this write must complete within a known time bound.
- It waits for a delay longer than that bound and re-reads ~lock~.
- If ~lock~ still contains its own id, it enters the critical section; otherwise it retries.
- When it leaves the critical section, it resets ~lock~ to 0.
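A common formulation of Fischer's protocol uses a single shared lock variable and a delay. A minimal single-process sketch in C (the function names and the ~usleep~-based delay are illustrative assumptions; with only one process there is no contention, so the acquisition succeeds on the first try):

#+NAME: Fischer's protocol sketch
#+BEGIN_SRC c
#include <assert.h>
#include <stdio.h>
#include <unistd.h>

static volatile int lock = 0;   /* 0 = free, otherwise id of the claiming process */

/* Delay chosen longer than the assumed upper bound on the write to `lock`. */
static void delay(void) { usleep(1000); }

void enterCritical(int id) {            /* id > 0 */
    for (;;) {
        while (lock != 0) ;             /* wait until the lock is free */
        lock = id;                      /* claim (must finish within the time bound) */
        delay();                        /* wait out concurrent claims */
        if (lock == id) return;         /* still ours -> enter critical section */
    }
}

void leaveCritical(void) { lock = 0; }

int main(void) {
    enterCritical(1);                   /* no contention here, so this succeeds */
    assert(lock == 1);
    puts("in critical section");
    leaveCritical();
    assert(lock == 0);
    return 0;
}
#+END_SRC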


** What assumption must be fulfilled for Fischer's protocol?

The timing assumption must hold: there is a known upper bound on how long the write to the shared lock variable can take, and the waiting delay is chosen larger than this bound.
The processes' timers must be accurate enough to enforce this delay.


** How does strict alternation work?

Strict alternation is an active-wait protocol that provides mutual exclusion using only a shared ~turn~ variable; the processes take turns in a fixed order.

It works as follows:
- A process that wants to enter the critical section busy-waits until ~turn~ contains its own id.
- It then enters the critical section.
- When it leaves the critical section, it sets ~turn~ to the other process's id.

The drawback is that the processes can only enter the critical section alternately: a fast process is slowed down to the pace of the slow one, and if one process stops requesting the critical section, the other eventually blocks forever.
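Strict alternation is commonly written with just a shared ~turn~ variable. A minimal C sketch (the function names are illustrative; the sequential ~main~ only demonstrates the hand-over of the turn, not real concurrency):

#+NAME: Strict alternation sketch
#+BEGIN_SRC c
#include <assert.h>
#include <stdio.h>

static volatile int turn = 0;   /* id of the process whose turn it is */

void enterCritical(int pid) {           /* pid in 0..1 */
    while (turn != pid) ;               /* busy-wait until it is our turn */
}

void leaveCritical(int pid) {
    turn = 1 - pid;                     /* hand the turn to the other process */
}

int main(void) {
    /* Sequential demonstration: process 0 enters first, then process 1. */
    enterCritical(0);
    puts("P0 in critical section");
    leaveCritical(0);

    enterCritical(1);                   /* does not block: turn is now 1 */
    puts("P1 in critical section");
    leaveCritical(1);
    assert(turn == 0);
    return 0;
}
#+END_SRC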


* Kommunikationsmechanismen

** How does a ring buffer work?
A ring buffer consists of an array with a fixed size (~N~) and two variables for the
read (~R~) and write (~W~) index. The indices wrap around (modulo) when they reach ~N~.
The reader has to actively wait while the buffer is empty (so that no values are read that were never written).
The writer has to actively wait while the buffer is full (so that no values are overwritten that have not been read yet).

This can be checked like this:
- The ring buffer is empty when ~R == W~ (--> reader has to wait)
- The ring buffer is full when ~W+1 == R~ (--> writer has to wait)
The indices are always incremented by one, modulo the array size (~N~).
In the following example code the ring buffer ~a~ has size *~n+1~* (last valid index is ~n~ -> modulo at ~n+1~), read index ~rIdx~ and write index ~wIdx~.
#+NAME: Ringbuffer writer
#+BEGIN_SRC c
void writer(myType_t x) {
  int wNew = (wIdx + 1)%(n+1);
  while ( wNew == rIdx );    // active-wait
  a[wIdx] = x;               // write data
  wIdx = wNew;
}
#+END_SRC
#+NAME: Ringbuffer reader
#+BEGIN_SRC c
myType_t reader() {
  myType_t x;
  while ( wIdx == rIdx );    // active-wait while empty
  x = a[rIdx];               // read data
  rIdx = (rIdx + 1)%(n+1);   // increment index
  return x;
}
#+END_SRC

** How can it be detected that the ring buffer is empty?
The ring buffer is empty when ~R == W~ (--> reader has to wait)

** How can it be detected that the ring buffer is full?
The ring buffer is full when ~W+1 == R~ (--> writer has to wait)
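Both checks can be exercised in a small self-contained C program (the buffer size and helper names are illustrative; note that a buffer of size ~N+1~ holds at most ~N~ elements, because one slot stays unused to distinguish "full" from "empty"):

#+NAME: Ring buffer empty/full demo
#+BEGIN_SRC c
#include <assert.h>
#include <stdio.h>

#define N 3                       /* last valid index; array size is N+1 */
static int a[N + 1];
static int rIdx = 0, wIdx = 0;

static int isEmpty(void) { return wIdx == rIdx; }
static int isFull(void)  { return (wIdx + 1) % (N + 1) == rIdx; }

int main(void) {
    assert(isEmpty() && !isFull());          /* R == W -> empty */

    for (int i = 0; i < N; i++) {            /* N writes fit before "full" */
        a[wIdx] = i;
        wIdx = (wIdx + 1) % (N + 1);
    }
    assert(isFull() && !isEmpty());          /* W+1 == R -> full */

    while (!isEmpty()) {                     /* drain the buffer */
        printf("%d ", a[rIdx]);
        rIdx = (rIdx + 1) % (N + 1);
    }
    putchar('\n');
    assert(isEmpty());
    return 0;
}
#+END_SRC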

** How can the ring buffer be implemented without a modulo operation?
For this you need a ring buffer whose size is a power of two, 2^{x} (e.g. ~a[8]~ with last valid index 7).
Then a bit-wise ~AND~ with ~size - 1~ achieves the same as the modulo.
For example, with a ring buffer of size ~n = 2^{3} = 8~:
instead of ~rIdx = (rIdx + 1)%(n)~
you can write ~rIdx = (rIdx + 1) & 7~
This works because ~7_{10} = 0111_{2}~ and ~8_{10} = 1000_{2}~.
#+NAME: bit-wise AND example
#+BEGIN_SRC c
// VAR     binary  decimal
// rIdx    0111    7
// mask    0111    7
// result  0111    7

// rIdx+1  1000    8
// mask    0111    7
// result  0000    0
#+END_SRC
With the bit-wise ~AND~ the most significant bit can be "ignored" so that the result will
wrap around to 0, exactly like a modulo operation would have done.
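A quick runnable check of this equivalence (the loop bound is arbitrary):

#+NAME: bit-wise AND vs. modulo check
#+BEGIN_SRC c
#include <assert.h>
#include <stdio.h>

int main(void) {
    /* For a power-of-two size n, (i + 1) & (n - 1) equals (i + 1) % n. */
    const unsigned n = 8;                 /* 2^3 */
    for (unsigned i = 0; i < 2 * n; i++)
        assert(((i + 1) & (n - 1)) == (i + 1) % n);
    printf("%u\n", (7u + 1u) & 7u);       /* wraps around to 0 */
    return 0;
}
#+END_SRC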

** How do you implement a ring buffer for very large data elements x, so that you do not have to waste a whole array slot of size x?
Instead of storing the data structure of type x in the ring buffer itself, you create a
ring buffer of pointers to data of type x (--> ~myType_t* a[n+1]~).

** How does the Non-Blocking Write protocol work?
Basic concepts (from the slides):
- The writer is always allowed to write.
- The reader only starts reading if the writer is not writing.
- After completing the read, the reader checks whether the writer interfered with the read.
- If interference is detected, the reader discards the data and repeats the read.

With the "Concurrency Control Flag" ~uintn_t CCF = 0~ and the buffer/struct/array ~myType_t a~:
#+NAME: example implementation writer
#+BEGIN_SRC c
void writer() {
  while (1) {
    do_other_things();

    CCF++;
    write(&a);
    CCF++;
  }
}
#+END_SRC
#+NAME: example implementation reader
#+BEGIN_SRC c
void reader() {
  int c0, c1;
  myType_t b;
  while (1) {
    do {
      do { c0 = CCF; } while (c0 & 1); // active-wait while CCF is odd (writer busy)
      // Now c0 is even: no write was in progress when we started
      b = copy(a);
      c1 = CCF;
    } while ( c1 != c0 );              // CCF changed during the copy -> retry
  }
}
#+END_SRC


* Semaphores

** Which system calls does the System V package provide for semaphore operations?

- ~int semctl(int semid, int semnum, int cmd, ...)~
    - ~int semid~ - id of the semaphore to operate on
    - ~int semnum~ - number of the sem to operate on
    - ~int cmd~ - command to perform. Examples: ~IPC_RMID~ - remove sem set, ~SETVAL~ - set value.

- ~int semget(key_t key, int nsems, int semflg)~ returns the sem-set id.
	- ~key_t key~ - key identifying the semaphore set
	- ~int nsems~ - number of semaphores in the set, usually 1.
	- ~int semflg~ - flags.
- ~int semop(int semid, struct sembuf *sops, size_t nsops)~
	- ~int semid~ - id of the semaphore set to operate on
	- ~sembuf* sops~ = ~{.sem_num, .sem_op, .sem_flg}~
	- ~size_t nsops~ - size of the sops array (i.e. number of operations to perform). In most cases 1.
- ~int semtimedop(...as in semop..., const struct timespec *timeout)~
	- ~struct timespec *timeout~ - maximum time the calling process will sleep; if NULL, it behaves like ~semop()~.

** How can an up operation on a semaphore be performed with these system calls?
To perform an UP (V) operation, call ~semop()~ with a ~sembuf~ whose ~.sem_op~ is ~+1~, e.g. ~struct sembuf op = {.sem_num = 0, .sem_op = 1, .sem_flg = 0}; semop(semid, &op, 1);~

** How can down operations on several semaphores be performed with a single system call?
To perform operations on multiple semaphores, simply pass an array of sembufs
to the ~semop()~ syscall and adjust ~nsops~ accordingly. The sembuf structures can
address different semaphores within one semaphore set; that is what the
~.sem_num~ field exists for.
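A minimal sketch: two semaphores in one set are decremented atomically with a single ~semop()~ call (the permissions and initial values are arbitrary choices; note that on Linux the caller has to define ~union semun~ itself, see ~semctl(2)~):

#+NAME: Atomic down on two semaphores
#+BEGIN_SRC c
#include <assert.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* On Linux the caller must define union semun itself (see semctl(2)). */
union semun { int val; struct semid_ds *buf; unsigned short *array; };

int main(void) {
    /* Create a private set of two semaphores. */
    int semid = semget(IPC_PRIVATE, 2, IPC_CREAT | 0600);
    assert(semid != -1);

    /* Initialize both semaphores to 1 with SETVAL. */
    union semun arg = { .val = 1 };
    assert(semctl(semid, 0, SETVAL, arg) == 0);
    assert(semctl(semid, 1, SETVAL, arg) == 0);

    /* One semop() call performs a down on BOTH semaphores atomically. */
    struct sembuf ops[2] = {
        { .sem_num = 0, .sem_op = -1, .sem_flg = 0 },
        { .sem_num = 1, .sem_op = -1, .sem_flg = 0 },
    };
    assert(semop(semid, ops, 2) == 0);

    printf("sem0=%d sem1=%d\n",
           semctl(semid, 0, GETVAL, arg), semctl(semid, 1, GETVAL, arg));

    semctl(semid, 0, IPC_RMID, arg);   /* remove the semaphore set */
    return 0;
}
#+END_SRC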

** What happens in this case if a single down operation cannot be performed without blocking, but the system call requests a down on SEVERAL semaphores?
Even if only one semaphore operation is blocking, the other operations performed
with one syscall (as mentioned above) are also going to be blocked. An operation
performed with one syscall is either happening or not; it's atomic, regardless
of how many semaphores you change. So, if it's not possible to decrement one
semaphore, the others won't be decremented either.

** What happens if the field sem_op contains the value 0 for a semaphore operation?
	This is a "wait-for-zero" operation: the call blocks until the semaphore's value becomes 0 (and returns immediately if it already is 0).

* Shared memory
** Which system calls does the System V package provide for using shared memory?
- ~shmget()~ - allocate a shared memory segment
- ~shmctl()~ - control a shared memory segment
- ~shmat()~ - attach a shared memory segment
- ~shmdt()~ - detach a shared memory segment
(~shmop~ is only the name of the man page describing ~shmat()~/~shmdt()~; it is not a syscall itself.)
#+BEGIN_QUOTE
	Read man, man.
-- Kyrill ^^
#+END_QUOTE
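A minimal single-process sketch using these calls (attaching the segment twice stands in for two cooperating processes; the segment size and the payload are arbitrary):

#+NAME: System V shared memory sketch
#+BEGIN_SRC c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void) {
    /* Allocate a private 4 KiB segment. */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    assert(shmid != -1);

    /* Attach it twice; both pointers map the same memory. */
    char *p1 = shmat(shmid, NULL, 0);
    char *p2 = shmat(shmid, NULL, 0);
    assert(p1 != (void *)-1 && p2 != (void *)-1);

    strcpy(p1, "hello");               /* write through one mapping ... */
    printf("%s\n", p2);                /* ... read through the other */

    shmdt(p1);                         /* detach both mappings */
    shmdt(p2);
    shmctl(shmid, IPC_RMID, NULL);     /* mark the segment for removal */
    return 0;
}
#+END_SRC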


* Network Communication

** What is connectionless/connection-oriented communication?
Connection-based communication involves establishing a connection (communication channel) with the server
before the communication itself begins. Connectionless communication, on the other hand,
does not involve establishing any connections.
It is the difference between how TCP and UDP handle connections. TCP needs to
establish a channel with a Three-Way-Handshake first. UDP just sends datagrams.

** What is datagram-oriented/stream-oriented communication?
Datagram (packet)-oriented communication exchanges individual packets explicitly;
each packet is a self-contained piece of data of bounded size.

In stream-oriented communication, if the size of the data to transfer is bigger than
the size of a standard IP packet, it is broken into pieces and transferred
separately. Despite this, TCP can be used as if it were a 'stream', because the
original sequence of the data is guaranteed to be reassembled on the receiving end.
Transmission errors (retransmissions) are handled by TCP.

** What are the properties of the UDP protocol?
Datagram-based (connectionless). Example: UDP
	- Communication without a standing connection.
	- Communication is based on explicitly sending/receiving packets (datagrams).
	- Every packet is addressed individually and can be routed differently.
	- Packets can be lost or received out of order.

** What are the properties of the TCP protocol?
Connection-based (connection-oriented). Example: TCP
	- A connection needs to be established first (three-way handshake).
	- There are mechanisms to ensure that data is received and that lost/corrupted packets are retransmitted.
	- The packets are reassembled in the same order they were sent in (via sequence numbers).

** Which system calls are needed to implement a TCP client? (incl. parameters and order of the system calls)
~int socket(int domain, int type, int protocol)~ Returns file descriptor.
    - ~int domain~ - selects protocol family (~AF_LOCAL/AF_UNIX~ - local, ~AF_INET~ - IPv4, ~AF_INET6~ - IPv6, ...)
    - ~int type~ - selects communication semantics (~SOCK_STREAM~ - TCP, ~SOCK_DGRAM~ - UDP, ...)
    - ~int protocol~ - usually 0, because normally there is only one protocol per type.

~int connect(int sockFd, const struct sockaddr *addr, socklen_t addrlen)~
    - ~int sockFd~ - fd of the socket
    - ~addr~ - stores address and port of the target (usually a ~struct sockaddr_in *~ cast to ~struct sockaddr *~). Read the man page for more.
    - ~socklen_t addrlen~ - the size of the address structure

~int close(int fd)~

** Which system calls are needed to implement a TCP server? (incl. parameters and order of the system calls)
    First, we create a socket with ~int socket(AF_INET, SOCK_STREAM, 0)~, then bind it to a port:
    ~int bind(int sockfd, const struct sockaddr *addr, socklen_t addrlen)~.
    Prepare to accept connections: ~int listen(int sockfd, int backlog)~, where
    ~int backlog~ is the maximum number of pending connections in the sockfd queue.
    Then we use the blocking syscall ~int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen)~
    to extract the first pending connection from the queue and create another
    socket for it. It writes the address of the client into ~addr~ and returns the fd of the
    new socket. The first (listening) socket is not affected by this syscall and can be used
    to accept further connections.
Communication:
    - ~ssize_t read(int fd, void buf[n], size_t n)~
    - ~ssize_t write(int fd, const void buf[n], size_t n)~
      (plain POSIX file I/O, which also works on sockets)

    - ~ssize_t send(int sockfd, const void buf[.len], size_t len, int flags)~
    - ~ssize_t recv(int sockfd, void buf[.len], size_t len, int flags)~
        With ~flags = 0~, ~send()~ and ~recv()~ are equivalent to ~write()~ and ~read()~.
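A compact single-process sketch that exercises the client and server call sequences on the loopback interface (port 0 lets the kernel pick a free port, which ~getsockname()~ then reports back; doing everything in one thread works here because the connection is local and the message is small):

#+NAME: TCP loopback client/server sketch
#+BEGIN_SRC c
#include <arpa/inet.h>
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    /* --- server side: socket, bind, listen --- */
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                               /* let the kernel pick a port */
    assert(bind(srv, (struct sockaddr *)&addr, sizeof addr) == 0);
    assert(listen(srv, 1) == 0);

    socklen_t len = sizeof addr;                     /* look up the chosen port */
    assert(getsockname(srv, (struct sockaddr *)&addr, &len) == 0);

    /* --- client side: socket, connect --- */
    int cli = socket(AF_INET, SOCK_STREAM, 0);
    assert(connect(cli, (struct sockaddr *)&addr, sizeof addr) == 0);

    /* --- server accepts the pending connection on a NEW socket --- */
    int conn = accept(srv, NULL, NULL);
    assert(conn != -1);

    assert(send(cli, "ping", 4, 0) == 4);            /* client -> server */
    char buf[8] = {0};
    assert(recv(conn, buf, sizeof buf, 0) == 4);
    printf("server got: %s\n", buf);

    close(conn); close(cli); close(srv);
    return 0;
}
#+END_SRC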

** Which system calls are needed to send a UDP packet?
Client
    - ~int socket(AF_INET, SOCK_DGRAM, 0)~

Server
    - ~int socket(AF_INET, SOCK_DGRAM, 0)~
    - ~int bind(int sockfd, const struct sockaddr *addr, socklen_t addrlen)~
Communication:
    - ~ssize_t recvfrom(...as with recv..., struct sockaddr *addr, socklen_t *addrlen)~
    - ~ssize_t sendto(...as with send..., const struct sockaddr *addr, socklen_t addrlen)~

    The client uses the same *addr* to send and receive.
    The server uses a *const addr* to bind, but passes a pointer to another *addr*
    when receiving, so that ~recvfrom()~ can record the client's address.
    The server can then use this structure to send data back to the client it received from.
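A single-process sketch of this exchange on the loopback interface (port 0 lets the kernel pick a free port; the payloads are arbitrary; the client socket is auto-bound by its first ~sendto()~):

#+NAME: UDP loopback exchange sketch
#+BEGIN_SRC c
#include <arpa/inet.h>
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    /* Server: socket + bind (clients usually skip the bind). */
    int srv = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in saddr = {0};
    saddr.sin_family = AF_INET;
    saddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    saddr.sin_port = 0;                                /* kernel picks a port */
    assert(bind(srv, (struct sockaddr *)&saddr, sizeof saddr) == 0);
    socklen_t slen = sizeof saddr;
    assert(getsockname(srv, (struct sockaddr *)&saddr, &slen) == 0);

    int cli = socket(AF_INET, SOCK_DGRAM, 0);

    /* Client sends one datagram to the server's address. */
    assert(sendto(cli, "hi", 2, 0, (struct sockaddr *)&saddr, sizeof saddr) == 2);

    /* Server receives it and records the client's address in `caddr`. */
    struct sockaddr_in caddr; socklen_t clen = sizeof caddr;
    char buf[8] = {0};
    assert(recvfrom(srv, buf, sizeof buf, 0, (struct sockaddr *)&caddr, &clen) == 2);
    printf("server got: %s\n", buf);

    /* Server replies using the recorded address. */
    assert(sendto(srv, "ok", 2, 0, (struct sockaddr *)&caddr, clen) == 2);
    char rep[8] = {0};
    assert(recvfrom(cli, rep, sizeof rep, 0, NULL, NULL) == 2);
    printf("client got: %s\n", rep);

    close(cli); close(srv);
    return 0;
}
#+END_SRC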

** What is the difference between an iterative and a parallel server? How are they implemented? (pseudo-code)
Iterative:
#+NAME: iterative server
#+BEGIN_SRC c
    for (;;) {
        rq request = wait_for_request();
        serve_request(request);
    }
#+END_SRC
The server can't respond to any new requests until work on the current request is finished.

Parallel:
#+NAME: parallel server
#+BEGIN_SRC c
    for (;;) {
        rq request = wait_for_request();
        create_new_thread(serve_request, request);
    }
#+END_SRC
Serving a request doesn't block the whole process; instead a new parallel thread is created.
While one request is being handled, new requests can be received and handled.



* Processes and Threads

** How do processes and threads differ?
- Processes have their own stack, heap, data, and code memory segments.
- The OS treats and schedules them as separate programs, meaning they are isolated from each other: they cannot access each other's memory.
- Processes are scheduled by the kernel and take longer to create.
- There are two kinds of so-called threads: lightweight processes (kernel threads, LWPs) and user threads.
- LWPs are treated by the OS scheduler like separate programs. They are scheduled individually, and the scheduler pays little to no attention to which process owns the scheduled LWP.
- User threads are scheduled by the library used to create them; the kernel knows nothing about them.
- Each user/kernel thread has its own stack and registers.
- Creating an LWP is faster than creating a separate process, and creating a user thread is faster still, because the kernel is not involved.
- The same applies to context switches. In terms of time efficiency: user threads > LWPs > processes.
- It is easier to establish communication between threads because they share the heap.


** Which system calls are needed to create a thread?
*clone() is used.*
In contrast with ~fork(2)~, this system call provides more precise control
over which pieces of execution context are shared between the calling process
and the child process. For example, the caller can control whether or not the
two processes share the virtual address space, the table of file descriptors,
and the table of signal handlers.

Also worth mentioning is the cross-platform ~pthread_create()~ from ~pthread.h~.

** How can one wait for the termination of a thread?
~pthread_join(pthread_t thread, void **retval)~ from ~pthread.h~.
- ~thread~ - thread id.
- ~retval~ - pointer to the value supplied to ~pthread_exit(void *retval)~; the exit status of the terminated thread, so to speak.
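A minimal sketch of creating a thread and waiting for its termination (the ~worker~ function and the value handed back through ~pthread_exit~ are illustrative):

#+NAME: pthread_create/pthread_join sketch
#+BEGIN_SRC c
#include <assert.h>
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int *n = arg;
    *n *= 2;                       /* threads share the address space */
    pthread_exit(arg);             /* value is delivered via pthread_join */
}

int main(void) {
    pthread_t tid;
    int value = 21;
    assert(pthread_create(&tid, NULL, worker, &value) == 0);

    void *retval;
    assert(pthread_join(tid, &retval) == 0);   /* wait for termination */
    printf("%d\n", *(int *)retval);
    return 0;
}
#+END_SRC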

** How can a process be executed on a fixed CPU?
- ~sched_setaffinity(pid_t pid, size_t cpusetsize, const cpu_set_t *mask)~
- ~sched_getaffinity(pid_t pid, size_t cpusetsize, cpu_set_t *mask)~
from ~sched.h~

They set and get the CPU affinity mask. The affinity bitmask specifies the processors (cores)
the process is allowed to run on.
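A sketch that pins the calling process to the first CPU in its current affinity mask (pid 0 means the calling process; whether setting the mask is permitted can depend on the environment):

#+NAME: CPU affinity sketch
#+BEGIN_SRC c
#define _GNU_SOURCE
#include <assert.h>
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;

    /* Read the current affinity mask of this process (pid 0 = self). */
    assert(sched_getaffinity(0, sizeof set, &set) == 0);
    int first = -1;
    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &set)) { first = cpu; break; }
    assert(first >= 0);

    /* Pin the process to that single CPU. */
    CPU_ZERO(&set);
    CPU_SET(first, &set);
    assert(sched_setaffinity(0, sizeof set, &set) == 0);

    /* Verify: the mask now contains exactly one CPU. */
    assert(sched_getaffinity(0, sizeof set, &set) == 0);
    printf("pinned to CPU %d, mask size %d\n", first, CPU_COUNT(&set));
    return 0;
}
#+END_SRC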

** Explain the calls for process creation and termination that were used in the Sledgehammer/DPLL exercise
- ~fork()~ returns the child's pid to the parent and 0 to the child. If it returns 0, call ~execve(DPLL)~;
otherwise add the pid to the child array.
- ~pid_t fastest = wait()~ is used to wait for the fastest child.
- Then ~kill(pid, 1)~ (signal 1 = SIGHUP) is sent to the other children by iterating through the child array. Wild.
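A sketch of this scheme, with ~sleep~ standing in for the ~execve(DPLL)~ runs (child 0 sleeps the shortest and therefore finishes first; the child count is arbitrary):

#+NAME: fork/wait/kill race sketch
#+BEGIN_SRC c
#include <assert.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

#define N 3

int main(void) {
    pid_t child[N];

    for (int i = 0; i < N; i++) {
        pid_t pid = fork();
        assert(pid >= 0);
        if (pid == 0) {                   /* child: stand-in for execve(DPLL) */
            sleep(i + 1);                 /* child 0 is the "fastest solver" */
            _exit(0);
        }
        child[i] = pid;                   /* parent: remember the pid */
    }

    pid_t fastest = wait(NULL);           /* blocks until the first child exits */
    printf("fastest: %s\n", fastest == child[0] ? "child 0" : "other");

    for (int i = 0; i < N; i++)           /* kill the remaining children */
        if (child[i] != fastest) {
            kill(child[i], SIGHUP);       /* kill(pid, 1): signal 1 is SIGHUP */
            waitpid(child[i], NULL, 0);   /* reap them */
        }
    return 0;
}
#+END_SRC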



* Userspace Scheduling

** What are the advantages of a userspace scheduler?
Userspace scheduling makes it possible to take application-specific scheduling decisions.
Moreover, creating or switching user threads does not require a switch to kernel mode, which means less overhead.
Because the scheduling is done in userspace, it is independent of the operating system's scheduler and therefore easier to port.

** What are the disadvantages of a userspace scheduler?
All user threads run inside the same process. The kernel's scheduler only sees the process as a whole. Therefore the user threads cannot run truly in parallel (even on multi-core CPUs); they all run on the same CPU.
The scheduling has to be implemented in userspace.
Because scheduling happens inside the same process, a currently running user thread cannot be preempted. The user threads must therefore schedule cooperatively and should not block.

** Sketch an example in which userspace scheduling reduces code complexity compared to C functions without multithreading.
An application with several tabs: when switching between tabs, the corresponding user threads are switched. This makes it easy to keep the state of previously visited tabs and to return to them later.
(TODO: better example?)

** Which C calls are needed to realize the context switch in a userspace scheduler?
~getcontext(*ucp)~ and ~setcontext(*ucp)~, where ~*ucp~ is a user context pointer of type ~ucontext_t~.
First, [[https://man7.org/linux/man-pages/man3/getcontext.3.html][~getcontext~]] retrieves the current context.
With ~setcontext~ a saved context can be restored, thereby switching to another context (user thread) within the same process.
Alternatively, [[https://man7.org/linux/man-pages/man3/swapcontext.3.html][~swapcontext~]] can be used to switch.
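A minimal sketch of such a context switch with ~swapcontext~ (the ~uc_link~ field makes the user thread return to ~main~ when its function ends; the stack size is an arbitrary choice):

#+NAME: ucontext switch sketch
#+BEGIN_SRC c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t mainCtx, threadCtx;
static char stack[64 * 1024];              /* stack for the user thread */

static void threadFunc(void) {
    puts("user thread running");
    /* uc_link of threadCtx points to mainCtx, so returning resumes main */
}

int main(void) {
    getcontext(&threadCtx);                /* initialize the context struct */
    threadCtx.uc_stack.ss_sp = stack;
    threadCtx.uc_stack.ss_size = sizeof stack;
    threadCtx.uc_link = &mainCtx;          /* where to continue after threadFunc */
    makecontext(&threadCtx, threadFunc, 0);

    puts("switching to user thread");
    swapcontext(&mainCtx, &threadCtx);     /* save main, jump into threadFunc */
    puts("back in main");
    return 0;
}
#+END_SRC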



** How many scheduling classes are there in the Linux kernel?
This is an implementation detail; a class can contain several policies.
For example, there is a realtime class and an "other" class. Within a class,
processes are treated and scheduled the same, no matter which policy they belong to.
There are 4 scheduling classes:
1. Deadline scheduling class (DEADLINE) (this is a scheduling policy in the RT class, [[https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/include/linux/sched/rt.h#n27][source]])
2. Real-time scheduling class (RT)
3. Fair scheduling class (FAIR) (aka. the OTHER/NORMAL class, implemented by CFS)
4. Idle scheduling class (IDLE) (this is a scheduling policy in the OTHER/NORMAL class, [[https://www.kernel.org/doc/html/v5.14/scheduler/sched-design-CFS.html#scheduling-policies][source]])

In the last tutorial only OTHER (aka. NORMAL) and REALTIME were mentioned as scheduling classes.

** Which scheduling policies are supported by the Linux kernel?
The Linux kernel has 6 scheduling policies (in the classes OTHER and REALTIME):
~SCHED_OTHER~, ~SCHED_IDLE~, ~SCHED_BATCH~, ~SCHED_FIFO~, ~SCHED_RR~ and ~SCHED_DEADLINE~.

~SCHED_OTHER~ is [[https://www.man7.org/linux/man-pages/man7/sched.7.html][named SCHED_NORMAL]] in the Linux kernel source code (but it is the same thing).
(~SCHED_EXT~ may be added to the kernel at some point: [[https://lwn.net/Articles/922405/][link]])
A description of each policy: https://wiki.linuxfoundation.org/realtime/documentation/technical_basics/sched_policy_prio/start

** How are the processes managed in the realtime scheduling class (data structures, how is O(1) scheduling achieved)?
Processes scheduled under a realtime scheduling policy (~SCHED_FIFO~, ~SCHED_RR~) also have a ~sched_priority~ with values typically in [1..99].
(POSIX.1 only requires at least 32 priorities, i.e. [1..32].)
The scheduler keeps a list of lists: for every ~sched_priority~ there is a list of runnable processes.
The highest-priority non-empty list is looked up in the list of (~sched_priority~) lists.
The head of that list is the process that gets scheduled next.
Edit: In the realtime class, an array of ~struct list_head~ called ~prio_array~ is used, with one entry per realtime priority level.
At every priority level the ~list_head~ anchors a doubly linked list of process control blocks (which can be reached via the ~container_of~ macro).
O(1) is guaranteed by an accompanying bit vector that marks the non-empty lists: when the scheduler has to pick a task, it determines the first set bit, at position P (the highest priority with runnable tasks),
and then activates the first element of the list ~prio_array[P]~.
** In a 1-CPU system with a permanently runnable process of scheduling policy SCHED_FIFO: how often do processes of the policy SCHED_OTHER get the CPU?
Not at all: ~SCHED_FIFO~ is a realtime policy and therefore has a higher (static) priority than ~SCHED_OTHER~.
As long as there are runnable realtime processes (permanently the case here), those are the ones that get executed/scheduled.
Processes under ~SCHED_OTHER~ can only have a static priority of 0, whereas ~SCHED_FIFO~ priorities lie in [1..99] (always > 0).
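The static priority ranges can be queried programmatically. A small check (illustrative; the range 1..99 is Linux-specific, POSIX only guarantees at least 32 distinct realtime levels):

```c
#include <sched.h>

/* Returns 1 if the realtime policies offer strictly positive static
 * priorities while SCHED_OTHER is pinned to priority 0 (Linux behaviour). */
int realtime_prios_dominate(void)
{
    return sched_get_priority_min(SCHED_FIFO) > 0
        && sched_get_priority_min(SCHED_RR) > 0
        && sched_get_priority_min(SCHED_OTHER) == 0
        && sched_get_priority_max(SCHED_OTHER) == 0;
}
```

Because every ~SCHED_FIFO~/~SCHED_RR~ priority is strictly greater than 0, a runnable realtime process always outranks every ~SCHED_OTHER~ process.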
** In a system with a permanently runnable SCHED_FIFO process of priority 5: how often are processes of the policy SCHED_RR with priority 12 scheduled?
Both are realtime scheduling policies, so what matters here is primarily the priority and, for ~SCHED_RR~, the so-called "time quantum".
The processes under the ~SCHED_RR~ policy have a priority of 12 and are therefore scheduled ahead of the ~SCHED_FIFO~ process with priority 5.
Under round-robin scheduling a process may only be active for a maximum time (determined by the time quantum) before it is preempted and put back at the end of its scheduling list.
As long as there are runnable ~SCHED_RR~ processes with priority 12 (higher than 5), those are scheduled and "rotated" according to their time budget.
Meanwhile the ~SCHED_FIFO~ process with priority 5 would not be scheduled at all.

** What does preemptive scheduling mean?
Preemptive scheduling means that the execution of a process is interrupted even though it is still runnable.
This allows other processes to be scheduled in between. Overall, this should improve the interactivity (responsiveness) of the system.
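The idea can be modelled with a timer tick: the running process keeps the CPU only while its time slice lasts, and is preempted once the slice is used up even though it is still runnable. A toy model (the quantum value and the ~proc~ struct are illustrative):

```c
#define QUANTUM 3 /* illustrative time slice, measured in timer ticks */

struct proc { int runnable; int slice_left; };

/* Called on every timer tick: returns 1 if the current process must be
 * preempted, i.e. its time slice is used up although it is still runnable. */
int tick_preempt(struct proc *cur)
{
    if (cur->slice_left > 0)
        cur->slice_left--;
    return cur->runnable && cur->slice_left == 0;
}
```

Without such a tick-driven check (non-preemptive scheduling), a permanently runnable process would keep the CPU until it blocks or yields voluntarily.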

** What does weak fairness mean?
(From the slides)
"A schedule is weakly fair, if and only if every process that is always runnable is scheduled infinitely often.
A violation of weak fairness occurs if there exists a permanently runnable process which gets the CPU only a finite number of times (or not at all)."
> A process that is always runnable is scheduled "infinitely" often and accordingly also gets the CPU "infinitely" often.
For example, a permanently runnable process:
- while True
- with non-blocking instructions

** What does strong fairness mean?
(From the slides)
"A schedule is strongly fair, if and only if every process that is runnable infinitely often is scheduled infinitely often.
A violation of strong fairness occurs if there exists a process which is runnable infinitely often, but gets the CPU only a finite number of times (or not at all)."
> A process that is runnable "infinitely" often is scheduled "infinitely" often and accordingly also gets the CPU "infinitely" often.
For example, a process that becomes runnable again and again:
- it finishes its computations over and over
- and then becomes runnable again

** TODO What property does a universally fair scheduler have?
A universally fair scheduler must be strongly fair. In addition, every schedule produced by a fair scheduler must also be producible by the universally fair scheduler.

** How can a universally fair scheduler be realised with the help of random numbers? Sketch in pseudo-code
#+BEGIN_SRC Pseudo
    processes = {array with processes}
    weights = {array with weights}   // initially random non-negative values

    runnable = get_runnable_processes(processes);
    to_run = choose_random_one({processes with the lowest weight from runnable});

    weights[to_run] = random_from(N);   // new random non-negative weight

    for (int i = 0; i < len(weights); i++) {
        if (i != to_run && is_runnable(i)) weights[i]--;
    }

    schedule(to_run)
#+END_SRC

- give every process a weight ~z_i~
- for every process-number ~i~ this is initially a random (non-negative) value
- for every scheduling cycle, get the set of runnable processes and do the following:
  - select the process(es) with the lowest weight
  - if multiple processes have the same lowest weight, select a random one of those
  - the weight of the selected process is set to a new random (non-negative) value
  - the weights of all other (not-scheduled) runnable processes are decremented by 1
  - schedule the selected process
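The steps above can be sketched as a small simulation of one scheduling cycle (illustrative C; ~rand()~ stands in for the random weight source, ties are broken by taking the first minimum instead of a random one, and ~MAX_WEIGHT~ is an assumed bound for fresh weights):

```c
#include <stdlib.h>

#define MAX_WEIGHT 100 /* illustrative bound for new random weights */

/* One scheduling cycle of the universally fair scheduler:
 * pick a runnable process with the lowest weight, give it a fresh random
 * non-negative weight, and decrement the weight of every other runnable
 * process. Returns the index of the scheduled process, or -1. */
int ufs_cycle(long weights[], const int runnable[], int n)
{
    int to_run = -1;
    for (int i = 0; i < n; i++)
        if (runnable[i] && (to_run < 0 || weights[i] < weights[to_run]))
            to_run = i;
    if (to_run < 0)
        return -1; /* nothing runnable this cycle */

    weights[to_run] = rand() % MAX_WEIGHT; /* new random non-negative weight */
    for (int i = 0; i < n; i++)
        if (runnable[i] && i != to_run)
            weights[i]--; /* everyone else moves closer to being picked */
    return to_run;
}
```

A process that keeps losing only decrements its weight, so it must eventually become the unique minimum; this is the core of the fairness argument below.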

** Sketch the proof of this scheduler's universal fairness.
The initial weight of every process is a non-negative random value. Every process that gets scheduled gets a new random non-negative weight.
A process that is runnable but not scheduled has its weight decremented by 1. Eventually its weight will be negative and it will therefore (after all other processes that may also have been negative were scheduled and received new non-negative weights) be the process with the lowest weight. This guarantees that it will eventually be chosen as the next process to schedule.

Therefore any process that is runnable infinitely often will also be scheduled infinitely often (-> strongly fair).
With that it's shown that the universally fair scheduler (UFS) is indeed a fair scheduler.
TODO: still need to show that this (fairness) implies that its schedule can be produced by every other fair scheduler
(on the slides the formal proof went the other direction: prove that the schedule can be produced by every other fair scheduler, which implies that UFS itself is fair)

** After at most how many scheduling cycles is a process with process weight M scheduled, given N processes?
After at most _M + N - 1_ cycles, since 1-N is the smallest possible weight that can be reached before a process is guaranteed to be scheduled.
In other words: a process with weight ~M~ that is runnable together with ~N~ processes in total has to wait at most ~M + N - 1~ scheduling cycles.
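The bound can be checked with a small deterministic worst-case simulation (illustrative; it assumes all processes are permanently runnable, ties are always broken against the watched process, and every scheduled process receives the worst-case new weight 0):

```c
#define N 3 /* number of processes, all permanently runnable */
#define M 2 /* initial weight of the watched process (index 0) */

/* Worst-case run of the universally fair scheduler: ties are broken against
 * process 0 and every scheduled process gets the worst-case new weight 0.
 * Returns the number of cycles process 0 has to wait before being scheduled. */
static int worst_case_wait(void)
{
    long w[N] = { M, M, M }; /* everyone starts at weight M */
    for (int cycle = 0;; cycle++) {
        long min = w[0];
        for (int i = 1; i < N; i++)
            if (w[i] < min) min = w[i];
        int pick = 0; /* process 0 only wins when it is strictly lowest */
        for (int i = N - 1; i >= 1; i--)
            if (w[i] == min) { pick = i; break; }
        if (pick == 0)
            return cycle; /* the watched process finally gets the CPU */
        w[pick] = 0;      /* new "random" weight, worst case for process 0 */
        for (int i = 0; i < N; i++)
            if (i != pick) w[i]--;
    }
}
```

With M = 2 and N = 3 the watched process waits exactly 4 = M + N - 1 cycles, matching the stated bound.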

** Why can the universally fair scheduler not be implemented if the process weights are of type "int"?
If the weights are of type int, then there is only a finite number of weights a process could have.
If there are more than MAX_INT processes to schedule, it is not guaranteed that a process will eventually run.
Assume all MAX_INT processes have a weight of 0 and are always runnable. One of them is guaranteed not to be scheduled for MAX_INT cycles and to be decremented so often that its weight wraps around to a positive value again. Therefore a process could starve and fairness is not guaranteed.
TODO: check if this is the solution/problem Peleska was looking for
(I think it's very unlikely that so many processes are scheduled, and it could easily be handled by no longer decrementing once the weight reaches -MAX_INT)
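The wrap-around argument can be made concrete (illustrative; the decrement is done in unsigned arithmetic because signed overflow is undefined behaviour in C, and the conversion back to ~int~ assumes the usual two's-complement behaviour):

```c
/* Decrement a weight the way a naive fixed-width implementation effectively
 * would: the arithmetic wraps around, so INT_MIN - 1 becomes INT_MAX, i.e.
 * the longest-waiting process suddenly has the HIGHEST possible weight. */
int decrement_weight(int w)
{
    return (int)((unsigned int)w - 1u);
}
```

A process whose weight wraps from the minimum back to a large positive value loses its guarantee of eventually being the minimum, which is exactly the starvation scenario described above.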

** How are processes of the policy SCHED_OTHER scheduled in the Linux kernel?
Processes under SCHED_OTHER are scheduled by the CFS (completely fair scheduler).
This method uses a red-black tree in which every node represents a process, sorted by processor time. The process with the shortest execution time therefore sits at the leftmost (lowest) end.
After every scheduling run the tree is updated so that the thread with the least processor time is again at the leftmost end. In addition, the tree is rebalanced whenever it becomes unbalanced, since this is the decisive factor for the scheduler's speed.

Processes in ~SCHED_OTHER~ (aka. ~SCHED_NORMAL~), ~SCHED_BATCH~ and ~SCHED_IDLE~ are scheduled using CFS.
This means that from all runnable processes the one with the lowest virtual runtime [[https://www.kernel.org/doc/html/latest/scheduler/sched-design-CFS.html#few-implementation-details][(~p->se.vruntime~)]] will be selected to run next.
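Conceptually CFS picks the leftmost node of the red-black tree, i.e. the task with minimal ~vruntime~. A simplified array-based model (illustrative, not kernel code; a linear scan replaces the tree lookup):

```c
#include <stddef.h>

struct cfs_task {
    const char *name;
    unsigned long long vruntime; /* weighted CPU time consumed so far */
};

/* The kernel keeps runnable tasks in a red-black tree ordered by vruntime
 * and picks the leftmost node; this model does the same with a linear scan. */
struct cfs_task *pick_next_cfs(struct cfs_task tasks[], size_t n)
{
    struct cfs_task *next = NULL;
    for (size_t i = 0; i < n; i++)
        if (next == NULL || tasks[i].vruntime < next->vruntime)
            next = &tasks[i];
    return next; /* the task that has run the least (virtual) time */
}
```

In the real tree the leftmost node is cached, so the pick itself is O(1) and updates cost O(log n).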

** TODO Is the Completely Fair Scheduler (CFS) a universally fair scheduler?
Maybe: instead of decrementing weights, CFS increments the ~virtual runtime~. Eventually every process will have the lowest virtual runtime, after every other process has increased its ~vruntime~ by being scheduled (-> fair).
TODO: show that schedules from CFS can also be produced by every other universally fair scheduler.

** TODO Is the Completely Fair Scheduler (CFS) a strongly fair scheduler?
Yes: a process that is runnable infinitely often will eventually run an infinite number of times.
New processes initially receive a ~vruntime~ of ~min_vruntime~ and will therefore be scheduled as soon as possible.
The process can't starve, because it is eventually guaranteed to have the lowest ~vruntime~ of all runnable processes (once all other processes have been active long enough that their ~vruntime~s have increased sufficiently).


* Cloud Computing

** How do you create a container with Docker – explain the contents of a Dockerfile and describe the docker commands needed to build a container
A ~Dockerfile~ could, for example, have the following content:
#+NAME: Dockerfile
#+BEGIN_SRC dockerfile
FROM ubuntu:latest
COPY project/binary/foo /usr/bin/foo
RUN chmod +x /usr/bin/foo
ENTRYPOINT ["/usr/bin/foo"]
#+END_SRC
To build a new container image from a ~Dockerfile~, run ~docker build -t CONTAINER_NAME:TAG .~ in the directory containing the ~Dockerfile~ (the last argument is the build context directory, not the file itself).
~FROM~ selects the container image that serves as the base to be modified.
~COPY~ copies a file from the relative path ~./project/binary/foo~ into the container at the path ~/usr/bin/foo~.
~RUN~ executes commands inside the container; the resulting changes are stored in the new container image (here, setting the executable bit on ~/usr/bin/foo~).
~ENTRYPOINT~ specifies the command that is executed when a container (using the new image) is started.

** Explain the contents of a yaml file for POD creation – examples "Hello World" and starting your own container from Docker Hub
#+NAME: job_hello_world.yml
#+BEGIN_SRC yaml
---
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world
  namespace: bs2024
spec:
  template:
    spec:
      containers:
      - name: hello-world
        image: ubuntu:latest
        imagePullPolicy: IfNotPresent
        command: ["echo", "Hello World - version 1!"]
      restartPolicy: Never
  backoffLimit: 4
#+END_SRC
A resource of kind ~Job~ is defined, with the name ~hello-world~, in the namespace ~bs2024~.
The job is specified by its ~spec~: a container named ~hello-world~ is to be started, pulling the container image ~ubuntu~ with the ~latest~ tag.
Inside this container the command ~echo "Hello World - version 1!"~ is executed.
If the container "dies", it is not restarted (~restartPolicy: Never~).
The ~backoffLimit~ specifies how many times the Job is retried before it is considered failed.
To start your own container you would adapt ~image~, ~command~ and ~name~ accordingly.

** What is the difference between a "normal" POD and a Deployment?
A Deployment describes pods that Kubernetes can restart automatically and of which a specified number of replicas should exist; a "normal" pod is a one-off object that is not recreated automatically.

** How can environment variables be set in the POD specification?
Environment variables can be set below the container specification:
#+NAME: job_env_var.yml
#+BEGIN_SRC yaml
spec:
  template:
    spec:
      containers:
      - name: example-container
        image: docker.io/example/container
        env:
          - name: VARIABLE_NAME
            value: example_value
#+END_SRC

** What is a Persistent Volume Claim?
A PVC is a claim for persistent storage made by a pod.
Each PVC is bound to a PV (Persistent Volume). Where the data is ultimately persisted is decided by the cluster system management.

** What kinds of Persistent Volume Claims are there?
There are four different access modes for claims: ~ReadWriteOnce~, ~ReadOnlyMany~, ~ReadWriteMany~, ~ReadWriteOncePod~
(Note: ~ReadWriteOncePod~ is very new; it only became a stable feature on 2023-12-13 with the release of Kubernetes v1.29.0 and was therefore not yet included in Peleska's slides)

** How can a running POD on the cluster query the cluster IP of a service running there?
Either via a DNS query or via environment variables.
Suppose there is a service ~mysql-example~ in the namespace ~bs2024~.
Then the environment variable ~MYSQL_EXAMPLE_SERVICE_HOST~ is set to the corresponding IP (the pod must have been started after the service already existed!).
Using DNS queries is preferable, e.g. with ~dig +short mysql-example.bs2024~.
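Reading the injected service IP from the environment boils down to a ~getenv~ call. A minimal sketch (the variable name follows the ~<SERVICE>_SERVICE_HOST~ convention; the service ~mysql-example~ is the assumption from above):

```c
#include <stdlib.h>

/* Look up the cluster IP that Kubernetes injects for a service; returns
 * NULL if the pod was started before the service existed (or when running
 * outside the cluster) - in that case, fall back to a DNS query. */
const char *service_host(const char *env_name)
{
    return getenv(env_name);
}
```

For example, ~service_host("MYSQL_EXAMPLE_SERVICE_HOST")~ yields the cluster IP string, or ~NULL~ when the variable is not set.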

** Explain the yaml file for creating a publicly accessible service which in turn uses replicated service backends
Two yaml files are needed: one for the replicated backend and one for a LoadBalancer that distributes the requests across the backends.
#+NAME: backend.yml
#+BEGIN_SRC yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: example
          image: "docker.io/example/container:latest"
          ports:
            - name: http
              containerPort: 80
#+NAME: frontend.yml
#+BEGIN_SRC yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: backend
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  type: LoadBalancer
#+END_SRC

** TODO Explain the code of the client, server frontend and server backend in the project dpll-server-and-backend

** How can one communicate with multiple PODS on the cluster via a multicast address?
Multiple containers running inside one pod can communicate with each other via ~localhost~.
TODO multicast address (possibly also across pods?).

Idea: one could create a "sidecar" pod that acts as a proxy and forwards the packets to the relevant pods. An end user would have to communicate with this simulated multicast pod via unicast.

** TODO Explain the JavaScript code the client uses to talk to the cloud server via the REST API

** Explain the C++ REST API code in the backend servers – Sledgehammer/DPLL application
#+NAME: rest-cpp-example/dpll-backend/Sledgehammer/SledgehammerMicroService.cpp
#+BEGIN_SRC cpp
//
// Created by felix on 07/04/2022.
//
#include <cstring>
#include <sys/wait.h>

#include "SledgehammerMicroService.hpp"


#define MOUNT_PATH "/tmp/dpllfiles/"
#define DPLL_EXE_PATH (char*)"/usr/bin/DpllSolver"
#define NUM_DPLL_INSTANCES 4 // Must be <= 9

static char resultFileName[1000];

// --------------------------------------------------------------------------------------

void SledgehammerMicroService::initRestOpHandlers()
{
    _listener.support(methods::GET, std::bind(&SledgehammerMicroService::handleGet, this, std::placeholders::_1));
    _listener.support(methods::PUT, std::bind(&SledgehammerMicroService::handlePut, this, std::placeholders::_1));
    _listener.support(methods::POST, std::bind(&SledgehammerMicroService::handlePost, this, std::placeholders::_1));
    _listener.support(methods::DEL, std::bind(&SledgehammerMicroService::handleDelete, this, std::placeholders::_1));
    _listener.support(methods::PATCH, std::bind(&SledgehammerMicroService::handlePatch, this, std::placeholders::_1));
    _listener.support(methods::OPTIONS, std::bind(&SledgehammerMicroService::handleOptions, this, std::placeholders::_1));
}

// --------------------------------------------------------------------------------------

void SledgehammerMicroService::addCorsHeader(http_response& response)
{
    response.headers().add(U("Access-Control-Allow-Origin"), U("*"));
    response.headers().add(U("Access-Control-Allow-Methods"), U("OPTIONS, PUT, POST, GET"));
    response.headers().add(U("Access-Control-Max-Age"), U("3600"));
    response.headers().add(U("Access-Control-Allow-Credentials"), U("true"));
    response.headers().add(U("Access-Control-Allow-Headers"), U("Content-Type"));
}

// --------------------------------------------------------------------------------------

void SledgehammerMicroService::handleGet(http_request message)
{
    std::cout << "[Info] Received 'GET' Request." << std::endl;
}

// --------------------------------------------------------------------------------------

void SledgehammerMicroService::handlePut(http_request message)
{
    std::cout << "[Info] Received 'PUT' Request" << std::endl;
}

// --------------------------------------------------------------------------------------

void SledgehammerMicroService::handlePost(http_request message)
{
    std::cout << "[Info] Received 'POST' Request." << std::endl;
    auto path = requestPath(message);

    if (not path.empty()) {
        if (path[0] == "api") {
            if (path[1] == "solve") {
                if (path.size() == 3) {
                    std::string resourceName = path[2];
                    std::cout << "[Info] Solve formula " << resourceName << std::endl;

                    if (not resourceName.empty()) {
                        auto response = http_response(status_codes::OK);
                        addCorsHeader(response);

                        // create JSON body
                        auto responseBody = json::value::object();
                        responseBody["message"] = json::value::string("Sledgehammer: solving formula...");
                        response.set_body(responseBody);
                        message.reply(response).wait();
                    } else {
                        auto response = http_response(status_codes::OK);
                        addCorsHeader(response);

                        // create JSON body
                        auto responseBody = json::value::object();
                        responseBody["message"] = json::value::string("Sledgehammer: Error, no name for formula given, abort solving.");
                        response.set_body(responseBody);
                        message.reply(response).wait();
                        return;
                    }

                    std::string dimacsFileName = std::string(MOUNT_PATH) + resourceName;
                    pid_t pidArray[NUM_DPLL_INSTANCES];
                    char numCopiesAsStr[100];
                    sprintf(numCopiesAsStr, "%d", NUM_DPLL_INSTANCES);

                    char* callArgs[5] = { DPLL_EXE_PATH, strdup(numCopiesAsStr), NULL,
                        (char*)dimacsFileName.c_str(), NULL };

                    unsigned int i;
                    for (i = 0; i < NUM_DPLL_INSTANCES; i++) {
                        char p[2] = { static_cast<char>(48 + i) /* value of i as character [48 = '0'] */,
                            0 /* 0 terminates string */ };
                        callArgs[2] = p;
                        if (0 == (pidArray[i] = fork())) {
                            if (execve(DPLL_EXE_PATH, callArgs, NULL) < 0) {
                                perror("execve");
                            }
                        }
                    }
                    printf("Solvers started\n");
                    fflush(0);

                    // Wait for first DPLL instance to terminate - this one
                    // will have written a complete solution file before termination.
                    pid_t firstPid = wait(NULL);

                    // All other DPLL instances can be killed, because we have the
                    // solution from the first instance that terminated.
                    for ( i = 0; i < NUM_DPLL_INSTANCES; i++ ) {
                        if ( pidArray[i] != firstPid ) {
                            kill(pidArray[i],SIGKILL);
                        }
                    }

                }
            }
        }
    }
}

// --------------------------------------------------------------------------------------

void SledgehammerMicroService::handlePatch(http_request message)
{
}

// --------------------------------------------------------------------------------------

void SledgehammerMicroService::handleDelete(http_request message)
{
}

// --------------------------------------------------------------------------------------

void SledgehammerMicroService::handleHead(http_request message)
{
}

// --------------------------------------------------------------------------------------

void SledgehammerMicroService::handleOptions(http_request message)
{
    auto path = requestPath(message);
    std::cout << "[Info] Received 'OPTIONS' Request" << std::endl;
    auto response = http_response(status_codes::OK);

    response.headers().add(U("Access-Control-Allow-Origin"), U("*"));
    response.headers().add(U("Access-Control-Allow-Methods"), U("OPTIONS, PUT, POST, GET"));
    response.headers().add(U("Access-Control-Max-Age"), U("86400"));
    response.headers().add(U("Access-Control-Allow-Credentials"), U("true"));
    response.headers().add(U("Access-Control-Allow-Headers"), U("*"));

    auto jsonresponse = json::value::object();
    jsonresponse["version"] = json::value::string("1");
    jsonresponse["status"] = json::value::string("ready");

    response.set_body(jsonresponse);
    message.reply(response);
}

// --------------------------------------------------------------------------------------

void SledgehammerMicroService::handleTrace(http_request message)
{