Friday, May 21, 2021

GUID Partition Table - GPT - A Partition Table Type

Introduction
The explanation is as follows. GPT is the newer partition table type, adopted because MBR proved insufficient.
GPT is MBR's successor. It supports up to 128 partitions and addresses so large we won't reach them for decades, maybe never.
GPT is usually used with a UEFI bootloader. The explanation is as follows:
GPT is actually defined by UEFI spec and takes a different approach. Since UEFI isn't meant to be as thin and simple as BIOS (it's basically a small operating system, including the ability to load custom modules), its designers didn't bother with BIOS's "KISS" approach. Instead UEFI includes drivers for FAT-family filesystems and can be configured to load bootloader directly from a partition. Basically the filesystem driver which everyone tried to store in MBR is now provided by the platform, along with a built-in boot manager.

chronyd vs NTP service

Chronyd vs NTP service
The explanation is as follows. In short, chronyd brings the system clock into sync with NTP faster than ntpd does.
Chrony is a different implementation of the network time protocol (NTP) than the network time protocol daemon (ntpd) that is able to synchronize the system clock faster and with better accuracy than ntpd.
Which One to Use When
The explanation is as follows. If the machine is frequently shut down or suspended, chronyd may give better results.
When to use chrony
Chrony would be considered a best match for the systems which are frequently suspended or otherwise intermittently disconnected from a network (mobile and virtual servers etc).

When to use NTP
The NTP daemon (ntpd) should be considered for systems which are normally kept permanently on. Systems which are required to use broadcast or multicast IP, or to perform authentication of packets with the Autokey protocol, should consider using ntpd.
For current cloud and container based environments, where applications are stateless, chronyd plays an important role in syncing the time on each system.
The chrony.conf File
The /etc/chrony.conf file contains the settings for chronyd.

Example
# Default chrony.conf on Ubuntu, sans comments and blank lines
confdir /etc/chrony/conf.d
pool ntp.ubuntu.com        iburst maxsources 4
pool 0.ubuntu.pool.ntp.org iburst maxsources 1
pool 1.ubuntu.pool.ntp.org iburst maxsources 1
pool 2.ubuntu.pool.ntp.org iburst maxsources 2
sourcedir /run/chrony-dhcp
sourcedir /etc/chrony/sources.d
keyfile /etc/chrony/chrony.keys
driftfile /var/lib/chrony/chrony.drift
ntsdumpdir /var/lib/chrony
logdir /var/log/chrony
maxupdateskew 100.0
rtcsync
makestep 1 3
leapsectz right/UTC

The epoll Method

Introduction
We include the following header:
#include <sys/epoll.h>
Why epoll Matters
The explanation is as follows:
2002 — LINUX releases epoll API
The 2.5.44 release of LINUX included a new API: epoll. The epoll API delivered effectively constant socket look-up time. If networking software used epoll and multiplexed connections across a handful of threads at most (as opposed to 1 thread per request), one could expect significantly better resource utilization on a server and handle 10K simultaneous connections well. This solution reduced the latency of packet routing within Linux, which enabled better scalability of open connections.
Usage
1. Create an epollfd with epoll_create()
2. Fill an epoll_event structure with the socket fd and start watching it with epoll_ctl()
3. Call epoll_wait repeatedly inside a loop
4. Act on the sockets according to the event type (EPOLLERR, EPOLLHUP, EPOLLIN, EPOLLOUT)
5. Or dispatch on the socket fd stored in the event data. This approach only suits servers that send a single one-off response.

Design Flaw
An explanation of epoll's design flaw follows. I first saw this article through a link in a question. In short: even if a file descriptor duplicated with dup() is closed with close(), epoll does not consider it closed and keeps delivering events for it.
epoll is broken because it mistakes the "file descriptor" with the underlying kernel object (the "file description"). The issue shows up when relying on the close() semantics to clean up the epoll subscriptions.

epoll_ctl(EPOLL_CTL_ADD) doesn't actually register a file descriptor. Instead it registers a tuple of a file descriptor and a pointer to underlying kernel object. Most confusingly the lifetime of an epoll subscription is not tied to the lifetime of a file descriptor. It's tied to the life of the kernel object.

Due to this implementation quirk calling close() on a file descriptor might or might not trigger epoll unsubscription. If the close call removes the last pointer to kernel object and causes the object to be freed, then it will cause epoll subscription cleanup. But if there are more pointers to kernel object, more file descriptors, in any process on the system, then close will not cause the epoll subscription cleanup. It is totally possible to receive events on previously closed file descriptors.
1. The epoll_create method - size
Example

2. The epoll_create1 method - flags
Example
We do it like this:
int epollfd = epoll_create1(0);
if (epollfd == -1)
{
  perror ("epoll_create");
  ...
}
Example
We do it like this:
int efd = epoll_create1 (EPOLL_CLOEXEC);
3. The epoll_ctl method
Moved to the epoll_ctl metodu article.

4. The epoll_event structure
We define it like this:
struct epoll_event events[MAXEVENTS];
The events Field
It is a bit mask built from the following flags:
EPOLLIN
If EPOLLIN is set, the fd is ready for reading.

EPOLLRDHUP
If EPOLLRDHUP is set, close the connection. I took the EPOLLRDHUP description from here:
EPOLLRDHUP (since Linux 2.6.17)
Stream socket peer closed connection, or shut down writing half of connection. (This flag is especially useful for writing simple code to detect peer shutdown when using Edge Triggered monitoring.)
Example
We do it like this:
int servFd = socket (...);

struct epoll_event epollEvt;
epollEvt.events = EPOLLIN | EPOLLRDHUP;
epollEvt.data.u32 = servFd;
The data Field
It is of type epoll_data.

The epoll_data structure
It is a field of the epoll_event structure. It looks like this:
typedef union epoll_data {
  void        *ptr;
  int          fd;
  uint32_t     u32;
  uint64_t     u64;
} epoll_data_t;
Example
To use the fd field, we do this:
epoll_event ev;
ev.data.fd = timerfd;
Example
To use the ptr field, we define a base class:
class EventHandler {
public:
  virtual ~EventHandler() = default;
  virtual int fd() const = 0;
  virtual void fire() = 0;
};
We register this base class with epoll_ctl() by storing it in the ptr field:
EventHandler* handler = new TimerHandler();
ev.data.ptr = handler;
epoll_ctl(epollfd, EPOLL_CTL_ADD, handler->fd(), &ev);
When iterating over the results, we do this:
int n = epoll_wait (epollfd, events, num_events, -1);
for (int i = 0; i < n; ++i) {
  static_cast<EventHandler*>(events[i].data.ptr)->fire();
}
5. The epoll_wait method
We do it like this:
int n = epoll_wait (epollfd, events, MAXEVENTS, -1);
Example
To iterate over the results, we do this:
int n = epoll_wait (epollfd, events, num_events, -1);
for (int i = 0; i < n; ++i) {
  if (events[i].data.fd == timerfd) {
    handle_timer_callback();
  }
  else {
    // something else
  }
}
Example
To iterate over the results, we do this:
for (i = 0; i < n; i++)
{
  int fd = events[i].data.fd;

  if ((events[i].events & EPOLLERR) ||
      (events[i].events & EPOLLHUP))
  {
    /* An error has occurred on this fd, or the socket is not
       ready for reading (why were we notified then?) */

    close(fd);
    
  }
  else if (fd == listenfd && (events[i].events & EPOLLIN))
  {
    /* We have a notification on the listening socket, which
       means one or more incoming connections. */
    HandleAccept(epollfd, listenfd);
  }
  else if(events[i].events & EPOLLIN)
  {
    /* We have data on the fd waiting to be read. Read and
       display it. We must read whatever data is available
       completely, as we are running in edge-triggered mode
       and won't get a notification again for the same
       data. */

    HandleRead(epollfd, fd);
  }
  else if (events[i].events & EPOLLOUT)
  {
    ...
  }
}
Example
We do it like this:
for (;;) {
  struct epoll_event pollEvent[512];
  int eventCount = epoll_wait (efd, pollEvent, 512, -1);
  for (int i = 0; i < eventCount; ++i) {
    struct epoll_event* curEvent = &pollEvent[i];
    if (curEvent->data.u32 == servFd) {
      int clientFd = accept4 (servFd, NULL, NULL, SOCK_NONBLOCK | SOCK_CLOEXEC);
      struct epoll_event epollEvt;
      epollEvt.events = EPOLLIN | EPOLLRDHUP | EPOLLET;
      epollEvt.data.u32 = clientFd;
      epoll_ctl (efd, EPOLL_CTL_ADD, clientFd, &epollEvt);
      continue;
    }

    int clientFd = curEvent->data.u32;
    char recvBuffer[2048];
    recvfrom (clientFd, recvBuffer, 2048, 0, NULL, NULL);
    char sndMsg[] = "...";
    size_t sndMsgLength = sizeof (sndMsg) - 1;
    struct iovec sndBuffer;
    sndBuffer.iov_base = sndMsg;
    sndBuffer.iov_len = sndMsgLength;
    writev (clientFd, &sndBuffer, 1);
    close (clientFd);
  }
}