iOS Development / Concept Notes

<Mac OS> About Multitasking on the Mac OS

studying develop 2020. 3. 29. 18:06

[https://cdn.ttgtmedia.com/searchNetworking/downloads/InsideIOS.pdf]

 

About Multitasking on the Mac OS

This chapter describes the basic concepts underlying multitasking and how Multiprocessing Services uses them on Macintosh computers.

You should read this chapter if you are not familiar with multitasking or multiprocessing concepts. Note that this chapter covers mostly concepts rather than implementation or programming details. For information about actually using the Multiprocessing Services API in your application, see Using Multiprocessing Services.

Multitasking Basics

Multitasking is essentially the ability to do many things concurrently. For example, you may be working on a project, eating lunch, and talking to a colleague at the same time. Not everything may be happening simultaneously, but you are jumping back and forth, devoting your attention to each task as necessary.

Multitasking is essentially the ability to do many things concurrently.

 

In programming, a task is simply an independent execution path. On a computer, the system software can handle multiple tasks, which may be applications or even smaller units of execution. For example, the system may execute multiple applications, and each application may have independently executing tasks within it. Each such task has its own stack and register set.

 

In programming, a task is simply an independent execution path. The system software can handle multiple tasks, which may be applications or even smaller units of execution; the system may run multiple applications, and each application may have independently executing tasks within it. Each task has its own stack and register set.
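As a concrete analogue of "a task is an independent execution path with its own stack", the sketch below spawns a thread in Python (standing in for a task; this is not the Multiprocessing Services API, and all names are illustrative):

```python
import threading

# Each thread is an independent execution path with its own stack,
# analogous to a "task" in the sense described above.
result = []

def task():
    # Runs independently of the main flow, on its own stack.
    result.append(sum(range(1, 11)))

t = threading.Thread(target=task)
t.start()
t.join()          # the main path waits for the task to finish
print(result[0])  # 55
```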

 

Multitasking may be either cooperative or preemptive. Cooperative multitasking requires that each task voluntarily give up control so that other tasks can execute. An example of cooperative multitasking is an unsupervised group of children wanting to look at a book. Each child can theoretically get a chance to look at the book. However, if a child is greedy, they may spend an inordinate amount of time looking at the book or refuse to give it up altogether. In such cases, the other children are deprived.

 

Multitasking may be either cooperative or preemptive. Cooperative multitasking requires each task to voluntarily give up control so that other tasks can run. As an example, think of an unsupervised group of children who all want to look at one book.

 

Preemptive multitasking allows an external authority to delegate execution time to the available tasks. Preemptive multitasking would be the case where a teacher (or other supervisor) was in charge of letting the children look at the book. They would assign the book to each child in turn, making sure that each one got a chance to look at it. The teacher could vary the amount of time each child got to look at the book depending on various circumstances (for example, some children may read more slowly and therefore need more time).

 

Preemptive multitasking lets an external authority delegate execution time to the available tasks. Think of the case where a teacher is in charge of letting the children look at the book.

 

The Mac OS 8 operating system implements cooperative multitasking between applications. The Process Manager can keep track of the actions of several applications. However, each application must voluntarily yield its processor time in order for another application to gain it. An application does so by calling WaitNextEvent, which cedes (gives up) control of the processor until an event occurs that requires the application’s attention.

 

Mac OS 8 implements cooperative multitasking between applications. The Process Manager can track several applications, but each application must voluntarily yield the processor. It does so by calling WaitNextEvent, ceding control until an event occurs that requires the application's attention.

 

Multiprocessing Services allows you to create preemptive tasks within an application (or process). For example, you can create tasks within an application to process information in the background (such as manipulating an image or retrieving data over the network) while still allowing the user to interact with the application using the user interface.

 

Multiprocessing Services lets you create preemptive tasks within an application (or process). For example, we can create a task that processes data in the background while the user keeps interacting with the UI.
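A rough modern analogue of such an in-process preemptive task: background work (a fake "filter" here) runs on a worker thread while the main path stays free to service events. A sketch only; the names are assumptions, not the Multiprocessing Services API:

```python
import threading

# Background "task" doing work while the main flow stays responsive.
filtered = []
done = threading.Event()

def background_filter(data):
    filtered.extend(sorted(data))   # the "long-running" work
    done.set()                      # notify completion

threading.Thread(target=background_filter, args=([3, 1, 2],)).start()

# ... the "main event loop" could keep handling UI here ...
done.wait()
print(filtered)  # [1, 2, 3]
```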

 

Note: The definition of task in this document is analogous to the use of the term thread in some other operating systems such as UNIX®. In older documentation, Apple has sometimes referred to separate units of execution as threads. For example, the Thread Manager allows you to create cooperatively scheduled threads within a task. You should not confuse these threads with the preemptively scheduled tasks created by Multiprocessing Services.

 

 


Tasks and Address Spaces

On the Mac OS, all applications are assigned a process or application context at runtime. The process contains all the resources required by the application, such as allocated memory, stack space, plug-ins, non-global data, and so on. Tasks created with Multiprocessing Services are automatically associated with the creating application’s process, as shown in Figure 1-1.

Figure 1-1  Tasks within processes

All resources within a process occupy the same address space, so tasks created by the same application are free to share memory. For example, if you want to divide an image filtering operation among multiple identical tasks, you can allocate space for the entire image in memory, and then assign each task the address and length of the portion it should process.

 

All resources within a process occupy the same address space, so tasks created by the same application can share memory.

 


Task Scheduling

 

Multitasking environments require one or more task schedulers, which control how processor time (for one or more processors) is divided among available tasks. Each task you create is added to a task queue. The task scheduler then assigns processor time to each task in turn.

 

A multitasking environment requires one or more task schedulers, which control how processor time is divided among the available tasks. Each task you create is added to a task queue, and the scheduler then assigns processor time to each task in turn.

 

As seen by the Mac OS 9.0 task scheduler, all cooperatively multitasked programs (that is, all the applications that are scheduled by the Process Manager) occupy a single preemptive task called the Mac OS task, as shown in Figure 1-2.

 

As the Mac OS 9 task scheduler sees it, all cooperatively multitasked programs (that is, everything scheduled by the Process Manager) occupy a single preemptive task called the Mac OS task (shown in Figure 1-2 below).

 

Figure 1-2  The Mac OS task and other preemptive tasks

For example, if your cooperatively scheduled application creates a task, the task is preemptively scheduled. The application task (containing the main event loop) is not preemptively scheduled, but it resides within the Mac OS task, which is preemptively scheduled. Within the Mac OS task, the application must cooperatively share processor time with any other applications that are currently running.

 

For example, when a cooperatively scheduled application creates a task, that task is preemptively scheduled. The application task itself is not; it lives inside the Mac OS task, which is preemptively scheduled, and within the Mac OS task the application must cooperatively share the processor with the other currently running applications.

 

A task executes until it completes, is blocked, or is preempted. A task is blocked when it is waiting for some event or data. For example, a task may need output from a task that has not yet completed, or it may need certain resources that are currently in use by another task.

 

A task runs until it completes, blocks, or is preempted. It blocks when waiting for some event or data, for example output from a task that has not finished yet, or a resource currently in use by another task.

 

A blocked task is removed from the task queue until it becomes eligible for execution. The task becomes eligible if either the event that it was waiting for occurs or the waiting period allowed for the event expires.

If the task does not complete or block within a certain amount of time (determined by the scheduler), the task scheduler preempts the task, placing it at the end of the task queue, and gives processor time to the next task in the queue.

 

A blocked task is removed from the task queue until it becomes eligible to run again, which happens when the awaited event occurs or the waiting period for it expires. If a task neither completes nor blocks within the time the scheduler allows, the scheduler preempts it, moves it to the end of the task queue, and gives processor time to the next task in the queue.

 

(Note to self: maybe my attempt to stop a network task didn't take, because while the connection request was waiting, the task was blocked in the task queue and the new connection I created got preempted?? Hmm, but it seemed like it never even started, so probably not. It would be nice to be able to see what is currently in the task queue.)

 

Note that if the main application task is blocked while waiting for a Multiprocessing Services event, the blocking application does not get back to its event loop until the event finally occurs. This delay may be unacceptable for long-running tasks. Therefore, in general your application should poll for Multiprocessing Services events from its event loop, rather than block while waiting for them. The task notification mechanisms described in Shared Resources and Task Synchronization are ideal for this purpose.

 

If the main application task blocks while waiting for a Multiprocessing Services event, the blocked application does not get back to its event loop until that event finally occurs. This delay can be unacceptable for long-running tasks. So in general your application should poll for Multiprocessing Services events from its event loop rather than block waiting for them. The task notification mechanisms are described in the following link: [Shared Resources and Task Synchronization]
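The poll-instead-of-block pattern above can be sketched like this: from its "event loop", the application checks the completion flag without blocking and keeps doing other work if the task has not finished. Purely illustrative names; not the real API:

```python
import threading, time

task_done = threading.Event()
polls = 0

def long_task():
    time.sleep(0.1)        # simulated long-running task
    task_done.set()

threading.Thread(target=long_task).start()

while not task_done.is_set():   # non-blocking check each loop pass
    polls += 1                  # the event loop stays responsive
    time.sleep(0.01)            # ... service other events here ...
print(task_done.is_set())  # True
```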

 

 

Note: In the future, application tasks will run as individual preemptive tasks, rather than within the Mac OS task. However, calls to non-reentrant Mac OS system software functions will cause the task to be blocked for the duration of the call, in a manner similar to a remote procedure call. See Making Remote Procedure Calls for more information.

 

 


Shared Resources and Task Synchronization

Although each created task may execute separately, it may need to share information or otherwise communicate with other tasks. For example, Task 1 may write information to memory that will be read by Task 2. In order for such operations to occur successfully, some synchronization method must exist to make sure that Task 2 does not attempt to read the memory until Task 1 has completed writing the data and until Task 2 knows that valid data actually exists in memory. The latter scenario can be an issue when using multiple processors, because the PowerPC architecture allows for writes to memory to be deferred. In addition, if multiple tasks are waiting for another task to complete, synchronization is necessary to ensure that only one task can respond at a time.

Multitasking environments offer several ways for tasks to coordinate and synchronize with each other. The sections that follow describe four notification mechanisms (or signaling mechanisms), which allow tasks to pass information between them, and one task sharing method.

Note that the time required to perform the work in a given request should be much more than the amount of time it takes to communicate the request and its results. Otherwise, delegating work to tasks may actually reduce overall performance. Typically the work performed should be greater than the intertask signaling time (20-50 microseconds).

Note that you should avoid creating your own synchronization or sharing methods, because they may work on some Mac OS implementations but not on others.

Semaphores

A semaphore is a single variable that can be incremented or decremented between zero and some specified maximum value. The value of the semaphore can communicate state information. A mailbox flag is an example of a semaphore. You raise the flag to indicate that a letter is waiting in the mailbox. When the mailman picks up the letter, they lower the flag again. You can use semaphores to keep track of how many occurrences of a particular thing are available for use.

Binary semaphores, which have a maximum value of one, are especially efficient mechanisms for indicating to some other task that something is ready. When a task or application has finished preparing data at some previously agreed to location, it raises the value of a binary semaphore that the target task waits on. The target task lowers the value of the semaphore, performs any necessary processing, and raises the value of a different binary semaphore to indicate that it is through with the data.

Semaphores are quicker and less memory intensive than other notification mechanisms, but due to their size they can convey only limited information.
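The binary-semaphore handoff described above can be sketched with two semaphores: the producer raises `data_ready`, the consumer processes the data and raises `data_done`. Variable names are assumptions for this sketch, not the Multiprocessing Services API:

```python
import threading

data_ready = threading.Semaphore(0)   # producer -> consumer
data_done  = threading.Semaphore(0)   # consumer -> producer
shared = {"value": 0, "result": 0}

def consumer():
    data_ready.acquire()              # wait for the "flag" to go up
    shared["result"] = shared["value"] * 2
    data_done.release()               # signal "through with the data"

threading.Thread(target=consumer).start()

shared["value"] = 21                  # prepare the data first...
data_ready.release()                  # ...then raise the semaphore
data_done.acquire()
print(shared["result"])  # 42
```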

Message Queues

A message queue is a collection of data (messages) that must be processed by tasks in a first-in, first-out order. Several tasks can wait on a single queue, but only one will obtain any particular message. Messages are useful for telling a task what work to do and where to look for information relevant to the request being made. They are also useful for indicating that a given request has been processed and, if necessary, what the results are.

Typically a task has two message queues, one for input and one for output. You can think of message queues as In boxes and Out boxes. For example, your In box at work may contain a number of papers (messages) indicating work to do. After completing a job, you would place another message in the Out box. Note that if you have more than one person assigned to an In box/Out box pair, each person can work independently, allowing data to be processed in parallel.

In Multiprocessing Services, a message is 96 bits of data that can convey any desired information.

Message queues incur more overhead than the other two notification mechanisms. If you must synchronize frequently, you should try to use semaphores instead of message queues whenever possible.
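The In box / Out box idea above, sketched with thread-safe queues: several workers drain one input queue (each message goes to exactly one of them) and post results to an output queue. Names are illustrative, not the Multiprocessing Services API:

```python
import queue, threading

in_box = queue.Queue()
out_box = queue.Queue()
for job in range(1, 5):
    in_box.put(job)                   # four work requests

def worker():
    while True:
        try:
            job = in_box.get_nowait() # one message, one worker
        except queue.Empty:
            return
        out_box.put(job * job)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()

total = sum(out_box.get() for _ in range(4))
print(total)  # 1 + 4 + 9 + 16 = 30
```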

Event Groups

An event group is essentially a group of binary semaphores. You can use event groups to indicate a number of simple events. For example, a task running on a server may need to be aware of multiple message queues. Instead of trying to poll each one in turn, the server task can wait on an event group. Whenever a message is posted on a queue, the poster can also set the bit corresponding to that queue in the event group. Doing so notifies the task, and it then knows which queue to access to extract the message. In Multiprocessing Services, an event group consists of thirty-two 1-bit flags, each of which may be set independently. When a task receives an event group, it receives all 32 bits at once (that is, it cannot poll individual bits), and all the bits in the event group are subsequently cleared.
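An event group can be modeled as 32 one-bit flags packed into an integer: posters set individual bits, and the receiver gets all 32 bits at once, after which the group is cleared, as the text describes. A sketch only, not the real MP Services types:

```python
import threading

class EventGroup:
    def __init__(self):
        self._bits = 0
        self._lock = threading.Lock()
    def set_bit(self, n):             # poster marks "queue n has a message"
        with self._lock:
            self._bits |= (1 << n)
    def receive(self):                # all bits at once, then cleared
        with self._lock:
            bits, self._bits = self._bits, 0
            return bits

events = EventGroup()
events.set_bit(0)                     # message posted on queue 0
events.set_bit(3)                     # message posted on queue 3
received = events.receive()
print(bin(received))                  # 0b1001
print(events.receive())               # 0 (already cleared)
```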

Kernel Notifications

A kernel notification is a set of simple notification mechanisms (semaphores, message queues, and event groups) which can be notified with only one signal. For example, a kernel notification might contain both a semaphore and a message queue. When you signal the kernel notification, the semaphore is signaled and a message is sent to the specified queue. A kernel notification can contain one of each type of simple notification mechanism.

You use kernel notifications to hide complexity from the signaler. For example, say a server has three queues to process, ranked according to priority (queue 1 holds the most important tasks, queue 2 holds lesser priority tasks, and queue 3 holds low priority tasks). The server can wait on an event group, which indicates when a task is posted to a queue.

If you do not use a kernel notification, then when a client has a message to deliver, it must both send the message to the appropriate queue and signal the event group with the correct priority value. Doing so requires the client to keep track of which queue to send to as well as the proper event group bit to set.

A simpler method is to set up three kernel notifications, one for each priority. To signal a high priority message, the client simply loads the message and signals the high priority kernel notification.
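The idea of hiding complexity from the signaler can be sketched as a tiny composite: one `signal` call both enqueues a message and releases a semaphore, so the client performs a single step per priority, as in the three-queue example above. All names here are invented for the sketch:

```python
import queue, threading

class KernelNotification:
    def __init__(self):
        self.sem = threading.Semaphore(0)
        self.msgs = queue.Queue()
    def signal(self, message):        # one call notifies both mechanisms
        self.msgs.put(message)
        self.sem.release()
    def wait(self):
        self.sem.acquire()
        return self.msgs.get()

high = KernelNotification()           # one notification per priority
high.signal("urgent job")             # the client performs a single step
msg = high.wait()
print(msg)  # urgent job
```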

Critical Regions

In addition to notification mechanisms, you can also specify critical regions in a multitasking environment. A critical region is a section of code that can be accessed by only one task at a time. For example, say part of a task’s job is to search a data tree and modify it. If multiple tasks were allowed to search and try to modify the tree at the same time, the tree would quickly become corrupted. An easy way to avoid the problem is to form a critical region around the tree searching and modification code. When a task tries to enter the critical region it can do so only if no other task is currently in it, thus preserving the integrity of the tree.

Critical regions differ from semaphores in that critical regions can handle recursive entries and code that has multiple entry points. For example, if three functions func1, func2, and func3 access some common resource (such as the tree described above), but never call each other, then you can use a semaphore to synchronize access. However, suppose func3 calls func1 internally. In that case, func3 would obtain the semaphore, but when it calls func1, it will deadlock. Synchronizing using a critical region instead allows the same task to enter multiple times, so func1 can enter the region again when called from func3.
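The func1/func3 scenario above can be sketched with a reentrant lock standing in for the critical region: the same task may enter multiple times, so func3 calling func1 does not deadlock, whereas a plain lock (semaphore) would. Illustrative only:

```python
import threading

region = threading.RLock()   # reentrant: recursive entry is allowed
tree = []

def func1():
    with region:             # second entry by the same thread succeeds
        tree.append(1)

def func3():
    with region:
        tree.append(3)
        func1()              # recursive entry into the critical region

func3()
print(tree)  # [3, 1]
```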

Because critical regions introduce forced linearity into task execution, improper use can create bottlenecks that reduce performance. For example, if the tree search described above constituted the bulk of a task’s work, then the tasks would spend most of their time just trying to get permission to enter the critical region, at great cost to overall performance. A better approach in this case might be to use different critical regions to protect subtrees. You can then have one task search one part of the tree while others were simultaneously working on other parts of the tree.

 

 


Multiple Independent Tasks

In many cases, you can break down applications into different sections that do not necessarily depend on each other but would ideally run concurrently. For example, your application may have one code section to render images on the screen, another to do graphical computations in the background, and a third to download data from a server. Each such section is a natural candidate for preemptive tasking. Even if only one processor is available, it is generally advantageous to have such independent sections run as preemptive tasks. The application can notify the tasks (using any of the three notification mechanisms) and then poll for results within its event loop.

 

This part looks like my case of sending requests to the server as multiple tasks. But how do I manage the task queue and the scheduler myself??

 

In the end I needed a way to schedule tasks, and the two links below might be the answer.

[https://developer.apple.com/documentation/foundation/nsbackgroundactivityscheduler]

[https://www.raywenderlich.com/1197-nstask-tutorial-for-os-x]

Parallel Tasks With Parallel I/O Buffers

If you can divide the computational work of your application into several similar portions, each of which takes about the same amount of time to complete, you can create as many tasks as the number of processors and divide the work evenly among the tasks (“divide and conquer”). An example would be a filtering task on a large image. You could divide the image into as many equal sections as there are processors and have each do a fraction of the total work.

This method for using Multiprocessing Services involves creating two buffers per task: one for receiving work requests and one for posting results. You can create these buffers using either message queues or semaphores.

As shown in Figure 1-3, the application splits the data evenly among the tasks and posts a work request, which defines the work a task is expected to perform, to each task’s input buffer. Each task asks for a work request from its input buffer, and blocks if none is available. When a request arrives, the task performs the required work and then posts a message to its output buffer indicating that it has finished and providing the application with the results.

Figure 1-3  Parallel tasks with parallel I/O buffers
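The divide-and-conquer scheme above can be sketched by splitting an "image" (a list of numbers here) into as many equal slices as there are workers and filtering each slice in a separate task. A thread pool stands in for the per-task I/O buffers; all names are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

image = list(range(1, 9))             # stand-in for pixel data
workers = 4
size = len(image) // workers

def filter_slice(w):
    # Each task gets the offset and length of its own portion.
    return [p * 10 for p in image[w * size:(w + 1) * size]]

with ThreadPoolExecutor(max_workers=workers) as pool:
    parts = list(pool.map(filter_slice, range(workers)))

output = [p for part in parts for p in part]
print(output)  # [10, 20, 30, 40, 50, 60, 70, 80]
```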

 


Additional links

[https://github.com/luoxiu/Schedule] It looks like this person built a task scheduler themselves?

 


[https://m.blog.naver.com/PostView.nhn?blogId=hucnpy&logNo=70085139069&proxyReferer=https%3A%2F%2Fwww.google.com%2F]

 

Mac OS

Mac OS 9 uses cooperative scheduling, where one process controls multiple cooperative threads. The kernel schedules the process using a round-robin scheduling algorithm. Then, each process has its own copy of the Thread Manager that schedules each thread. The kernel then, using a preemptive scheduling algorithm, schedules all tasks to have processor time. Mac OS X [5] uses Mach (kernel) threads, and each thread is linked to its own separate process. If threads are being cooperative, then only one can run at a time. The thread must give up its right to the processor for other processes to run.

 

Mac OS 9 uses cooperative scheduling, in which one process controls multiple cooperative threads. The kernel schedules processes with a round-robin algorithm; each process then has its own copy of the Thread Manager, which schedules its threads; and the kernel preemptively schedules all tasks for processor time. Mac OS X uses Mach (kernel) threads, each linked to its own separate process. If threads are cooperative, only one can run at a time, and a thread must give up its right to the processor for other processes to run.

 

A cooperative thread scheduler waits until the running thread hands the CPU over to another thread. A virtual machine using cooperative thread scheduling is more prone to starvation than one using preemptive scheduling, because a high-priority uncooperative thread can monopolize the CPU.

 

Round-robin scheduling is a preemptive CPU scheduling algorithm that allocates the CPU in fixed time quanta, usually around 10 ms to 100 ms. A process that has used up its quantum is moved to the back of the ready queue. Context-switch overhead is high, but response times are short, which makes it well suited to real-time systems.
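A toy round-robin simulation of that description: each process runs for one time quantum, then is moved to the back of the ready queue until its remaining work reaches zero. Purely illustrative:

```python
from collections import deque

ready = deque([["A", 3], ["B", 1], ["C", 2]])   # (name, remaining units)
quantum = 1
schedule = []

while ready:
    name, remaining = ready.popleft()
    schedule.append(name)                # this process gets one quantum
    remaining -= quantum
    if remaining > 0:
        ready.append([name, remaining])  # preempted: back of the queue
    # else: finished and leaves the queue
print(" ".join(schedule))  # A B C A C A
```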

 

Operating System        Preemption   Algorithm
Windows 3.1x            None         Cooperative scheduler
Windows 95, 98, ME      Half         Only for 32-bit operations
Windows NT, XP, Vista   Yes          Multilevel feedback queue
Mac OS pre-9            None         Cooperative scheduler
Mac OS X                Yes          Mach (kernel)
Linux pre-2.5           Yes          Multilevel feedback queue
Linux 2.5-2.6.23        Yes          O(1) scheduler
Linux post-2.6.23       Yes          Completely Fair Scheduler
Solaris                 Yes          Multilevel feedback queue
NetBSD                  Yes          Multilevel feedback queue
FreeBSD                 Yes          Multilevel feedback queue