Greetings

I'm a 4th-year Ph.D. candidate in Computer Science at the University of Toronto, working under the supervision of Prof. Khai Truong.

My research centers on human-AI interaction, with an emphasis on accessibility and creativity support, particularly in enhancing "music accessibility" for d/Deaf and hard-of-hearing individuals. One of my main projects involves song signing to support culturally responsive content creation and encourage collaboration between d/Deaf and non-d/Deaf artists. Another aspect of my work focuses on enhancing people's well-being. I am engaged in projects that support individuals with dementia in their out-of-home experiences and encourage mindful eating behaviours among children.

I completed my B.Sc. in Computer Science and Engineering at Ewha Womans University, where I was advised by Prof. Uran Oh (Human-Computer Interaction Lab) and Prof. Hyokyung Bahn (Distributed Computing and Operating System Lab). Additionally, I worked as a research intern at the Samsung AI Centre Toronto under the guidance of Dr. Iqbal Mohomed, and at NAVER AI (HCI group) with Dr. Young-Ho Kim.

Computer Architecture 2

<Pipelining>

Divide an instruction into multiple steps and overlap the execution of different instructions across clock cycles.

Do not wait until all of the steps of one instruction are finished before starting the next.

Pros: Throughput +

Cons: Latency is not improved, needs additional HW


<5 Steps of a Pipelined Processor>

IF - ID - EX - MEM - WB (Instruction Fetch - Instruction Decode - Execute - Memory access - Write Back)

Multiple instructions can be executed at the same time.
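
A quick sanity check of the throughput claim, assuming an ideal 5-stage pipeline with no hazards and single-cycle stages (the instruction count below is just illustrative): n instructions take n + 4 cycles instead of 5n, so throughput approaches 5x while a single instruction's latency is unchanged.

```c
#include <stdio.h>

/* Cycles to run n instructions on a k-stage pipeline, assuming no
 * hazards/stalls: the first instruction fills the pipe (k cycles),
 * then one instruction completes per cycle. */
static unsigned long pipelined_cycles(unsigned long n, unsigned k) {
    return n == 0 ? 0 : k + (n - 1);
}

/* Without pipelining, every instruction takes all k steps serially. */
static unsigned long sequential_cycles(unsigned long n, unsigned k) {
    return n * k;
}

int main(void) {
    unsigned long n = 1000000;   /* hypothetical instruction count */
    unsigned k = 5;              /* IF-ID-EX-MEM-WB */
    printf("pipelined : %lu cycles\n", pipelined_cycles(n, k));
    printf("sequential: %lu cycles\n", sequential_cycles(n, k));
    printf("speedup   : %.2fx\n",
           (double)sequential_cycles(n, k) / pipelined_cycles(n, k));
    return 0;
}
```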


<Pipeline Hazards>

1) Structural hazard

Two instructions need to use the same resource simultaneously.

Solution) Von Neumann architecture -> Harvard architecture: separate instruction/data MEM


2) Control hazard

Need to decide which instruction to fetch before the branch condition is evaluated - branch hazard

Solution) Stall / Predict / Delayed branch


3) Data hazards

An instruction uses data before it is ready.

Solution) Stall / Forwarding / Reordering (code scheduling) - see the forwarding sketch below
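
A minimal sketch of the forwarding decision in the classic 5-stage pipeline: a source register of the instruction in EX is taken from the EX/MEM or MEM/WB pipeline latch when an older instruction is about to write it. The field names (ex_mem_rd, etc.) are invented for this sketch.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative forwarding check for the classic 5-stage pipeline.
 * Register 0 is hardwired to zero, so it is never forwarded. */
typedef enum { FWD_NONE, FWD_FROM_EX_MEM, FWD_FROM_MEM_WB } fwd_t;

static fwd_t forward_for(unsigned src_reg,
                         bool ex_mem_regwrite, unsigned ex_mem_rd,
                         bool mem_wb_regwrite, unsigned mem_wb_rd) {
    if (src_reg == 0) return FWD_NONE;
    /* Prefer the younger result sitting in the EX/MEM latch. */
    if (ex_mem_regwrite && ex_mem_rd == src_reg) return FWD_FROM_EX_MEM;
    if (mem_wb_regwrite && mem_wb_rd == src_reg) return FWD_FROM_MEM_WB;
    return FWD_NONE;
}

int main(void) {
    /* add x1, x2, x3   (now in MEM stage, will write x1)
     * sub x4, x1, x5   (now in EX stage, reads x1) -> forward from EX/MEM */
    fwd_t f = forward_for(/*src_reg=*/1, true, 1, false, 0);
    printf("forward source: %d\n", f);   /* prints 1 = FWD_FROM_EX_MEM */
    return 0;
}
```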


<Locality>

Exploit locality to make data accesses faster.

- Temporal locality: recently accessed data is likely to be accessed again soon

- Spatial locality: data near recently accessed data is likely to be accessed soon
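
For example, summing a 2-D array row by row walks consecutive addresses (spatial locality) and reuses the sum and loop indices every iteration (temporal locality), while column order jumps N*8 bytes per access. A small illustrative comparison:

```c
#include <stdio.h>

#define N 1024
static double a[N][N];

/* Row-major traversal: consecutive memory addresses -> good spatial
 * locality; `sum` and the loop indices are reused every iteration ->
 * temporal locality. */
static double sum_row_major(void) {
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

/* Column-major traversal of the same data: each access jumps N*8 bytes,
 * so far fewer accesses hit the cache line that was just fetched. */
static double sum_col_major(void) {
    double sum = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}

int main(void) {
    printf("%f %f\n", sum_row_major(), sum_col_major());
    return 0;
}
```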


<Direct-Mapped Cache>

Data fetched from MEM maps to exactly one cache block.

Cache index = Block address mod # of blocks
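
A tiny sketch of that mapping, assuming the number of blocks is a power of two (so the mod is just the low bits of the block address):

```c
#include <stdio.h>

/* Direct-mapped placement: each memory block maps to exactly one cache
 * block.  num_blocks is assumed to be a power of two. */
static unsigned cache_index(unsigned long block_addr, unsigned num_blocks) {
    return (unsigned)(block_addr % num_blocks);
}

int main(void) {
    /* e.g. block address 200 in a 128-block cache -> index 72 */
    printf("%u\n", cache_index(200, 128));
    return 0;
}
```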


<Page Fault>

The requested page is not in MEM (the page table entry is invalid)

- Write back:

Page fault handler: locate the page on DISK - choose a page to replace - if dirty, write it back to DISK - read the page from DISK into MEM
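
A toy simulation of those four steps, with made-up structures and a trivial round-robin replacement policy (not a real OS handler):

```c
#include <stdio.h>

#define NUM_PAGES  8    /* virtual pages (toy numbers) */
#define NUM_FRAMES 2    /* physical frames             */

/* Hypothetical page-table entry for this sketch. */
typedef struct { int valid, dirty; int frame; } pte_t;

static pte_t page_table[NUM_PAGES];
static int   frame_owner[NUM_FRAMES];  /* which vpn occupies each frame, -1 = free */
static int   next_victim;              /* round-robin replacement */

static void page_fault_handler(int vpn) {
    /* 1) the faulting page's location on DISK would be looked up here */
    /* 2) choose a frame/page to replace */
    int frame   = next_victim;
    next_victim = (next_victim + 1) % NUM_FRAMES;
    int victim  = frame_owner[frame];

    if (victim >= 0) {
        /* 3) if the victim is dirty, write it back to DISK before reuse */
        if (page_table[victim].dirty)
            printf("  write back vpn %d to DISK\n", victim);
        page_table[victim].valid = 0;
    }
    /* 4) read the requested page from DISK into MEM */
    printf("  read vpn %d from DISK into frame %d\n", vpn, frame);
    page_table[vpn] = (pte_t){ .valid = 1, .dirty = 0, .frame = frame };
    frame_owner[frame] = vpn;
}

int main(void) {
    for (int f = 0; f < NUM_FRAMES; f++) frame_owner[f] = -1;

    int refs[] = { 0, 1, 0, 2, 1 };    /* toy reference string, all writes */
    for (int i = 0; i < 5; i++) {
        int vpn = refs[i];
        printf("access vpn %d\n", vpn);
        if (!page_table[vpn].valid) page_fault_handler(vpn);
        page_table[vpn].dirty = 1;     /* treat every access as a store */
    }
    return 0;
}
```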



<Page Table>

Translates a virtual page to a physical page

Indexed by virtual page #


<Questions>

1. 128 blocks, 32 bytes/block, 32-bit byte address -> tag, index, offset bits?

Number of blocks = 128 = 2^7 -> Cache index = 7 bits

Block size = 32 bytes = 2^5 -> Byte offset = 5 bits

Since the full address is 32 bits,

Tag = 32 - (7 + 5) = 20 bits


** Block address = Tag + Index = Byte address / [Block size]

** Cache block number = Cache index = Block address mod # of blocks


** 32-bit address = Tag + Cache index + Byte offset
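
A quick check of the 20/7/5 split using shifts and masks (the example address is arbitrary):

```c
#include <stdio.h>
#include <stdint.h>

/* 128 blocks = 2^7 -> 7 index bits; 32-byte blocks = 2^5 -> 5 offset bits;
 * tag = 32 - 7 - 5 = 20 bits. */
#define OFFSET_BITS 5
#define INDEX_BITS  7

int main(void) {
    uint32_t addr   = 0x12345678;                       /* arbitrary example address */
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);                 /* low 5 bits  */
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1); /* next 7 bits */
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);               /* top 20 bits */

    printf("addr=0x%08x tag=0x%05x index=%u offset=%u\n", addr, tag, index, offset);
    return 0;
}
```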


2. Page size = 16 Kbytes, 32-bit virtual address -> virtual page # and page offset bits?

1 page = 16K = 2^4 * 2^10 bytes = 2^14 bytes -> Page offset = 14 bits

Virtual page number = 32 bits - 14 bits = 18 bits


** Virtual address = Virtual page # + Page offset

** TLB - MEM - PT - DISK


<TLB>


Translation Lookaside Buffer.

A cache of the page table, kept in the CPU

Keeps the entries for pages with high locality


** TLB MISS: an exception, handled in OS kernel mode

- Page in MEM: TLB miss handler

- Page not in MEM: Page fault
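
A rough sketch of that lookup order (TLB first, then the page table in MEM, otherwise a page fault), using the 18-bit VPN / 14-bit offset split from question 2; the tiny fully associative TLB and its round-robin replacement are placeholders for illustration:

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_OFFSET_BITS 14           /* 16 KB pages, as in question 2 */
#define TLB_ENTRIES      4            /* toy, fully associative        */
#define NUM_VPAGES       (1u << 18)   /* 18-bit virtual page number    */

typedef struct { int valid; uint32_t vpn, pfn; } tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static struct { int valid; uint32_t pfn; } page_table[NUM_VPAGES];
static int tlb_next;                  /* trivial round-robin TLB replacement */

/* Translate a virtual address; returns 0 on page fault. */
static int translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn    = vaddr >> PAGE_OFFSET_BITS;
    uint32_t offset = vaddr & ((1u << PAGE_OFFSET_BITS) - 1);

    /* 1) TLB lookup */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *paddr = (tlb[i].pfn << PAGE_OFFSET_BITS) | offset;
            return 1;
        }

    /* 2) TLB miss: walk the page table in MEM */
    if (!page_table[vpn].valid)
        return 0;                     /* 3) page not in MEM -> page fault */

    /* refill the TLB with the translation and complete the access */
    tlb[tlb_next] = (tlb_entry_t){ 1, vpn, page_table[vpn].pfn };
    tlb_next = (tlb_next + 1) % TLB_ENTRIES;
    *paddr = (page_table[vpn].pfn << PAGE_OFFSET_BITS) | offset;
    return 1;
}

int main(void) {
    page_table[3].valid = 1;          /* map virtual page 3 -> physical frame 7 */
    page_table[3].pfn   = 7;

    uint32_t pa;
    uint32_t va = (3u << PAGE_OFFSET_BITS) | 0x123;
    printf("hit=%d pa=0x%08x\n", translate(va, &pa), pa);       /* TLB miss, PT hit */
    printf("hit=%d\n", translate(5u << PAGE_OFFSET_BITS, &pa)); /* page fault */
    return 0;
}
```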


<MISS>

1) Compulsory MISS = cold-start MISS

2) Capacity MISS 

3) Conflict MISS = Collision MISS


<Checking I/O Devices>

Polling vs Interrupts

Polling: the CPU periodically checks the device's status register; small & cheap

Interrupts: the device interrupts the CPU when its status/data register is ready
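
A hedged sketch of the polling style; the device here is simulated with a plain struct (on real HW these would be memory-mapped registers):

```c
#include <stdint.h>
#include <stdio.h>

/* Simulated device registers; the names and layout are made up for
 * illustration. */
typedef struct { volatile uint32_t status, data; } device_t;
#define STATUS_READY 0x1u

/* Polling: spin on the status register until the device is ready,
 * then read the data register.  Cheap in HW, but the CPU burns
 * cycles while waiting. */
static uint32_t poll_read(device_t *dev) {
    while ((dev->status & STATUS_READY) == 0)
        ;                           /* busy-wait */
    dev->status &= ~STATUS_READY;   /* clear READY after consuming data */
    return dev->data;
}

int main(void) {
    device_t dev = { .status = STATUS_READY, .data = 42 };  /* pretend data arrived */
    printf("read %u\n", poll_read(&dev));
    return 0;
}
```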


<Multithreading Mechanisms>

1) Fine-grained: switch threads every clock cycle

2) Coarse-grained: switch threads only on a long stall

3) Simultaneous (SMT): issue instructions from multiple threads in the same cycle, scheduled dynamically


<IPC>

1) Shared MEM processor (SMP)

All processors share one physical address space (each can still run in its own private virtual address space)

2) Message passing

Each processor has its own private physical MEM

HW send() / receive() messages (sketch below)
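
As a software counterpart of the HW send()/receive() primitives, a minimal MPI sketch (assuming an MPI installation; compile with mpicc and run with mpirun -np 2):

```c
#include <stdio.h>
#include <mpi.h>

/* Message passing: each process has its own private memory, and data
 * moves only through explicit send/receive calls. */
int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                                  /* lives only in rank 0's MEM */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```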
