# Parallel-Concurrent-and-Distributed-Programming-in-Java

<p align="center">
<img src="https://github.com/ashishgopalhattimare/Parallel-Concurrent-and-Distributed-Programming-in-Java/blob/master/certificate/NDV8ZGXD45BP.png" width="30%">
</p>
<p align="center"> <i>Parallel, Concurrent, and Distributed Programming in Java | Coursera Certification</i> </p>

---

<b>LEGEND</b><br/>
✔️ - Topic covered during the course<br/>
✅ - Assignment completed independently<br/>
☑ - Assignment completed with instructor assistance<br/>

---

## Parallel Programming in Java

<b><u>Week 1 : Task Parallelism</u></b>

✔️ Demonstrate task parallelism using Async/Finish constructs <br/>
✔️ Create task-parallel programs using Java's Fork/Join Framework <br/>
✔️ Interpret the Computation Graph abstraction for task-parallel programs <br/>
✔️ Evaluate the Multiprocessor Scheduling problem using Computation Graphs <br/>
✔️ Assess sequential bottlenecks using Amdahl's Law <br/>

✅ <i>Mini project 1 : Reciprocal-Array-Sum using the Java Fork/Join Framework</i>
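
As a taste of what this mini project involves, here is a minimal, self-contained Fork/Join sketch of a reciprocal array sum (the class name and threshold are illustrative, not the assignment's skeleton):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Illustrative sketch: sum of reciprocals via divide-and-conquer Fork/Join.
public class ReciprocalSum extends RecursiveTask<Double> {
    private static final int THRESHOLD = 10_000; // sequential cutoff; tune per machine
    private final double[] array;
    private final int lo, hi;

    ReciprocalSum(double[] array, int lo, int hi) {
        this.array = array; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Double compute() {
        if (hi - lo <= THRESHOLD) { // small enough: compute sequentially
            double sum = 0;
            for (int i = lo; i < hi; i++) sum += 1 / array[i];
            return sum;
        }
        int mid = (lo + hi) / 2;
        ReciprocalSum left = new ReciprocalSum(array, lo, mid);
        ReciprocalSum right = new ReciprocalSum(array, mid, hi);
        left.fork();                       // "async": left half runs in the pool
        double rightSum = right.compute(); // this thread handles the right half
        return left.join() + rightSum;     // "finish": wait for the forked task
    }

    public static void main(String[] args) {
        double[] a = new double[1_000_000];
        java.util.Arrays.fill(a, 2.0);
        double sum = ForkJoinPool.commonPool().invoke(new ReciprocalSum(a, 0, a.length));
        System.out.println(sum); // 500000.0
    }
}
```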

<b><u>Week 2 : Functional Parallelism</u></b>

✔️ Demonstrate functional parallelism using the Future construct <br/>
✔️ Create functional-parallel programs using Java's Fork/Join Framework <br/>
✔️ Apply the principle of memoization to optimize functional parallelism <br/>
✔️ Create functional-parallel programs using Java Streams <br/>
✔️ Explain the concepts of data races and functional/structural determinism <br/>

✅ <i>Mini project 2 : Analysing Student Statistics Using Java Parallel Streams</i>
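
A minimal sketch of the parallel-stream style this mini project uses (the `Student` record and its fields are invented for the example; records need Java 16+):

```java
import java.util.List;

// Hypothetical record standing in for the course's student data.
record Student(String name, double grade, boolean isCurrent) {}

public class StudentStats {
    public static void main(String[] args) {
        List<Student> students = List.of(
                new Student("Ada", 91.0, true),
                new Student("Alan", 78.5, true),
                new Student("Grace", 88.0, false));

        // Average grade of currently enrolled students, computed in parallel.
        double avg = students.parallelStream()
                             .filter(Student::isCurrent)
                             .mapToDouble(Student::grade)
                             .average()
                             .orElse(0.0);
        System.out.println(avg); // 84.75
    }
}
```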

<b><u>Week 3 : Loop Parallelism</u></b>

✔️ Create programs with loop-level parallelism using the Forall and Java Stream constructs <br/>
✔️ Evaluate loop-level parallelism in a matrix-multiplication example <br/>
✔️ Examine the barrier construct for parallel loops <br/>
✔️ Evaluate parallel loops with barriers in an iterative-averaging example <br/>
✔️ Apply the concept of iteration grouping/chunking to improve the performance of parallel loops <br/>

✅ <i>Mini project 3 : Parallelizing Matrix-Matrix Multiply Using Loop Parallelism</i>
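
A minimal sketch of forall-style loop parallelism applied to matrix multiply: the rows of the output are independent, so the outer loop becomes a parallel stream:

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class MatMul {
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        IntStream.range(0, n).parallel().forEach(i -> { // parallel outer loop over rows
            for (int j = 0; j < m; j++) {
                double sum = 0;
                for (int p = 0; p < k; p++) sum += a[i][p] * b[p][j];
                c[i][j] = sum;
            }
        });
        return c;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        System.out.println(Arrays.deepToString(multiply(a, b))); // [[19.0, 22.0], [43.0, 50.0]]
    }
}
```

Iteration grouping/chunking corresponds to handing each stream task a contiguous block of rows rather than a single row.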

<b><u>Week 4 : Data Flow Synchronization and Pipelining</u></b>

✔️ Create split-phase barriers using Java's Phaser construct <br/>
✔️ Create point-to-point synchronization patterns using Java's Phaser construct <br/>
✔️ Evaluate parallel loops with point-to-point synchronization in an iterative-averaging example <br/>
✔️ Analyze pipeline parallelism using the principles of point-to-point synchronization <br/>
✔️ Interpret data flow parallelism using the data-driven-task construct <br/>

☑ <i>Mini project 4 : Using Phasers to Optimize Data-Parallel Applications</i>
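
A minimal sketch of barrier synchronization with `java.util.concurrent.Phaser` (worker and iteration counts are arbitrary):

```java
import java.util.concurrent.Phaser;

// Each worker updates its own chunk, then waits at the barrier before the
// next iteration. Splitting arriveAndAwaitAdvance() into arrive() now and
// awaitAdvance(phase) later yields a split-phase (fuzzy) barrier.
public class PhaserDemo {
    public static void main(String[] args) {
        int workers = 4, iterations = 3;
        Phaser phaser = new Phaser(workers); // all registered parties must arrive per phase

        for (int w = 0; w < workers; w++) {
            final int id = w;
            new Thread(() -> {
                for (int it = 0; it < iterations; it++) {
                    System.out.printf("worker %d finished iteration %d%n", id, it);
                    phaser.arriveAndAwaitAdvance(); // barrier between iterations
                }
            }).start();
        }
    }
}
```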

---

## Concurrent Programming in Java

<b><u>Week 1 : Threads and Locks</u></b>

✔️ Understand the role of Java threads in building concurrent programs <br/>
✔️ Create concurrent programs using Java threads and the synchronized statement (structured locks) <br/>
✔️ Create concurrent programs using Java threads and lock primitives in the java.util.concurrent library (unstructured locks) <br/>
✔️ Analyze programs with threads and locks to identify liveness and related concurrency bugs <br/>
✔️ Evaluate different approaches to solving the classical Dining Philosophers Problem <br/>

✅ <i>Mini project 1 : Locking and Synchronization</i>
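
A minimal sketch contrasting a structured lock (`synchronized`) with an unstructured one (`ReentrantLock`); a real class should guard a field with a single locking discipline, and the two methods appear together here only for comparison:

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private int value = 0;
    private final ReentrantLock lock = new ReentrantLock();

    public synchronized void incrementStructured() { // structured: lock scope = method body
        value++;
    }

    public void incrementUnstructured() { // unstructured: explicit lock()/unlock()
        lock.lock();
        try {
            value++;
        } finally {
            lock.unlock(); // always released, even if the critical section throws
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable work = () -> { for (int i = 0; i < 100_000; i++) c.incrementUnstructured(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.value); // 200000: no lost updates
    }
}
```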

<b><u>Week 2 : Critical Sections and Isolation</u></b>

✔️ Create concurrent programs with critical sections to coordinate accesses to shared resources <br/>
✔️ Create concurrent programs with object-based isolation to coordinate accesses to shared resources with more overlap than critical sections <br/>
✔️ Evaluate different approaches to implementing the Concurrent Spanning Tree algorithm <br/>
✔️ Create concurrent programs using Java's atomic variables <br/>
✔️ Evaluate the impact of read vs. write operations on concurrent accesses to shared resources <br/>

✅ <i>Mini project 2 : Global and Object-Based Isolation</i>
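
A minimal sketch of Java's atomic variables replacing an explicit critical section for a simple read-modify-write:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

public class AtomicDemo {
    public static void main(String[] args) {
        AtomicInteger hits = new AtomicInteger(0);

        // One million concurrent increments, no explicit lock:
        // incrementAndGet() retries a hardware compare-and-swap until it succeeds.
        IntStream.range(0, 1_000_000).parallel().forEach(i -> hits.incrementAndGet());

        System.out.println(hits.get()); // always 1000000
    }
}
```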

<b><u>Week 3 : Actors</u></b>

✔️ Understand the Actor model for building concurrent programs <br/>
✔️ Create simple concurrent programs using the Actor model <br/>
✔️ Analyze an Actor-based implementation of the Sieve of Eratosthenes program <br/>
✔️ Create Actor-based implementations of the Producer-Consumer pattern <br/>
✔️ Create Actor-based implementations of concurrent accesses on a bounded resource <br/>

✅ <i>Mini project 3 : Sieve of Eratosthenes Using Actor Parallelism</i>
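
Standard Java has no actor construct (the course uses its own PCDP library), so the sketch below only illustrates the idea: one mailbox thread owns the state and processes messages sequentially. The message strings are invented for the example:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CounterActor {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private int count = 0; // actor-private state: only the mailbox thread touches it

    public CounterActor() {
        new Thread(() -> {
            try {
                String msg;
                while (!(msg = mailbox.take()).equals("STOP")) { // "STOP" = poison pill
                    if (msg.equals("INC")) count++; // one message at a time, no locks
                }
                System.out.println("final count = " + count);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
    }

    public void send(String msg) { mailbox.add(msg); } // asynchronous message send

    public static void main(String[] args) {
        CounterActor actor = new CounterActor();
        for (int i = 0; i < 5; i++) actor.send("INC");
        actor.send("STOP"); // prints: final count = 5
    }
}
```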

<b><u>Week 4 : Concurrent Data Structures</u></b>

✔️ Understand the principle of optimistic concurrency in concurrent algorithms <br/>
✔️ Understand the implementation of concurrent queues based on optimistic concurrency <br/>
✔️ Understand linearizability as a correctness condition for concurrent data structures <br/>
✔️ Create concurrent Java programs that use the java.util.concurrent.ConcurrentHashMap library <br/>
✔️ Analyze a concurrent algorithm for computing a Minimum Spanning Tree of an undirected graph <br/>

☑ <i>Mini project 4 : Parallelization of Boruvka's Minimum Spanning Tree Algorithm</i>
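
A minimal sketch of thread-safe aggregation with `java.util.concurrent.ConcurrentHashMap`:

```java
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;

public class WordCount {
    public static void main(String[] args) {
        String[] words = {"tree", "edge", "tree", "graph", "edge", "tree"};
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

        // merge() performs the per-key read-modify-write atomically,
        // so parallel updates to the same key are never lost.
        Arrays.stream(words).parallel().forEach(w -> counts.merge(w, 1, Integer::sum));

        System.out.println(counts); // {tree=3, graph=1, edge=2} (iteration order may vary)
    }
}
```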

---

## Distributed Programming in Java

<b><u>Week 1 : Distributed Map Reduce</u></b>

✔️ Explain the MapReduce paradigm for analyzing data represented as key-value pairs <br/>
✔️ Apply the MapReduce paradigm to programs written using the Apache Hadoop framework <br/>
✔️ Create MapReduce programs using the Apache Spark framework <br/>
✔️ Recognize the TF-IDF statistic used in data mining, and how it can be computed using the MapReduce paradigm <br/>
✔️ Create an implementation of the PageRank algorithm using the Apache Spark framework <br/>

☑ <i>Mini project 1 : Page Rank with Spark</i>
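
A minimal MapReduce-style word count in the Spark Java API (the classic example rather than the PageRank assignment; the input path and app name are illustrative, and `spark-core` is assumed on the classpath):

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("wordcount").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaRDD<String> lines = sc.textFile("input.txt"); // illustrative path
            lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator()) // map phase
                 .mapToPair(word -> new Tuple2<>(word, 1))  // emit key-value pairs
                 .reduceByKey(Integer::sum)                 // reduce phase, per key
                 .collect()
                 .forEach(t -> System.out.println(t._1() + ": " + t._2()));
        }
    }
}
```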

<b><u>Week 2 : Client-Server Programming</u></b>

✔️ Generate distributed client-server applications using sockets <br/>
✔️ Demonstrate different approaches to serialization and deserialization of data structures for distributed programming <br/>
✔️ Recall the use of remote method invocations as a higher-level primitive for distributed programming (compared to sockets) <br/>
✔️ Evaluate the use of multicast sockets as a generalization of sockets <br/>
✔️ Employ distributed publish-subscribe applications using the Apache Kafka framework <br/>

✅ <i>Mini project 2 : File Server</i>
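
A minimal single-threaded echo server with `ServerSocket`; a file server follows the same accept/read/reply skeleton, answering with file contents instead of the echoed line:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) { // port is illustrative
            while (true) {
                // Handles one client at a time; see the multithreaded variant below.
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        out.println("echo: " + line); // reply to the client
                    }
                }
            }
        }
    }
}
```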

<b><u>Week 3 : Message Passing</u></b>

✔️ Create distributed applications using the Single Program Multiple Data (SPMD) model <br/>
✔️ Create message-passing programs using point-to-point communication primitives in MPI <br/>
✔️ Identify message ordering and deadlock properties of MPI programs <br/>
✔️ Evaluate the advantages of non-blocking communication relative to standard blocking communication primitives <br/>
✔️ Explain collective communication as a generalization of point-to-point communication <br/>

☑ <i>Mini project 3 : Matrix Multiply in MPI</i>
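
Java has no standard MPI binding; the SPMD sketch below assumes an MPJ Express-style API (`mpi.MPI`), so exact names may differ in other bindings:

```java
import mpi.MPI;

// SPMD: every rank runs this same program; behavior branches on the rank.
public class SpmdPingPong {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int size = MPI.COMM_WORLD.Size();
        System.out.println("rank " + rank + " of " + size);

        int[] buf = new int[1];
        if (rank == 0) {
            buf[0] = 42;
            MPI.COMM_WORLD.Send(buf, 0, 1, MPI.INT, 1, 0); // blocking send: dest=1, tag=0
        } else if (rank == 1) {
            MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.INT, 0, 0); // blocking recv: src=0, tag=0
            System.out.println("rank 1 received " + buf[0]);
        }
        MPI.Finalize();
    }
}
```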

<b><u>Week 4 : Combining Distribution and Multithreading</u></b>

✔️ Distinguish processes and threads as basic building blocks of parallel, concurrent, and distributed Java programs <br/>
✔️ Create multithreaded servers in Java using threads and processes <br/>
✔️ Demonstrate how multithreading can be combined with message-passing programming models like MPI <br/>
✔️ Analyze how the actor model can be used for distributed programming <br/>
✔️ Assess how the reactive programming model can be used for distributed programming <br/>

✅ <i>Mini project 4 : Multi-Threaded File Server</i>
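
A minimal multithreaded-server sketch combining the socket skeleton above with a thread pool, so one slow client cannot block the rest (Java 9+ for `try (client; ...)`):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MultiThreadedServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(16); // pool size is arbitrary
        try (ServerSocket server = new ServerSocket(9090)) {     // port is illustrative
            while (true) {
                Socket client = server.accept();   // accept stays on the main thread
                pool.submit(() -> handle(client)); // each connection runs as a pool task
            }
        }
    }

    static void handle(Socket client) {
        try (client; PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            out.println("hello from " + Thread.currentThread().getName());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```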
