Deadlock Detection
i. If deadlocks are neither prevented nor avoided, then another approach is to detect when they have occurred and recover somehow.
ii. In addition to the performance hit of frequently checking for deadlocks, a policy/algorithm must be in place for recovering from deadlocks, and there is potential for lost work when processes must be aborted or have their resources preempted.
Single Instance of Each Resource Type
i. If each resource type has only a single instance, then we can use a variation of the resource-allocation graph known as a wait-for graph.
ii. A wait-for graph can be built from a resource-allocation graph by eliminating the resource nodes and collapsing the associated edges.
iii. An edge from Pi to Pj in a wait-for graph indicates that process Pi is waiting for a resource that process Pj currently holds.
As before, cycles in the wait-for graph indicate deadlocks.
To use this scheme, the system must maintain the wait-for graph and periodically search it for cycles, as in the sketch below.
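The following is a minimal sketch of that cycle search, assuming the wait-for graph is represented as an adjacency list mapping each process to the processes it is waiting for. The function name and example graph are illustrative, not from the notes.

    # Cycle detection in a wait-for graph via depth-first search.
    # A back edge to a node on the current DFS path means a cycle, i.e., deadlock.
    def has_cycle(wait_for):
        WHITE, GRAY, BLACK = 0, 1, 2            # unvisited / on DFS path / finished
        color = {p: WHITE for p in wait_for}

        def dfs(p):
            color[p] = GRAY
            for q in wait_for.get(p, ()):
                if color.get(q, WHITE) == GRAY:     # back edge -> cycle found
                    return True
                if color.get(q, WHITE) == WHITE and dfs(q):
                    return True
            color[p] = BLACK
            return False

        return any(color[p] == WHITE and dfs(p) for p in wait_for)

    # P1 waits for P2, P2 for P3, P3 for P1: a cycle, hence a deadlock.
    print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
    print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False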
Several Instances of a Resource Type
Available: A vector of length m indicates the number of available resources of each type.
Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process.
Request: An n x m matrix indicates the current request of each process. If Request[i][j] = k, then process Pi is requesting k more instances of resource type Rj. (A small code illustration of these structures follows.)
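As an illustration, these structures can be represented with plain lists; the concrete values below are hypothetical, for n = 3 processes and m = 2 resource types.

    available  = [1, 0]                    # Available: vector of length m
    allocation = [[1, 0], [0, 1], [1, 1]]  # Allocation: n x m matrix
    request    = [[0, 1], [1, 0], [0, 0]]  # Request[i][j] = k: Pi wants k more of Rj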
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
   (a) Work = Available
   (b) For i = 1, 2, ..., n: if Allocation_i != 0, then Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both:
   (a) Finish[i] == false
   (b) Request_i <= Work
   If no such i exists, go to step 4.
3. Work = Work + Allocation_i
   Finish[i] = true
   Go to step 2.
4. If Finish[i] == false for some i, 1 <= i <= n, then the system is in a deadlocked state. Moreover, if Finish[i] == false, then process Pi is deadlocked.
This algorithm requires on the order of m x n^2 operations (O(m x n^2)) to detect whether the system is in a deadlocked state (see the sketch below).
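Here is a direct transcription of the four steps into Python. It is a sketch under the representation assumed above (a list for Available, nested lists for Allocation and Request); the function name detect_deadlock is illustrative.

    def detect_deadlock(available, allocation, request):
        """Return the indices of deadlocked processes (empty list if none)."""
        n, m = len(allocation), len(available)

        # Step 1: Work = Available; Finish[i] = true iff Allocation_i == 0.
        work = available[:]
        finish = [all(a == 0 for a in allocation[i]) for i in range(n)]

        # Steps 2-3: repeatedly find an unfinished process whose request can
        # be satisfied by Work, then reclaim its allocation into Work.
        progress = True
        while progress:
            progress = False
            for i in range(n):
                if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                    work = [work[j] + allocation[i][j] for j in range(m)]
                    finish[i] = True
                    progress = True

        # Step 4: any process still not finished is deadlocked.
        return [i for i in range(n) if not finish[i]]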
Example of Detection Algorithm
Five processes P0 through P4; three resource types: A (7 instances), B (2 instances), and C (6 instances). Snapshot at time T0:

             Allocation   Request   Available
             A  B  C      A  B  C    A  B  C
       P0    0  1  0      0  0  0    0  0  0
       P1    2  0  0      2  0  2
       P2    3  0  3      0  0  0
       P3    2  1  1      1  0  0
       P4    0  0  2      0  0  2

Running the detection algorithm on this snapshot, the sequence <P0, P2, P3, P1, P4> results in Finish[i] == true for all i, so the system is not in a deadlocked state at T0.
Now suppose that process P2 makes a request for an additional instance of type C, so that the Request matrix becomes:

             Request
             A  B  C
       P0    0  0  0
       P1    2  0  2
       P2    0  0  1
       P3    1  0  0
       P4    0  0  2

Is the system now deadlocked? Yes. Although we can reclaim the resources held by process P0, the available resources are then insufficient to fulfill the request of any other process, so a deadlock exists consisting of processes P1, P2, P3, and P4. The sketch below checks both snapshots.
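Using the detect_deadlock sketch from above (the data mirrors the tables in this example):

    allocation = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
    request    = [[0,0,0], [2,0,2], [0,0,0], [1,0,0], [0,0,2]]
    available  = [0, 0, 0]
    print(detect_deadlock(available, allocation, request))  # [] -> no deadlock

    request[2] = [0, 0, 1]   # P2 requests one more instance of C
    print(detect_deadlock(available, allocation, request))  # [1, 2, 3, 4] deadlocked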
Detection-Algorithm Usage
i. When should deadlock detection be done? Frequently, or infrequently?
The answer may depend on how frequently deadlocks are expected to occur, as well as the possible consequences of not catching them immediately. (If deadlocks are not removed immediately when they occur, then more and more processes can "back up" behind the deadlock, making the eventual task of unblocking the system more difficult and possibly damaging to more processes.)
ii. There are two obvious approaches, each with trade-offs:
1) Do deadlock detection after every resource request that cannot be immediately granted. This has the advantage of detecting the deadlock right away, while the minimum number of processes are involved in it. (One might consider the process whose request triggered the deadlock condition to be the "cause" of the deadlock, but realistically all of the processes in the cycle are equally responsible.) The downside of this approach is the extensive overhead and performance hit caused by checking for deadlocks so frequently.
2) Do deadlock detection only when there is some clue that a deadlock may have occurred, such as when CPU utilization drops to 40% or some other magic number. The advantage is that deadlock detection is done much less frequently, but the downside is that it becomes impossible to identify the processes involved in the original deadlock, and so deadlock recovery can be more complicated and damaging to more processes.
3) (As I write this, a third alternative comes to mind: keep a historical log of resource allocations since the last known time of no deadlocks. Do deadlock checks periodically (once an hour, or when CPU usage is low?), and then use the historical log to trace through and determine when the deadlock occurred and what processes caused the initial deadlock. Unfortunately I'm not certain that breaking the original deadlock would then free up the resulting logjam.)