Data Structures Using C

Selection algorithms These algorithms are used to find the minimum and maximum values in linear or sub-linear time.

Graph algorithms Heaps can be used as internal traversal data structures; this guarantees that the running time is reduced by a polynomial order. Heaps are therefore used for implementing Prim's minimal spanning tree algorithm and Dijkstra's shortest path algorithm.

Points to Remember
- A binary heap is defined as a complete binary tree in which every node satisfies the heap property. There are two types of binary heaps: max heap and min heap.
- In a min heap, the element at every node is less than or equal to the elements at its left and right children. Similarly, in a max heap, the element at every node is greater than or equal to the elements at its left and right children.
- A binomial tree of order k has a root node whose children are the root nodes of binomial trees of orders k-1, k-2, ..., 1, 0. A binomial tree B_k of height k has 2^k nodes.
- A binomial heap H is a collection of binomial trees that satisfy the following properties: (a) every binomial tree in H satisfies the minimum heap property, and (b) there can be one or zero binomial trees for each order, including zero order.
- A Fibonacci heap is a collection of trees. Fibonacci heaps differ from binomial heaps in that they have a more relaxed structure, allowing improved asymptotic time bounds.

Exercises

Review Questions
1. Define a binary heap.
2. Differentiate between a min-heap and a max-heap.
3. Compare binary trees with binary heaps.
4. Explain the steps involved in inserting a new value in a binary heap with the help of a suitable example.
5. Explain the steps involved in deleting a value from a binary heap with the help of a suitable example.
6. Discuss the applications of binary heaps.
7. Form a binary max-heap and a min-heap from a given sequence of data.
8. 'Heaps are excellent data structures to implement priority queues.' Justify this statement.
9. Define a binomial heap. Draw its structure.
10. Differentiate among binary, binomial, and Fibonacci heaps.
11. Explain the operations performed on a Fibonacci heap. Why are Fibonacci heaps preferred over binary and binomial heaps?
12. Analyse the complexity of the algorithm to unite two binomial heaps.
13. 'The running time of the algorithm to find the minimum key in a binomial heap is O(log n).' Comment.
14. Discuss the process of inserting a new node in a binomial heap. Explain with the help of an example.
15. 'The algorithm Min-Extract_Binomial-Heap() runs in O(log n) time, where n is the number of nodes in the heap.' Justify this statement.
16. Explain how an existing node is deleted from a binomial heap with the help of a relevant example.
17. Explain the process of inserting a new node in a Fibonacci heap.
18. Write down the algorithm to unite two Fibonacci heaps.
19. What is the procedure to extract the node with the minimum value from a Fibonacci heap? Give the algorithm and analyse its complexity.
20. Consider the figure given below and state whether it is a heap or not.
21. Draw a heap that is also a binary search tree.
22. Analyse the complexity of the heapify algorithm.
23. Consider the Fibonacci heap given below (with pointer min[H]); decrease the value of a node, insert a new node, and finally delete a node from it.
24. Reheap the following structure to make it a heap.

Multiple-choice Questions
1. Show the array implementation of the following heap.
2. Given the following array structure, draw the heap. Also find out (a) the parents of given nodes and (b) the indices of the left and right children of a given node.
3. Which of the following sequences represents a binary heap?
4. A heap sequence is given. Which element will be deleted when the deletion algorithm is called thrice?
5. Show the resulting heap when new values are added to the heap of the above question.
6. The height of a binary heap with n nodes is equal to:
   (a) O(n)  (b) O(log n)  (c) O(n log n)  (d) O(n^2)
7. An element at position i in an array has its left child stored at position:
   (a) 2i  (b) 2i + 1  (c) i/2  (d) i/2 + 1
8. In the worst case, how much time does it take to build a binary heap of n elements?
   (a) O(n)  (b) O(log n)  (c) O(n log n)  (d) O(n^2)
9. What is the height of a binomial tree B_i?
10. How many nodes does a binomial tree of order i have?
11. The running time of the Link_Binomial-Tree() procedure is:
   (a) O(n)  (b) O(log n)  (c) O(n log n)  (d) O(1)
12. In a Fibonacci heap, how much time does it take to find the minimum node?
   (a) O(n)  (b) O(log n)  (c) O(n log n)  (d) O(1)
True or False
1. A binary heap is a complete binary tree.
2. In a min heap, the root node has the highest key value in the heap.
3. An element at position i has its parent stored at position i/2.
4. All levels of a binary heap except the last level are completely filled.
5. In a min-heap, the element at every node is greater than those at its left and right children.
6. A binomial tree B_i has 2^i nodes.
7. Binomial heaps are ordered.
8. Fibonacci heaps are rooted and ordered.
9. The running time of the Min_Binomial-Heap() procedure is O(log n).
10. If there are m roots in the root lists of H1 and H2, then Merge_Binomial-Heap() runs in O(log m) time.
11. Fibonacci heaps are preferred over binomial heaps.

Fill in the Blanks
1. An element at position i in the array has its right child stored at position ___.
2. Heaps are used to implement ___.
3. Heaps are also known as ___.
4. In ___, the element at every node is either less than or equal to the elements at its left and right children.
5. An element is always deleted from the ___.
6. The height of a binomial tree B_i is ___.
7. A binomial heap is defined as ___.
8. A binomial tree B_i has ___ nodes.
9. A binomial heap is created in ___ time.
10. A Fibonacci heap is ___.
11. In a Fibonacci heap, mark[x] indicates ___.
Graphs

Learning Objective
In this chapter, we will discuss another non-linear data structure called graphs. We will discuss the representation of graphs in the memory as well as the different operations that can be performed on them. Last but not the least, we will discuss some of the real-world applications of graphs.

Introduction
A graph is an abstract data structure that is used to implement the mathematical concept of graphs. It is basically a collection of vertices (also called nodes) and edges that connect these vertices. A graph is often viewed as a generalization of the tree structure, where instead of having a purely parent-to-child relationship between tree nodes, any kind of complex relationship can exist.

Why are graphs useful? Graphs are widely used to model any situation where entities or things are related to each other in pairs. For example, the following information can be represented by graphs:
- Family trees, in which the member nodes have an edge from parent to each of their children.
- Transportation networks, in which nodes are airports, intersections, ports, etc. The edges can be airline flights, one-way roads, shipping routes, etc.

Definition
A graph G is defined as an ordered set (V, E), where V(G) represents the set of vertices and E(G) represents the edges that connect these vertices. The figure shows a graph with V(G) = {A, B, C, D, E} and E(G) = {(A, B), (B, C), (A, D), (B, D), (D, E), (C, E)}. Note that there are five vertices or nodes and six edges in the graph.

(Figure: Undirected graph)
A graph can be directed or undirected. In an undirected graph, edges do not have any direction associated with them. That is, if an edge is drawn between nodes A and B, then the nodes can be traversed from A to B as well as from B to A. The figure above shows an undirected graph because it does not give any information about the direction of the edges. Now look at the directed graph in the next figure. In a directed graph, edges form an ordered pair: if there is an edge from A to B, then there is a path from A to B but not from B to A. The edge (A, B) is said to initiate from node A (also known as the initial node) and terminate at node B (the terminal node).

(Figure: Directed graph)

Graph Terminology
Adjacent nodes or neighbours For every edge e = (u, v) that connects nodes u and v, the nodes u and v are the end-points and are said to be adjacent nodes or neighbours.
Degree of a node The degree of a node u, deg(u), is the total number of edges containing the node u. If deg(u) = 0, it means that u does not belong to any edge, and such a node is known as an isolated node.
Regular graph A graph where each vertex has the same number of neighbours, that is, every node has the same degree. A regular graph with vertices of degree k is called a k-regular graph or a regular graph of degree k.

(Figure: Regular graphs)

Path A path P, written as P = {v0, v1, v2, ..., vn}, of length n from node u to v is defined as a sequence of (n + 1) nodes. Here, u = v0, v = vn, and v(i-1) is adjacent to vi for i = 1, 2, 3, ..., n.
Closed path A path P is known as a closed path if its edge sequence returns to the starting point, that is, if v0 = vn.
Simple path A path P is known as a simple path if all the nodes in the path are distinct, with the exception that v0 may be equal to vn. If v0 = vn, then the path is called a closed simple path.
Cycle A path in which the first and the last vertices are the same. A simple cycle has no repeated edges or vertices (except the first and last vertices).
Connected graph A graph is said to be connected if for any two vertices (u, v) in V there is a path from u to v. That is to say, there are no isolated nodes in a connected graph.
A connected graph that does not have any cycle is called a tree. Therefore, a tree is treated as a special graph (refer to the figure below).
Complete graph A graph G is said to be complete if all its nodes are fully connected, that is, there is a path from one node to every other node in the graph. A complete graph has n(n - 1)/2 edges, where n is the number of nodes in G.

(Figure: (a) Multi-graph, (b) tree, and (c) weighted graph)

Clique In an undirected graph G = (V, E), a clique is a subset C of the vertex set V such that for every two vertices in C, there is an edge that connects the two vertices.
Labelled graph or weighted graph A graph is said to be labelled if every edge in the graph is assigned some data. In a weighted graph, the edges of the graph are assigned some weight or length. The weight of an edge, denoted by w(e), is a positive value which indicates the cost of traversing the edge. Figure (c) above shows a weighted graph.
Multiple edges Distinct edges which connect the same end-points are called multiple edges. That is, e = (u, v) and e' = (u, v) are known as multiple edges of G.
Loop An edge that has identical end-points is called a loop. That is, e = (u, u).
Multi-graph A graph with multiple edges and/or loops is called a multi-graph. Figure (a) above shows a multi-graph.
Size of a graph The size of a graph is the total number of edges in it.

Directed Graphs
A directed graph G, also known as a digraph, is a graph in which every edge has a direction assigned to it. An edge of a directed graph is given as an ordered pair (u, v) of nodes in G. For an edge e = (u, v):
- The edge begins at u and terminates at v.
- u is known as the origin or initial point of e. Correspondingly, v is known as the destination or terminal point of e.
- u is the predecessor of v. Correspondingly, v is the successor of u.
- Nodes u and v are adjacent to each other.

Terminology of a Directed Graph
Out-degree of a node The out-degree of a node u, written as outdeg(u), is the number of edges that originate at u.
In-degree of a node The in-degree of a node u, written as indeg(u), is the number of edges that terminate at u.
Degree of a node The degree of a node, written as deg(u), is equal to the sum of the in-degree and the out-degree of that node. Therefore, deg(u) = indeg(u) + outdeg(u).
Isolated vertex A vertex with degree zero. Such a vertex is not an end-point of any edge.
Pendant vertex (also known as leaf vertex) A vertex with degree one.
Cut vertex A vertex which, when deleted, would disconnect the remaining graph.
Source A node u is known as a source if it has a positive out-degree but a zero in-degree.
Sink A node u is known as a sink if it has a positive in-degree but a zero out-degree.
Reachability A node v is said to be reachable from node u if and only if there exists a (directed) path from node u to node v. For example, in the directed graph of the figure, a node is reachable from another whenever a chain of directed edges leads from the second to the first.
Strongly connected directed graph A digraph is said to be strongly connected if and only if there exists a path between every pair of nodes in G. That is, if there is a path from node u to v, then there must also be a path from node v to u.
Unilaterally connected graph A digraph is said to be unilaterally connected if there exists a path between any pair of nodes u, v in G such that there is a path from u to v or a path from v to u, but not both.
Weakly connected digraph A directed graph is said to be weakly connected if it is connected when the direction of its edges is ignored. That is, in such a graph, it is possible to reach any node from any other node by traversing edges in any direction (not necessarily in the direction they point). The nodes in a weakly connected directed graph must have either an out-degree or an in-degree of at least 1.
Parallel/Multiple edges Distinct edges which connect the same end-points are called multiple edges. That is, e = (u, v) and e' = (u, v) are known as multiple edges of G. In the figure, two edges connecting the same pair of nodes are multiple edges.
Simple directed graph A directed graph G is said to be a simple directed graph if and only if it has no parallel edges. However, a simple directed graph may contain cycles, with the exception that it cannot have more than one loop at a given node.

The graph given in Figure (a) is a directed graph in which there are four nodes and eight edges. Note that two of its edges are parallel, since they begin and end at the same pair of nodes, and one edge is a loop, since it originates and terminates at the same node. A sequence of nodes does not form a path if some consecutive pair in the sequence is not an edge. Although there may be a path from one node to another, the reverse path need not exist; in this graph there is some node from which no other node can be reached, so the graph is not strongly connected. However, G is said to be unilaterally connected. We also observe that one node is a sink, since it has a positive in-degree but a zero out-degree.

(Figure: (a) Directed acyclic graph and (b) strongly connected directed graph)

Transitive Closure of a Directed Graph
The transitive closure of a graph is constructed to answer reachability questions, that is: is there a path from a node a to a node e in one or more hops? A binary relation indicates only whether node a is connected to node b, whether node b is connected to node c, etc. But once the transitive closure is constructed, as shown in the figure, we can easily determine in O(1) time whether node e is reachable from node a or not.

(Figure: (a) A graph G and (b) its transitive closure G*)
Like the adjacency matrix, discussed later in this chapter, the transitive closure is also stored as a matrix T, so if T[i][j] = 1, then node j can be reached from node i in one or more hops.

Definition For a directed graph G = (V, E), where V is the set of vertices and E is the set of edges, the transitive closure of G is a graph G* = (V, E*). In G*, for every vertex pair v, w in V there is an edge (v, w) in E* if and only if there is a valid path from v to w in G.

Where and why is it needed? Finding the transitive closure of a directed graph is an important problem in the following computational tasks:
- Transitive closure is used in the reachability analysis of transition networks representing distributed and parallel systems.
- It is used in the construction of parsing automata in compiler construction.
- Recently, transitive closure computation has been used to evaluate recursive database queries (because almost all practical recursive queries are transitive in nature).
Algorithm
The algorithm to find the transitive closure of a graph G is given below. In order to determine the transitive closure of G, we define a sequence of matrices T^0, T^1, ..., T^n, where

T^0[i][j] = 1 if edge (i, j) is in E, and 0 otherwise
T^k[i][j] = T^(k-1)[i][j] OR (T^(k-1)[i][k] AND T^(k-1)[k][j]) for k >= 1

That is, T^k[i][j] = 1 when there exists a path in G from vertex i to vertex j using only intermediate vertices from the set {1, 2, ..., k}. G* is then constructed by adding an edge (i, j) into E* if and only if T^n[i][j] = 1.

Transitive_Closure(A, T, n)
Step 1:  SET i = 1, j = 1, k = 1
Step 2:  Repeat Steps 3 and 4 while i <= n
Step 3:    SET j = 1. Repeat Step 4 while j <= n
Step 4:      IF A[i][j] = 1
               SET T[i][j] = 1
             ELSE
               SET T[i][j] = 0
             [END OF IF]
             Increment j
           [END OF INNER LOOP]
           Increment i
         [END OF LOOP]
Step 5:  Repeat Steps 6 to 11 while k <= n
Step 6:    SET i = 1. Repeat Steps 7 to 10 while i <= n
Step 7:      SET j = 1. Repeat Steps 8 and 9 while j <= n
Step 8:        SET T[i][j] = T[i][j] OR (T[i][k] AND T[k][j])
Step 9:        Increment j
             [END OF LOOP]
Step 10:     Increment i
           [END OF LOOP]
Step 11:   Increment k
         [END OF LOOP]
Step 12: END

(Figure: Algorithm to find the transitive closure of a graph G)

Bi-connected Components
A vertex v of G is called an articulation point if removing v along with the edges incident on v results in a graph that has at least two connected components. A bi-connected graph (shown in the figure) is defined as a connected graph that has no articulation vertices. That is, a bi-connected graph is connected and non-separable in the sense that even if we remove any single vertex from the graph, the resultant graph is still connected. By definition, a bi-connected undirected graph is a connected graph that cannot be broken into disconnected pieces by deleting any single vertex. In a bi-connected directed graph, for any two vertices v and w, there are two directed paths from v to w which have no vertices in common other than v and w. Note that the graph shown in Figure (a) is not a bi-connected graph, as deleting a vertex from it results in two disconnected components of the original graph (Figure (b)).
As for vertices, there is a related concept for edges. An edge in a graph is called a bridge if removing that edge results in a disconnected graph. Equivalently, an edge in a graph that does not lie on a cycle is a bridge. This means that a bridge has at least one articulation point at its end, although it is not necessary that the articulation point is itself linked to a bridge. Look at the graph shown in the figure: in it, the edges CD and DE are bridges. The other examples in the figure show graphs in which some edge is a bridge, graphs in which every edge is a bridge, and graphs with no bridges at all.

(Figure: Graphs with bridges)

Representation of Graphs
There are three common ways of storing graphs in the computer's memory:
- Sequential representation, by using an adjacency matrix.
- Linked representation, by using an adjacency list that stores the neighbours of a node using a linked list.
- Adjacency multi-list, which is an extension of the linked representation.
In this section, we will discuss these schemes in detail.

Adjacency Matrix Representation
An adjacency matrix is used to represent which nodes are adjacent to one another. By definition, two nodes are said to be adjacent if there is an edge connecting them. In a directed graph G, if node v is adjacent to node u, then there is definitely an edge from u to v. That is, if v is adjacent to u, we can get from u to v by traversing one edge. For any graph G having n nodes, the adjacency matrix will have the dimension n x n.

In an adjacency matrix, the rows and columns are labelled by the graph vertices. An entry a[i][j] in the adjacency matrix will contain 1 if vertices v_i and v_j are adjacent to each other; if the nodes are not adjacent, a[i][j] will be set to zero. This is summarized as:

a[i][j] = 1 if v_i is adjacent to v_j, that is, there is an edge (v_i, v_j)
a[i][j] = 0 otherwise

Since an adjacency matrix contains only 0s and 1s, it is called a bit matrix or a Boolean matrix. The entries in the matrix depend on the ordering of the nodes in G; therefore, a change in the order of nodes will result in a different adjacency matrix. The figure shows some graphs and their corresponding adjacency matrices.

(Figure: (a) Directed graph, (b) directed graph with a loop, (c) undirected graph, and (d) weighted graph, each with its corresponding adjacency matrix)
From the above examples, we can draw the following conclusions:
- For a simple graph (that has no loops), the adjacency matrix has 0s on the diagonal.
- The adjacency matrix of an undirected graph is symmetric.
- The memory use of an adjacency matrix is O(n^2), where n is the number of nodes in the graph.
- The number of 1s (or non-zero entries) in an adjacency matrix is equal to the number of edges in the graph.
- The adjacency matrix for a weighted graph contains the weights of the edges connecting the nodes.

Now let us discuss the powers of an adjacency matrix. From the adjacency matrix A^1 = A, we can conclude that an entry 1 in the ith row and jth column means that there exists a path of length 1 from v_i to v_j. Now consider A^2 = A x A. An entry a2[i][j] = 1 if a[i][k] = a[k][j] = 1 for some k, that is, if there is an edge (v_i, v_k) and an edge (v_k, v_j); then there is a path from v_i to v_j of length 2. Similarly, every entry in the ith row and jth column of A^3 gives the number of paths of length 3 from node v_i to v_j.

In general terms, we can conclude that every entry in the ith row and jth column of A^n (where n is the number of nodes in the graph) gives the number of paths of length n from node v_i to v_j. Consider the directed graph given in the figure: given its adjacency matrix A, the powers A^2, A^3, and A^4 are obtained by repeated matrix multiplication.

(Figure: A directed graph with its adjacency matrix A and the computed powers A^2, A^3, and A^4)

Now, based on the above calculations, we define a matrix B_r as

B_r = A + A^2 + A^3 + ... + A^r

An entry in the ith row and jth column of matrix B_r gives the number of paths of length r or less from vertex v_i to v_j. The main goal of defining matrix B is to obtain the path matrix P. The path matrix P can be calculated from B_n by setting an entry p[i][j] = 1 if b[i][j] is non-zero and p[i][j] = 0 otherwise. The path matrix is used to show whether there exists a simple path from node v_i to v_j or not:

p[i][j] = 1 if there is a path from v_i to v_j
p[i][j] = 0 otherwise

(Figure: The matrix B_4 and the resulting path matrix P for the example graph)

Adjacency List Representation
An adjacency list is another way in which graphs can be represented in the computer's memory. This structure consists of a list of all nodes in G. Furthermore, every node is in turn linked to its own list that contains the names of all the other nodes that are adjacent to it.

The key advantages of using an adjacency list are:
- It is easy to follow and clearly shows the adjacent nodes of a particular node.
- It is often used for storing graphs that have a small-to-moderate number of edges. That is, an adjacency list is preferred for representing sparse graphs in the computer's memory; otherwise, an adjacency matrix is a good choice.
- Adding new nodes in G is easy and straightforward when G is represented using an adjacency list. Adding new nodes in an adjacency matrix is a difficult task, as the size of the matrix needs to be changed and existing nodes may have to be reordered.

Consider the graph given in the figure and see how its adjacency list is stored in the memory. For a directed graph, the sum of the lengths of all the adjacency lists is equal to the number of edges in G. However, for an undirected graph, the sum of the lengths of all the adjacency lists is equal to twice the number of edges in G, because an edge (u, v) means an edge from node u to v as well as an edge from v to u. Adjacency lists can also be modified to store weighted graphs. Let us now see an adjacency list for an undirected graph as well as for a weighted graph. This is shown in the figure.

(Figure: A graph G and its adjacency lists for the directed, undirected, and weighted cases)
Adjacency Multi-list Representation
Graphs can also be represented using multi-lists, which can be said to be a modified version of adjacency lists. An adjacency multi-list is an edge-based rather than a vertex-based representation of graphs. A multi-list representation basically consists of two parts: a directory of nodes' information and a set of linked lists storing information about the edges. While there is a single entry for each node in the node directory, every edge node, on the other hand, appears in two adjacency lists (one for the node at each end of the edge). For example, the directory entry for node i points to the adjacency list for node i. This means that the edge nodes are shared among several lists.

In a multi-list representation, the information about an edge (v_i, v_j) of an undirected graph can be stored using the following attributes:
- M: a single bit field to indicate whether the edge has been examined or not.
- v_i: a vertex in the graph that is connected to vertex v_j by the edge.
- v_j: a vertex in the graph that is connected to vertex v_i by the edge.
- Link i for v_i: a link that points to another edge node that has an edge incident on v_i.
- Link j for v_j: a link that points to another edge node that has an edge incident on v_j.

(Figure: Undirected graph)

Consider the undirected graph given in the figure. Its adjacency multi-list consists of one record per edge (Edge 1, Edge 2, Edge 3, ...); in each record, the two link fields chain the record onto the edge lists of its two end-points, with NULL marking the end of a list. Using this adjacency multi-list, the adjacency list for each vertex can be constructed: starting from the directory entry of a vertex and following the appropriate link field in each edge record yields, for that vertex, the list of all the edges incident on it.

(Table: The adjacency multi-list and the per-vertex edge lists for the example graph)
Programming Example
Write a program to create a graph of n vertices using an adjacency list. Also write the code to read and print its information and finally to delete the graph.

#include <stdio.h>
#include <stdlib.h>

struct node
{
    int vertex;
    struct node *next;
};

void createGraph(struct node *adj[], int no_of_nodes);
void displayGraph(struct node *adj[], int no_of_nodes);
void deleteGraph(struct node *adj[], int no_of_nodes);

int main()
{
    struct node *adj[10];
    int i, no_of_nodes;
    printf("\n Enter the number of nodes in G: ");
    scanf("%d", &no_of_nodes);
    for (i = 0; i < no_of_nodes; i++)
        adj[i] = NULL;
    createGraph(adj, no_of_nodes);
    printf("\n The graph is: ");
    displayGraph(adj, no_of_nodes);
    deleteGraph(adj, no_of_nodes);
    return 0;
}

void createGraph(struct node *adj[], int no_of_nodes)
{
    struct node *new_node, *last;
    int i, j, n, val;
    for (i = 0; i < no_of_nodes; i++)
    {
        last = NULL;
        printf("\n Enter the number of neighbours of %d: ", i);
        scanf("%d", &n);
        for (j = 1; j <= n; j++)
        {
            printf("\n Enter the neighbour %d of %d: ", j, i);
            scanf("%d", &val);
            new_node = (struct node *)malloc(sizeof(struct node));
            new_node->vertex = val;
            new_node->next = NULL;
            if (adj[i] == NULL)
                adj[i] = new_node;
            else
                last->next = new_node;
            last = new_node;
        }
    }
}

void displayGraph(struct node *adj[], int no_of_nodes)
{
    struct node *ptr;
    int i;
    for (i = 0; i < no_of_nodes; i++)
    {
        ptr = adj[i];
        printf("\n The neighbours of node %d are:", i);
        while (ptr != NULL)
        {
            printf("\t%d", ptr->vertex);
            ptr = ptr->next;
        }
    }
}

void deleteGraph(struct node *adj[], int no_of_nodes)
{
    int i;
    struct node *temp, *ptr;
    for (i = 0; i < no_of_nodes; i++)
    {
        ptr = adj[i];
        while (ptr != NULL)
        {
            temp = ptr;
            ptr = ptr->next;
            free(temp);
        }
        adj[i] = NULL;
    }
}

Output
A sample run prompts for the number of nodes in G and for the neighbours of each node, and then prints the list of neighbours of every node.

Note If the graph in the above program had been a weighted graph, then the structure of the node would have been:

typedef struct node
{
    int vertex;
    int weight;
    struct node *next;
} node;

Graph Traversal Algorithms
In this section, we will discuss how to traverse graphs. By traversing a graph, we mean the method of examining the nodes and edges of the graph. There are two standard methods of graph traversal, which we will discuss in this section. These two methods are:
1. Breadth-first search
2. Depth-first search

While the breadth-first search uses a queue as an auxiliary data structure to store nodes for further processing, the depth-first search scheme uses a stack. But both of these algorithms make use of a variable STATUS. During the execution of the algorithm, every node in the graph will have the variable STATUS set to 1, 2, or 3, depending on its current state.

Table: Value of STATUS and its significance
STATUS | State of the node | Description
1      | Ready             | The initial state of the node N
2      | Waiting           | Node N is placed on the queue or stack and is waiting to be processed
3      | Processed         | Node N has been completely processed

Breadth-First Search Algorithm
Breadth-first search (BFS) is a graph search algorithm that begins at the root node and explores all the neighbouring nodes. Then for each of those nearest nodes, the algorithm explores their unexplored neighbour nodes, and so on, until it finds the goal. That is, we start examining the node A, and then all the neighbours of A are examined. In the next step, we examine the neighbours of the neighbours of A, and so on and so forth. This means that we need to track the neighbours of each node and guarantee that every node in the graph is processed and that no node is processed more than once. This is accomplished by using a queue that holds the nodes that are waiting for further processing, and a variable STATUS to represent the current state of the node.

Algorithm for breadth-first search:
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until QUEUE is empty
Step 4:   Dequeue a node N. Process it and set its STATUS = 3 (processed state)
Step 5:   Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state)
        [END OF LOOP]
Step 6: EXIT

Example Consider the graph G given in the figure. Its adjacency list is also given: A: B, C, D; B: E; C: B, G; D: C, G; E: C, F; F: C, H; G: F, H, I; H: E, I; I: F. Assume that G represents the daily flights between different cities and that we want to fly from city A to city I with the minimum number of stops. That is, find the minimum path P from A to I, given that every edge has a length of 1.

Solution The minimum path P can be found by applying the breadth-first search algorithm that begins at city A and ends when I is encountered. During the execution of the algorithm, we use two arrays: QUEUE and ORIG. While QUEUE is used to hold the nodes that have to be processed, ORIG is used to keep track of the origin of each edge. Initially, FRONT = REAR = -1. The algorithm proceeds as follows:

(a) Add A to QUEUE and add NULL to ORIG.
    FRONT = 0, REAR = 0; QUEUE = A; ORIG = \0

(b) Dequeue a node by setting FRONT = FRONT + 1 (remove the FRONT element of QUEUE) and enqueue the neighbours of A. Also, add A as the ORIG of its neighbours.
    FRONT = 1, REAR = 3; QUEUE = A B C D; ORIG = \0 A A A

(c) Dequeue node B by setting FRONT = FRONT + 1 and enqueue the neighbours of B. Also, add B as the ORIG of its neighbours.
    FRONT = 2, REAR = 4; QUEUE = A B C D E; ORIG = \0 A A A B

(d) Dequeue node C by setting FRONT = FRONT + 1 and enqueue the neighbours of C. Also, add C as the ORIG of its neighbours. Note that C has two neighbours, B and G. Since B has already been added to the queue and it is not in the ready state, we will not add B and only add G.
    FRONT = 3, REAR = 5; QUEUE = A B C D E G; ORIG = \0 A A A B C

(e) Dequeue node D by setting FRONT = FRONT + 1 and enqueue the neighbours of D. Note that D has two neighbours, C and G. Since both of them have already been added to the queue and they are not in the ready state, we will not add them again.
    FRONT = 4, REAR = 5; QUEUE = A B C D E G; ORIG = \0 A A A B C

(f) Dequeue node E by setting FRONT = FRONT + 1 and enqueue the neighbours of E. Also, add E as the ORIG of its neighbours. Note that E has two neighbours, C and F. Since C has already been added to the queue and it is not in the ready state, we will not add C and only add F.
    FRONT = 5, REAR = 6; QUEUE = A B C D E G F; ORIG = \0 A A A B C E

(g) Dequeue node G by setting FRONT = FRONT + 1 and enqueue the neighbours of G. Also, add G as the ORIG of its neighbours. Note that G has three neighbours: F, H, and I.
    FRONT = 6, REAR = 8; QUEUE = A B C D E G F H I; ORIG = \0 A A A B C E G G

Since F has already been added to the queue, we will only add H and I. As I is our final destination, we stop the execution of this algorithm as soon as it is encountered and added to the QUEUE. Now, backtrack from I using ORIG to find the minimum path P. Thus, we have the path P as A -> C -> G -> I.

Features of the Breadth-First Search Algorithm
Space complexity In the breadth-first search algorithm, all the nodes at a particular level must be saved until their child nodes in the next level have been generated. The space complexity is therefore proportional to the number of nodes at the deepest level of the graph. Given a graph with branching factor b (number of children at each node) and depth d, the asymptotic space complexity is the number of nodes at the deepest level, O(b^d).
If the number of vertices and edges in the graph are known ahead of time, the space complexity can also be expressed as O(|E| + |V|), where |E| is the total number of edges in G and |V| is the number of nodes or vertices.
Time complexity In the worst case, breadth-first search has to traverse through all paths to all possible nodes; thus, the time complexity of this algorithm asymptotically approaches O(b^d). However, the time complexity can also be expressed as O(|E| + |V|), since every vertex and every edge will be explored in the worst case.
Completeness Breadth-first search is said to be a complete algorithm because, if there is a solution, breadth-first search will find it regardless of the kind of graph. But in the case of an infinite graph where there is no possible solution, it will diverge.
Optimality Breadth-first search is optimal for a graph that has edges of equal length, since it always returns the result with the fewest edges between the start node and the goal node. But generally, in real-world applications, we have weighted graphs that have costs associated with each edge, so the goal next to the start does not have to be the cheapest goal available.

Applications of the Breadth-First Search Algorithm
Breadth-first search can be used to solve many problems, such as:
- Finding all connected components in a graph.
- Finding all the nodes within an individual connected component.
- Finding the shortest path between two nodes, u and v, of an unweighted graph.
- Finding the shortest path between two nodes, u and v, of a weighted graph.

Programming Example
Write a program to implement the breadth-first search algorithm.

#include <stdio.h>
#define MAX 10

void breadth_first_search(int adj[][MAX], int visited[], int start)
{
    int queue[MAX], rear = -1, front = -1;
    int i;
    queue[++rear] = start;
    visited[start] = 1;
    while (rear != front)
    {
        start = queue[++front];
        printf("%c\t", start + 'A');
        for (i = 0; i < MAX; i++)
        {
            if (adj[start][i] == 1 && visited[i] == 0)
            {
                queue[++rear] = i;
                visited[i] = 1;
            }
        }
    }
}

int main()
{
    int visited[MAX] = {0};
    int adj[MAX][MAX], i, j;
    printf("\n Enter the adjacency matrix: ");
    for (i = 0; i < MAX; i++)
        for (j = 0; j < MAX; j++)
            scanf("%d", &adj[i][j]);
    breadth_first_search(adj, visited, 0);
    return 0;
}

Output
A sample run reads the adjacency matrix and prints the nodes of the graph in breadth-first order.

Depth-First Search Algorithm
The depth-first search algorithm progresses by expanding the starting node of G and then going deeper and deeper until the goal node is found, or until a node that has no children is encountered. When a dead end is reached, the algorithm backtracks, returning to the most recent node that has not been completely explored. In other words, depth-first search begins at a starting node A, which becomes the current node. Then, it examines each node N along a path P which begins at A. That is, we process a neighbour of A, then a neighbour of a neighbour of A, and so on. During the execution of the algorithm, if we reach a path that has a node N that has already been processed, then we backtrack to the current node. Otherwise, the unvisited (unprocessed) node becomes the current node.

The algorithm proceeds like this until we reach a dead end (the end of path P). On reaching the dead end, we backtrack to find another path P'. The algorithm terminates when backtracking leads back to the starting node. In this algorithm, edges that lead to a new vertex are called discovery edges, and edges that lead to an already visited vertex are called back edges.

Algorithm for depth-first search:
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until STACK is empty
Step 4:   Pop the top node N. Process it and set its STATUS = 3 (processed state)
Step 5:   Push on the stack all the neighbours of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state)
        [END OF LOOP]
Step 6: EXIT

Observe that this algorithm is similar to the in-order traversal of a binary tree. Its implementation is similar to that of the breadth-first search algorithm, but here we use a stack instead of a queue. Again, we use a variable STATUS to represent the current state of the node.

Example
Consider the graph G given in Fig. The adjacency list of G is also given. Suppose we want to print all the nodes that can be reached from the node H (including H itself). One alternative is to use a depth-first search of G starting at node H. The procedure is explained below.
24,618 | data structures using adjacency lists abcd be cbg dcg ecf fch gfhi hei if figure graph and its adjacency list solution (apush onto the stack stackh (bpop and print the top element of the stackthat ish push all the neighbours of onto the stack that are in the ready state the stack now becomes printh stackei (cpop and print the top element of the stackthat isi push all the neighbours of onto the stack that are in the ready state the stack now becomes printi stackef (dpop and print the top element of the stackthat isf push all the neighbours of onto the stack that are in the ready state (note has two neighboursc and but only will be addedas is not in the ready state the stack now becomes printf stackec (epop and print the top element of the stackthat isc push all the neighbours of onto the stack that are in the ready state the stack now becomes printc stackebg (fpop and print the top element of the stackthat isg push all the neighbours of onto the stack that are in the ready state since there are no neighbours of that are in the ready stateno push operation is performed the stack now becomes printg stackeb (gpop and print the top element of the stackthat isb push all the neighbours of onto the stack that are in the ready state since there are no neighbours of that are in the ready stateno push operation is performed the stack now becomes printb stacke (hpop and print the top element of the stackthat ise push all the neighbours of onto the stack that are in the ready state since there are no neighbours of that are in the ready stateno push operation is performed the stack now becomes empty printe stacksince the stack is now emptythe depth-first search of starting at node is complete and the nodes which were printed are |
24,619 | hifcgbe these are the nodes which are reachable from the node features of depth-first search algorithm space complexity the space complexity of depth-first search is lower than that of breadthfirst search time complexity the time complexity of depth-first search is proportional to the number of vertices plus the number of edges in the graphs that are traversed the time complexity can be given as ( (| | |)completeness depth-first search is said to be complete algorithm if there is solutiondepthfirst search will find it regardless of the kind of graph but in case of an infinite graphwhere there is no possible solutionit will diverge applications of depth-first search algorithm depth-first search is useful forfinding path between two specified nodesu and vof an unweighted graph finding path between two specified nodesu and vof weighted graph finding whether graph is connected or not computing the spanning tree of connected graph programming example write program to implement the depth-first search algorithm #include #define max void depth_first_search(int adj[][max],int visited[],int startint stack[max]int top - iprintf("% -",start )visited[start stack[++topstartwhile(top - start stack[top]for( maxi++if(adj[start][ &visited[ = stack[++topiprintf("% -" )visited[ breakif( =maxtop--int main(int adj[max][max]int visited[max{ }ij |
24,620 | data structures using printf("\ enter the adjacency matrix")for( maxi++for( maxj++scanf("% "&adj[ ][ ])printf("dfs traversal")depth_first_search(adj,visited, )printf("\ ")return output enter the adjacency matrix dfs traversala - - - topological sorting topological sort of directed acyclic graph (dagg is defined as linear ordering of its nodes in which each node comes before all nodes to which it has outbound edges every dag has one or more number of topological sorts topological sort of dag is an ordering of the vertices of such that if contains an edge (uv)then appears before in the ordering note that topological sort is possible only on directed acyclic graphs that do not have any example consider three dags shown in fig cycles for dag that contains cyclesno and their possible topological sorts linear ordering of its vertices is possible in simple wordsa topological ordering of dag is an ordering of its vertices such that any directed path in traverses the vertices in increasing order topological sorting is widely used in scheduling applicationsjobsor tasks the jobs that have to be completed are represented by nodesand there is an edge from node to if job must be completed before job can be started topological sort topological sort topological sort topological sort can be given ascan be given ascan be given asof such graph gives an order in which the abcde abdcef abcfdec given jobs must be performed abced acbde acbed figure abdcfe abcdef abcdfe abfed abcdefg abcdfeg abdcefg abdcfeg topological sort one main property of dag is that more the number of edges in dagfewer the number of topological orders it has this is because each edge (uvforces node to occur before vwhich restricts the number of valid permutations of the nodes algorithm the algorithm for the topological sort of graph (fig that has no cycles focuses on selecting node with zero in-degreethat isa node that has no predecessor the two main steps involved in the topological sort algorithm 
include selecting a node N with zero in-degree, and deleting N from the graph along with its edges.
24,621 | step find the in-degree indeg(nof every node in the graph step enqueue all the nodes with zero in-degree step repeat steps and until the queue is empty step remove the front node of the queue by setting front front step repeat for each neighbour of node nadelete the edge from to by setting indeg(mindeg( bif indeg(mthen enqueue mthat isadd to the rear of the queue [end of inner loop[end of loopstep exit figure algorithm for topological sort we will use queue to hold the nodes with zero in-degree the order in which the nodes will be deleted from the graph will depend on the sequence in which the nodes are inserted in the queue thenwe will use variable indegwhere indeg(nwill represent the in-degree of node notes the in-degree can be calculated in two ways--either by counting the incoming edges from the graph or traversing through the adjacency list the running time of the algorithm for topological sorting can be given linearly as the number of nodes plus the number of edges (| |+| |example consider directed acyclic graph given in fig we use the algorithm given above to find topological sort of the steps are given as belowc adjacency lists ab bcde ce de ef gd figure graph step find the in-degree indeg(nof every node in the graph indeg( indeg( indeg( indeg( indeg( indeg( indeg( step enqueue all the nodes with zero in-degree front rear queue ag step remove the front element from the queue by setting front front so front rear queue ag step set indeg(bindeg( since is the neighbour of note that indeg(bis so add it on the queue the queue now becomes front rear queue agb delete the edge from to the graph now becomes as shown in the figure below |
24,622 | data structures using step remove the front element from the queue by setting front front so front rear queue agb step set indeg(dindeg( since is the neighbour of nowindeg( indeg( indeg( indeg( delete the edge from to the graph now becomes as shown in the figure below step remove the front element from the queue by setting front front so front rear queue agb step set indeg(cindeg( indeg(dindeg( indeg(eindeg( since cdand are the neighbours of nowindeg( indeg( and indeg( step since the in-degree of node and is zeroadd and at the rear of the queue the queue can be given as belowfront rear queue agbcd the graph now becomes as shown in the figure below step remove the front element from the queue by setting front front so front rear queue agbcd step set indeg(eindeg( since is the neighbour of nowindeg( the graph now becomes as shown in the figure below step remove the front element from the queue by setting front front so front rear queue abgcd step set indeg(eindeg( since is the neighbour of nowindeg( so add to the queue the queue now becomes front rear queue agbcde step delete the edge between an the graph now becomes as shown in the figure below |
24,623 | step remove the front element from the queue by setting front front so front rear queue agbcde step set indeg(findeg( since is the neighbour of now indeg( so add to the queue the queue now becomesfront rear queue agbcdef step delete the edge between and the graph now becomes as shown in the figure below there are no more edges in the graph and all the nodes have been added to the queueso the topological sort of can be given asagbcdef when we arrange these nodes in sequencewe find that if there is an edge from to vthen appears before figure topological sort of programming example write program to implement topological sorting #include #include #define max int ,adj[max][max]int front - ,rear - ,queue[max]void create_graph(void)void display()void insert_queue(int)int delete_queue(void)int find_indegree(int)void main(int node, ,del_nodeiint topsort[max],indeg[max]create_graph()printf("\ the adjacency matrix is:")display()/*find the in-degree of each node*for(node node <nnode++indeg[nodefind_indegree(node)ifindeg[node= |
24,624 | data structures using insert_queue(node)while(front <rear/*continue loop until queue is empty *del_node delete_queue()topsort[jdel_node/*add the deleted node to topsort* ++/*delete the del_node edges *for(node node <nnode++if(adj[del_node][node= adj[del_node][node indeg[nodeindeg[node if(indeg[node= insert_queue(node)printf("the topological sorting can be given as :\ ")for(node= ; < ;node++printf("% ",topsort[node])void create_graph(int ,max_edges,org,destprintf("\ enter the number of vertices")scanf("% ",& )max_edges *( )for( <max_edgesi++printf("\ enter edge % ( to quit)", )scanf("% % ",&org,&dest)if((org = &(dest = )breakiforg |dest |org < |dest < printf("\ invalid edge") --else adj[org][dest void display(int ,jfor( = ; <= ; ++printf("\ ")for( = ; <= ; ++printf("% ",adj[ ][ ])void insert_queue(int nodeif (rear==max- printf("\ overflow ")else if (front =- /*if queue is initially empty * |
24,625 | front= queue[++rearnode int delete_queue(int del_nodeif (front =- |front rearprintf("\ underflow ")return else del_node queue[front++]return del_nodeint find_indegree(int nodeint ,in_deg for( <ni++ifadj[ ][node= in_deg++return in_degoutput enter number of vertices enter edge ( to quit) enter edge ( to quit) enter edge ( to quit) enter edge ( to quit) enter edge ( to quit) enter edge ( to quit) enter edge ( to quit) enter edge ( to quit) the topological sorting can be given as shortest path algorithms in this sectionwe will discuss three different algorithms to calculate the shortest path between the vertices of graph these algorithms includeminimum spanning tree dijkstra' algorithm warshall' algorithm while the first two use an adjacency list to find the shortest pathwarshall' algorithm uses an adjacency matrix to do the same minimum spanning trees spanning tree of connectedundirected graph is sub-graph of which is tree that connects all the vertices together graph can have many different spanning trees we can assign weights to each edge (which is number that represents how unfavourable the edge is)and use it to assign weight to spanning tree by calculating the sum of the weights of the edges in that spanning |
24,626 | data structures using tree minimum spanning tree (mstis defined as spanning tree with weight less than or equal to the weight of every other spanning tree in other wordsa minimum spanning tree is spanning tree that has weights associated with its edgesand the total weight of the tree (the sum of the weights of its edgesis at minimum an analogy take an analogy of cable tv company laying cable in new neighbourhood if it is restricted to bury the cable only along particular pathsthen we can make graph that represents the points that are connected by those paths some paths may be more expensive (due to their length or the depth at which the cable should be buriedthan the others we can represent these paths by edges with larger weights thereforea spanning tree for such graph would be subset of those paths that has no cycles but still connects to every house many distinct spanning trees can be obtained from this graphbut minimum spanning tree would be the one with the lowest total cost properties possible multiplicity there can be multiple minimum spanning trees of the same weight particularlyif all the weights are the samethen every spanning tree will be minimum uniqueness when each edge in the graph is assigned different weightthen there will be only one unique minimum spanning tree minimum-cost subgraph if the edges of graph are assigned non-negative weightsthen minimum spanning tree is in fact the minimum-cost subgraph or tree that connects all vertices cycle property if there exists cycle in the graph that has weight larger than that of other edges of cthen this edge cannot belong to an mst usefulness minimum spanning trees can be computed quickly and easily to provide optimal solutions these trees create sparse subgraph that reflects lot about the original graph simplicity the minimum spanning tree of weighted graph is nothing but spanning tree of the graph which comprises of - edges of minimum total weight note that for an unweighted graphany spanning 
tree is a minimum spanning tree. Example: Consider the unweighted graph given below (Fig.). From G, we can draw many distinct spanning trees; eight of them are given here. For an unweighted graph, every spanning tree is a minimum spanning tree. Figure: an unweighted graph and its spanning trees. Example: Consider the weighted graph G shown in Fig. From G, we can draw three distinct spanning trees, but only a single minimum spanning tree can be obtained, that is, the one that has the minimum weight (cost) associated with it.
24,627 | of all the spanning trees given in fig the one that is highlighted is called the minimum spanning treeas it has the lowest cost associated with it (weighted grapha (total cost (total cost (total cost (total cost (total cost (total cost (total cost (total cost (total cost figure weighted graph and its spanning trees applications of minimum spanning trees msts are widely used for designing networks for instancepeople separated by varying distances wish to be connected together through telephone network minimum spanning tree is used to determine the least costly paths with no cycles in this networkthereby providing connection that has the minimum cost involved msts are used to find airline routes while the vertices in the graph denote citiesedges represent the routes between these cities no doubtmore the distance between the citieshigher will be the amount charged thereforemsts are used to optimize airline routes by finding the least costly path with no cycles msts are also used to find the cheapest way to connect terminalssuch as citieselectronic components or computers via roadsairlinesrailwayswires or telephone lines msts are applied in routing algorithms for finding the most efficient path prim' algorithm prim' algorithm is greedy algorithm that is used to form minimum spanning tree for connected weighted undirected graph in other wordsthe algorithm builds tree that includes every vertex and subset of the edges in such way that the total weight of all the edges in the tree is minimized for thisthe algorithm maintains three sets of vertices which can be given as belowtree vertices vertices that are part of the minimum spanning tree fringe vertices vertices that are currently not part of tbut are adjacent to some tree vertex unseen vertices vertices that are neither tree vertices nor fringe vertices fall under this category the steps involved in the prim' algorithm are shown in fig choose starting vertex branch out from the starting vertex and during each 
iteration, select a new vertex and an edge. Basically, during each iteration of the algorithm, we have to select a vertex from the fringe vertices in such a way that the edge connecting the tree vertex and the new vertex has the minimum weight assigned to it. Step 1: Select a starting vertex. Step 2: Repeat Steps 3 and 4 until there are no more fringe vertices. Step 3: Select an edge connecting a tree vertex and a fringe vertex that has minimum weight. Step 4: Add the selected edge and the vertex to the minimum spanning tree. [End of loop] Step 5: Exit. Figure: Prim's algorithm
24,628 | data structures using the running time of prim' algorithm can be given as ( log vwhere is the number of edges and is the number of vertices in the graph figure example construct minimum spanning tree of the graph given in fig graph step choose starting vertex step add the fringe vertices (that are adjacent to athe edges connecting the vertex and fringe vertices are shown with dotted lines step select an edge connecting the tree vertex and the fringe vertex that has the minimum weight and add the selected edge and the vertex to the minimum spanning tree since the edge connecting and has less weightadd to the tree now is not fringe vertex but tree vertex step add the fringe vertices (that are adjacent to cstep select an edge connecting the tree vertex and the fringe vertex that has the minimum weight and add the selected edge and the vertex to the minimum spanning tree since the edge connecting and has less weightadd to the tree now is not fringe vertex but tree vertex step add the fringe vertices (that are adjacent to bstep select an edge connecting the tree vertex and the fringe vertex that has the minimum weight and add the selected edge and the vertex to the minimum spanning tree since the edge connecting and has less weightadd to the tree now is not fringe vertex but tree vertex step notenow node is not connectedso we will add it in the tree because minimum spanning tree is one in which all the nodes are connected with - edges that have minimum weight sothe minimum spanning tree can now be given as step step step step step step step step example construct minimum spanning tree of the graph given in fig start the prim' algorithm from vertex figure graph |
24,629 | kruskal' algorithm kruskal' algorithm is used to find the minimum spanning tree for connected weighted graph the algorithm aims to find subset of the edges that forms tree that includes every vertex the total weight of all the edges in the tree is minimized howeverif the graph is not connectedthen it finds minimum spanning forest note that forest is collection of trees similarlya minimum spanning forest is collection of minimum spanning trees kruskal' algorithm is an example of greedy algorithmas it makes the locally optimal choice at each stage with the hope of finding the global optimum the algorithm is shown in fig step create forest in such way that each graph is separate tree step create priority queue that contains all the edges of the graph step repeat steps and while is not empty step remove an edge from step if the edge obtained in step connects two different treesthen add it to the forest (for combining two trees into one treeelse discard the edge step end figure kruskal' algorithm |
24,630 | data structures using in the algorithmwe use priority queue in which edges that have minimum weight takes priority over any other edge in the graph when the kruskal' algorithm terminatesthe forest has only one component and forms minimum spanning tree of the graph the running time of kruskal' algorithm can be given as ( log )where is the number of edges and is the number of vertices in the graph example apply kruskal' algorithm on the graph given in fig initiallywe have {{ }{ }{ }{ }{ }{ }mst { {(ad)(ef)(ce)(ed)(cd)(df)(ac)(ab)(bc)step remove the edge (adfrom and make the following changesfigure {{ad}{ }{ }{ }{ }mst {adq {(ef)(ce)(ed)(cd)(df)(ac)(ab)(bc) step remove the edge (effrom and make the following changesa {{ad}{ }{ }{ef}mst {(ad)(ef) {(ce)(ed)(cd)(df)(ac)(ab)(bc) step remove the edge (cefrom and make the following changesa {{ad}{ }{cef}mst {(ad)(ce)(ef) {(ed)(cd)(df)(ac)(ab)(bc) step remove the edge (edfrom and make the following changesa {{acdef}{ }mst {(ad)(ce)(ef)(ed) {(cd)(df)(ac)(ab)(bc) step remove the edge (cdfrom note that this edge does not connect different treesso simply discard this edge only an edge connecting (adcefto will be added to the mst therefore |
24,631 | {{acdef}{ }mst {(ad)(ce)(ef)(ed) {(df)(ac)(ab)(bc)step remove the edge (dffrom note that this edge does not connect different treesso simply discard this edge only an edge connecting (adcefto will be added to the mst {{acdef}{ }mst {(ad)(ce)(ef)(ed) {(ac)(ab)(bc)step remove the edge (acfrom note that this edge does not connect different treesso simply discard this edge only an edge connecting (adcefto will be added to the mst {{acdef}{ }mst {(ad)(ce)(ef)(ed) {(ab)(bc)step remove the edge (abfrom and make the following changesa {abcdefmst {(ad)(ce)(ef)(ed)(ab) {(bc) step the algorithm continues until is empty since the entire forest has become one treeall the remaining edges will simply be discarded the resultant ms can be given as shown below {abcdefmst {(ad)(ce)(ef)(ed)(ab) { programming example write program which finds the cost of minimum spanning tree #include #include #define max int adj[max][max]tree[max][max]nvoid readmatrix(int ijprintf("\ enter the number of nodes in the graph ")scanf("% "& )printf("\ enter the adjacency matrix of the graph")for ( <ni++for ( <nj++scanf("% "&adj[ ][ ])int spanningtree(int src |
24,632 | data structures using int visited[max] [max]parent[max]int ijkminuvcostfor ( <ni++ [iadj[src][ ]visited[ parent[isrcvisited[src cost for ( ni++min for ( <nj++if (visited[ ]== & [jminmin [ ] jcost + [ ]visited[ //cost cost [ ]tree[ ][ parent[ ]tree[ ++][ ufor ( <nv++if (visited[ ]== &(adj[ ][vd[ ]) [vadj[ ][ ]parent[vureturn costvoid display(int costint iprintf("\ the edges of the mininum spanning tree are")for ( ni++printf(% % \ "tree[ ][ ]tree[ ][ ])printf("\ the total cost of the minimum spanning tree is % "cost)main(int sourcetreecostreadmatrix()printf("\ enter the source ")scanf("% "&source)treecost spanningtree(source)display(treecost)return output enter the number of nodes in the graph enter the adjacency matrix |
24,633 | enter the source the edges of the minimum spanning tree are the total cost of the minimum spanning tree is dijkstra' algorithm dijkstra' algorithmgiven by dutch scientist edsger dijkstra in is used to find the shortest path tree this algorithm is widely used in network routing protocolsmost notably is-is and ospf (open shortest path firstgiven graph and source node athe algorithm is used to find the shortest path (one having the lowest costbetween (source nodeand every other node moreoverdijkstra' algorithm is also used for finding the costs of the shortest paths from source node to destination node for exampleif we draw graph in which nodes represent the cities and weighted edges represent the driving distances between pairs of cities connected by direct roadthen dijkstra' algorithm when applied gives the shortest route between one city and all other cities algorithm dijkstra' algorithm is used to find the length of an optimal path between two nodes in graph the term optimal can mean anythingshortestcheapestor fastest if we start the algorithm with an initial nodethen the distance of node can be given as the distance from the initial node to that node figure explains the dijkstra' algorithm select the source node also called the initial node define an empty set that will be used to hold nodes to which shortest path has been found label the initial node with and insert it into repeat steps to until the destination node is in or there are no more labelled nodes in consider each node that is not in and is connected by an edge from the newly inserted node (aif the node that is not in has no label then set the label of the node the label of the newly inserted node the length of the edge (belse if the node that is not in was already labelledthen set its new label minimum (label of newly inserted vertex length of edgeold label pick node not in that has the smallest label assigned to it and add it to figure dijkstra' algorithm dijkstra' algorithm labels every 
node in the graph where the labels represent the distance (costfrom the source node to that node there are two kinds of labelstemporary and permanent temporary labels are assigned to nodes that have not been reachedwhile permanent labels are given to nodes that have been reached and their distance (costto the source node is known node must be permanent label or temporary labelbut not both the execution of this algorithm will produce either of the following two results if the destination node is labelledthen the label will in turn represent the distance from the source node to the destination node if the destination node is not labelledthen there is no path from the source to the destination node |
24,634 | data structures using example consider the graph given in fig taking as the initial nodeexecute the dijkstra' algorithm on it step set the label of and { step label of and thereforen {dfa step label of has been re-labelled because minimum ( has been re-labelled ( thereforen {dc fc step label of thereforen {dfcbf step label of and ( thereforen {dfcbgfigure graph step label of and thereforen {dfcbganote that we have no labels for node ethis means that is not reachable from only the nodes that are in are reachable from the running time of dijkstra' algorithm can be given as (| | +| |)= (| | where is the set of vertices and in the graph difference between dijkstra' algorithm and minimum spanning tree minimum spanning tree algorithm is used to traverse graph in the most efficient mannerbut dijkstra' algorithm calculates the distance from given vertex to every other vertex in the graph dijkstra' algorithm is very similar to prim' algorithm both the algorithms begin at specific node and extend outward within the graphuntil all other nodes in the graph have been reached the point where these algorithms differ is that while prim' algorithm stores minimum cost edgedijkstra' algorithm stores the total cost from source node to the current node moreoverdijkstra' algorithm is used to store the summation of minimum cost edgeswhile prim' algorithm stores at most one minimum cost edge warshall' algorithm if graph is given as =(ve)where is the set of vertices and is the set of edgesthe path matrix of can be found asp an this is lengthy processso warshall has given very efficient algorithm to calculate the path matrix warshall' algorithm defines matrices [if there is path from vi to vj opn as given in fig the path should not use any pk[ ][jthis means that if [ ][ then there other nodes except vkexists an edge from node vi to vj [otherwiseif [ ][ then there exists an edge from vi to vj that does not use any other vertex figure path matrix entry except if [ ][ then there 
exists a path from vi to vj that does not use any other vertex except v1 and v2. Note that P0 is equal to the adjacency matrix of G. If G contains n nodes, then Pn = P, which is the path matrix of the graph G. From the above discussion, we can conclude that Pk[i][j] is equal to 1 only when either of the two following cases occurs: (a) there is a path from vi to vj that does not use any other node except v1, ..., vk-1, so Pk-1[i][j] = 1; or (b)
24,635 | there is path from vi to vk and path from vk to vj where all the nodes use vk- thereforepk- [ ][ and pk- [ ][ hencethe path matrix pn can be calculated with the formula given aspk[ ][jpk- [ ][jv (pk- [ ][kl pk- [ ][ ]where indicates logical or operation and indicates logical and operation figure shows the warshall' algorithm to find the path matrix using the adjacency matrix step [initialize the path matrixrepeat step for to - where is the number of nodes in the graph step repeat step for to - step if [ ][jthen set [ ][jelse [ ][ [end of loop[end of loopstep [calculate the path matrix prepeat step for to - step repeat step for to - step repeat step for jto - step set pk[ ][jpk- [ ][jv (pk- [ ][kl pk- [ ][ ]step exit figure warshall' algorithm example consider the graph in fig and its adjacency matrix we can straightaway calculate the path matrix using the warshall' algorithm the path matrix can be given in single step asa ae ~ bcd figure graph and its path matrix thuswe see that calculating aa to calculate is very slow and inefficient technique as compared to the warshall' technique programming exam write program to implement warshall' algorithm to find the path matrix #include #include void read (int mat[ ][ ]int )void display (int mat[ ][ ]int )void mul(int mat[ ][ ]int )int main(int adj[ ][ ] [ ][ ]nijkclrscr() |
24,636 | data structures using printf("\ enter the number of nodes in the graph ")scanf("% "& )printf("\ enter the adjacency matrix ")read(adjn)clrscr()printf("\ the adjacency matrix is ")display(adjn)for( = ; < ; ++for( = ; < ; ++if(adj[ ][ = [ ][ else [ ][ for( = < ; ++for( = ; < ; ++for( = ; < ; ++ [ ][jp[ ][jp[ ][kp[ ][ ])printf("\ the path matrix is :")display (pn)getch()return void read(int mat[ ][ ]int nint ijfor( = ; < ; ++for( = ; < ; ++printf("\ mat[% ][% "ij)scanf("% "&mat[ ][ ])void display(int mat[ ][ ]int nint ijfor( = ; < ; ++printf("\ ")for( = ; < ; ++printf("% \ "mat[ ][ ])output the adjacency matrix is |
24,637 | the path matrix is modified warshall' algorithm warshall' algorithm can be modified to obtain matrix that gives the shortest paths between the nodes in graph as an input to the algorithmwe take the adjacency matrix of and replace all the values of which are zero by infinity (*infinity (*denotes very large number and indicates that there is no path between the vertices in warshall' modified algorithmwe obtain set of matrices qm using the formula given below qk[ ][jminimummk- [ ][ ]mk- [ ][kmk- [ ][ ] is exactly the same as with little difference that every element having zero value in is replaced by (*in using the given formulathe matrix qn will give the path matrix that has the shortest path between the vertices of the graph warshall' modified algorithm is shown in fig step [initialize the shortest path matrixqrepeat step for to - where is the number of nodes in the graph step repeat step for to - step if [ ][jthen set [ ][jinfinity (or else [ ][ja[ ][ [end of loop[end of loopstep [calculate the shortest path matrix qrepeat step for to - step repeat step for to - step repeat step for jto - step if [ ][ < [ ][kq[ ][jset [ ][jq[ ][jelse set [ ][jq[ ][kq[ ][ [end of if[end of loop[end of loop[end of loopstep exit figure modified warshall' algorithm example consider the unweighted graph given in fig and apply warshall' algorithm to it figure graph |
Example: Consider the weighted graph G given in the figure and apply Warshall's shortest path algorithm to it.
Figure: Graph G

Programming Example
Write a program to implement Warshall's modified algorithm to find the shortest path.

#include <stdio.h>
#include <conio.h>
#define INFINITY 9999
void read(int mat[5][5], int n);
void display(int mat[5][5], int n);
int main()
{
    int adj[5][5], Q[5][5], n, i, j, k;
    clrscr();
    printf("\n Enter the number of nodes in the graph : ");
    scanf("%d", &n);
    printf("\n Enter the adjacency matrix : ");
    read(adj, n);
    clrscr();
    printf("\n The adjacency matrix is : ");
    display(adj, n);
    for(i = 0; i < n; i++)
        for(j = 0; j < n; j++)
        {
            if(adj[i][j] == 0)
                Q[i][j] = INFINITY;
            else
                Q[i][j] = adj[i][j];
        }
    for(k = 0; k < n; k++)
        for(i = 0; i < n; i++)
            for(j = 0; j < n; j++)
            {
                if(Q[i][j] <= Q[i][k] + Q[k][j])
                    Q[i][j] = Q[i][j];
                else
                    Q[i][j] = Q[i][k] + Q[k][j];
            }
    printf("\n\n");
    display(Q, n);
    getch();
    return 0;
}
void read(int mat[5][5], int n)
{
    int i, j;
    for(i = 0; i < n; i++)
        for(j = 0; j < n; j++)
        {
            printf("\n mat[%d][%d] = ", i, j);
            scanf("%d", &mat[i][j]);
        }
}
void display(int mat[5][5], int n)
{
    int i, j;
    for(i = 0; i < n; i++)
    {
        printf("\n");
        for(j = 0; j < n; j++)
            printf("%d\t", mat[i][j]);
    }
}

APPLICATIONS OF GRAPHS
Graphs are constructed for various types of applications, such as:
- In circuit networks, where points of connection are drawn as vertices and component wires become the edges of the graph.
- In transport networks, where stations are drawn as vertices and routes become the edges of the graph.
- In maps that draw cities/states/regions as vertices and adjacency relations as edges.
- In program flow analysis, where procedures or modules are treated as vertices and calls to these procedures are drawn as edges of the graph. Once we have a graph of a particular concept, it can easily be used for finding shortest paths, project planning, etc.
- In flowcharts or control-flow graphs, where the statements and conditions in a program are represented as nodes and the flow of control is represented by the edges.
- In state transition diagrams, where the nodes are used to represent states and the edges represent legal moves from one state to the other.

Graphs are also used to draw activity network diagrams. These diagrams are extensively used as a project management tool to represent the interdependent relationships between groups, steps, and tasks that have a significant impact on the project. An activity network diagram (AND), also known as an arrow diagram or a PERT (Program Evaluation Review Technique) chart, is used to identify time sequences of events which are pivotal to objectives. It is also helpful when a project has multiple activities which need simultaneous management. ANDs help the project development team to create a realistic project schedule by drawing graphs that exhibit:
- the total amount of time needed to complete the project,
- the sequence in which activities must be performed,
- the activities that can be performed simultaneously, and
- the critical activities that must be monitored on a regular basis.
A sample AND is shown in the figure.
Figure: Activity network diagram (boxes denote activities; arrows denote dependency among activities)

POINTS TO REMEMBER
- A graph is basically a collection of vertices (also called nodes) and edges that connect these vertices.
- The degree of a node is the total number of edges containing the node. When the degree of a node is zero, it is also called an isolated node.
- A path is known as a closed path if its first and last vertices are the same. A closed simple path with length 3 or more is known as a cycle.
- A graph in which there exists a path between any two of its nodes is called a connected graph.
- An edge that has identical end-points is called a loop.
- The size of a graph is the total number of edges in it.
- The out-degree of a node is the number of edges that originate at it, and the in-degree of a node is the number of edges that terminate at it. A node is known as a sink if it has a positive in-degree but a zero out-degree.
- The transitive closure of a graph is constructed to answer reachability questions.
- Since an adjacency matrix contains only 0s and 1s, it is called a bit matrix or a Boolean matrix. The memory use of an adjacency matrix is O(n^2), where n is the number of nodes in the graph.
- A topological sort of a directed acyclic graph is defined as a linear ordering of its nodes in which each node comes before all the nodes to which it has outbound edges. Every DAG has one or more topological sorts.
- A vertex v of G is called an articulation point if removing v along with the edges incident to v results in a graph that has at least two connected components.
- A biconnected graph is defined as a connected graph that has no articulation vertices.
- Breadth-first search is a graph search algorithm that begins at the root node and explores all the neighbouring nodes. Then, for each of those nearest nodes, the algorithm explores their unexplored neighbour nodes, and so on, until it finds the goal.
- The depth-first search algorithm progresses by expanding the starting node of G, going deeper and deeper until a goal node is found, or until a node that has no children is encountered.
- A spanning tree of a connected, undirected graph G is a sub-graph of G which is a tree that connects all the vertices together.
- Kruskal's algorithm is an example of a greedy algorithm, as it makes the locally optimal choice at each stage with the hope of finding the global optimum.
- Dijkstra's algorithm is used to find the length of an optimal path between two nodes in a graph.

EXERCISES

Review Questions
- Explain the relationship between a linked list structure and a digraph.
- What is a graph? Explain its key terms.
- How are graphs represented inside a computer's memory? Which method do you prefer and why?
- Consider the graph given below. (a) Write the adjacency matrix of the graph. (b) Write the path matrix of the graph. (c) Is the graph biconnected? (d) Is the graph complete? (e) Find the shortest path matrix using Warshall's algorithm.
- Consider the graph given below. State all the simple paths from A to D, B to D, and C to A. Also, find out the in-degree and out-degree of each node. Is there any source or sink in the graph?
- Consider the graph given below and find out its depth-first and breadth-first traversal schemes.
- Explain the graph traversal algorithms in detail with an example.
- Draw a complete undirected graph having five nodes.
- Consider the graph given below and find out the degree of each node.
- Differentiate between depth-first search and breadth-first search traversal of a graph.
- Explain the topological sorting of a graph.
- Define a spanning tree. When is a spanning tree called a minimum spanning tree? Take a weighted graph of your choice and find out its minimum spanning tree.
- Explain Prim's algorithm.
- Write a brief note on Kruskal's algorithm.
- Write a short note on Dijkstra's algorithm.
- Differentiate between Dijkstra's algorithm and the minimum spanning tree algorithm.
- Consider the graph given below. Find the minimum spanning tree of this graph using (a) Prim's algorithm, (b) Kruskal's algorithm, and (c) Dijkstra's algorithm.
- Given the following adjacency matrix, draw the weighted graph.
- Briefly discuss Warshall's algorithm. Also, discuss its modified version.
- Show the working of the Floyd–Warshall algorithm to find the shortest paths between all pairs of nodes in the following graph.
- Write a short note on the transitive closure of a graph.
- Given the adjacency matrix of a graph, write a program to calculate the degree of a node in the graph.
- Given the adjacency matrix of a graph, write a program to calculate the in-degree and the out-degree of a node in the graph.
- Given the adjacency matrix of a graph, write a function isFullyConnectedGraph which returns 1 if the graph is fully connected and 0 otherwise.
- In which kind of graph do we use topological sorting?
- Consider the graph given below and show its adjacency list in the memory.
- Consider the graph given in the previous question and show the changes in the graph, as well as in its adjacency list, when a new node and edges (A, E) and (C, E) are added to it. Also, delete edge (B, D) from the graph.
- Consider five cities: (1) New Delhi, (2) Mumbai, (3) Chennai, (4) Bangalore, and (5) Kolkata, and a list of flights that connect these cities as shown in the following table. Use the given information to construct a graph.

    Flight No.    Origin    Destination

Programming Exercises
- Write a program to create and print a graph.
- Write a program to determine whether there is at least one path from the source to the destination.

Multiple-choice Questions
- An edge that has identical end-points is called a
  (a) multi-path  (b) loop  (c) cycle  (d) multi-edge
- The total number of edges containing the node u is called its
  (a) in-degree  (b) out-degree  (c) degree  (d) none of these
- A graph in which there exists a path between any two of its nodes is called a
  (a) complete graph  (b) connected graph  (c) digraph  (d) in-directed graph
- The number of edges that originate at u is called its
  (a) in-degree  (b) out-degree  (c) degree  (d) source
- The memory use of an adjacency matrix is
  (a) O(n)  (b) O(n^2)  (c) O(n^3)  (d) O(log n)
- The term optimal can mean
  (a) shortest  (b) cheapest  (c) fastest  (d) all of these
- How many articulation vertices does a biconnected graph contain?

True or False
- Kruskal's algorithm is an example of a greedy algorithm.
- A graph with multiple edges and/or a loop is called a multi-graph.
- A graph is a linear data structure.
- The in-degree of a node is the number of edges leaving that node.
- The size of a graph is the total number of vertices in it.
- A sink has zero in-degree but a positive out-degree.
- The space complexity of depth-first search is lower than that of breadth-first search.
- A node is known as a sink if it has a positive out-degree but zero in-degree.
- A directed graph that has no cycles is called a directed acyclic graph.
- A graph can have many different spanning trees.
- Fringe vertices are not part of T but are adjacent to some tree vertex.

Fill in the Blanks
- A ______ has zero degree.
- The in-degree of a node is the number of edges that ______ at it.
- An adjacency matrix is also known as a ______ matrix.
- A path is known as a ______ path if the edge has the same end-points.
- Vertices that are part of the minimum spanning tree are called ______.
- The ______ of a graph is constructed to answer reachability questions.
- An ______ is a vertex of G if removing it along with the edges incident to it results in a graph that has at least two connected components.
- A ______ graph is a connected graph that is not broken into disconnected pieces by deleting any single vertex.
- An edge is called a ______ if removing that edge results in a disconnected graph.
SEARCHING AND SORTING

LEARNING OBJECTIVE
Searching and sorting are two of the most common operations in computer science. Searching refers to finding the position of a value in a collection of values. Sorting refers to arranging data in a certain order. The two commonly used orders are numerical order and alphabetical order. In this chapter, we will discuss the different techniques of searching and sorting arrays of numbers or characters.

INTRODUCTION TO SEARCHING
Searching means to find whether a particular value is present in an array or not. If the value is present in the array, then searching is said to be successful and the searching process gives the location of that value in the array. However, if the value is not present in the array, the searching process displays an appropriate message, and in this case searching is said to be unsuccessful. There are two popular methods for searching the array elements: linear search and binary search. The algorithm that should be used depends entirely on how the values are organized in the array. For example, if the elements of the array are arranged in ascending order, then binary search should be used, as it is more efficient for sorted lists in terms of complexity. We will discuss all these methods in detail in this section.

LINEAR SEARCH
Linear search, also called sequential search, is a very simple method used for searching an array for a particular value. It works by comparing the value to be searched with every element of the array one by one in a sequence until a match is found. Linear search is mostly used to search an unordered list of elements (an array in which data elements are not sorted). For example, suppose an array A[] is declared and initialized as
    int A[] = { ... };
LINEAR_SEARCH(A, N, VAL)
Step 1: [INITIALIZE] SET POS = -1
Step 2: [INITIALIZE] SET I = 0
Step 3: Repeat Step 4 while I < N
Step 4:   IF A[I] = VAL
             SET POS = I
             PRINT POS
             Go to Step 6
          [END OF IF]
          SET I = I + 1
        [END OF LOOP]
Step 5: IF POS = -1
          PRINT "VALUE IS NOT PRESENT IN THE ARRAY"
        [END OF IF]
Step 6: EXIT

Figure: Algorithm for linear search

and the value to be searched is VAL. Searching means to find whether the value VAL is present in the array or not. If yes, then the search returns the position of its occurrence, POS (index starting from 0).

In Steps 1 and 2 of the algorithm, we initialize the values of POS and I. In Step 3, a while loop is executed that runs till I is less than N (the total number of elements in the array). In Step 4, a check is made to see if a match is found between the current array element and VAL. If a match is found, then the position of the array element is printed; else the value of I is incremented to match the next element with VAL. However, if all the array elements have been compared with VAL and no match is found, then it means that VAL is not present in the array.

Complexity of Linear Search Algorithm
Linear search executes in O(n) time, where n is the number of elements in the array. Obviously, the best case of linear search is when VAL is equal to the first element of the array. In this case, only one comparison will be made. Likewise, the worst case will happen when either VAL is not present in the array or it is equal to the last element of the array. In both cases, n comparisons will have to be made. However, the performance of the linear search algorithm can be improved by using a sorted array.

Programming Example
Write a program to search an element in an array using the linear search technique.

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#define SIZE 20 /* added so the size of the array can be altered more easily */
int main(int argc, char *argv[])
{
    int arr[SIZE], num, i, n, found = 0, pos = -1;
    printf("\n Enter the number of elements in the array : ");
    scanf("%d", &n);
    printf("\n Enter the elements : ");
    for(i = 0; i < n; i++)
        scanf("%d", &arr[i]);
    printf("\n Enter the number that has to be searched : ");
    scanf("%d", &num);
    for(i = 0; i < n; i++)
    {
        if(arr[i] == num)
        {
            found = 1;
            pos = i;
            printf("\n %d is found in the array at position %d", num, i + 1);
            /* +1 added so that it displays the number's position in the
               array instead of its index */
            break;
        }
    }
    if(found == 0)
        printf("\n %d does not exist in the array", num);
    return 0;
}

BINARY SEARCH
Binary search is a searching algorithm that works efficiently with a sorted list. The mechanism of binary search can be better understood by an analogy of a telephone directory. When we are searching for a particular name in a directory, we first open the directory from the middle and then decide whether to look for the name in the first part of the directory or in the second part of the directory. Again, we open some page in the middle and the whole process is repeated until we finally find the right name.

Take another analogy. How do we find words in a dictionary? We first open the dictionary somewhere in the middle. Then, we compare the first word on that page with the desired word whose meaning we are looking for. If the desired word comes before the word on the page, we look in the first half of the dictionary, else we look in the second half. Again, we open a page in the first half of the dictionary and compare the first word on that page with the desired word and repeat the same procedure until we finally get the word. The same mechanism is applied in binary search.

Now, let us consider how this mechanism is applied to search for a value in a sorted array. Consider an array A[] that is declared and initialized as
    int A[] = { ... };
and let VAL be the value to be searched. The algorithm proceeds as follows. Initially, BEG = 0, END = n-1, and MID = (BEG + END)/2. VAL is compared with A[MID]. If A[MID] is less than VAL, the value must lie in the second half of the array, so BEG is moved past MID and MID is recomputed for the new, smaller segment. If A[MID] is greater than VAL, the first half of the array is searched in the same way. The segment is repeatedly halved in this manner until A[MID] = VAL.

In this algorithm, we see that BEG and END are the beginning and ending positions of the segment that we are looking at to search for the element. MID is calculated as (BEG + END)/2. Initially, BEG = lower_bound and END = upper_bound. The algorithm will
terminate when A[MID] = VAL. When the algorithm ends, we will set POS = MID. POS is the position at which the value is present in the array. However, if VAL is not equal to A[MID], then the values of BEG, END, and MID will be changed depending on whether VAL is smaller or greater than A[MID]:
(a) If VAL < A[MID], then VAL will be present in the left segment of the array. So, the value of END will be changed as END = MID - 1.
(b) If VAL > A[MID], then VAL will be present in the right segment of the array. So, the value of BEG will be changed as BEG = MID + 1.
Finally, if VAL is not present in the array, then eventually END will become less than BEG. When this happens, the algorithm will terminate and the search will be unsuccessful.

The figure below shows the algorithm for binary search. In Step 1, we initialize the values of the variables BEG, END, and POS. In Step 2, a while loop is executed until BEG is less than or equal to END. In Step 3, the value of MID is calculated. In Step 4, we check if the array value at MID is equal to VAL (the item to be searched in the array). If a match is
BINARY_SEARCH(A, lower_bound, upper_bound, VAL)
Step 1: [INITIALIZE] SET BEG = lower_bound, END = upper_bound, POS = -1
Step 2: Repeat Steps 3 and 4 while BEG <= END
Step 3:   SET MID = (BEG + END)/2
Step 4:   IF A[MID] = VAL
             SET POS = MID
             PRINT POS
             Go to Step 6
          ELSE IF A[MID] > VAL
             SET END = MID - 1
          ELSE
             SET BEG = MID + 1
          [END OF IF]
        [END OF LOOP]
Step 5: IF POS = -1
          PRINT "VALUE IS NOT PRESENT IN THE ARRAY"
        [END OF IF]
Step 6: EXIT

Figure: Algorithm for binary search

found, then the value of POS is printed and the algorithm exits. However, if a match is not found, then if the value of A[MID] is greater than VAL, the value of END is modified; otherwise, if A[MID] is less than VAL, the value of BEG is altered. In Step 5, if the value of POS = -1, then VAL is not present in the array and an appropriate message is printed on the screen before the algorithm exits.

Complexity of Binary Search Algorithm
The complexity of the binary search algorithm can be expressed as O(log n), where n is the number of elements in the array. The complexity of the algorithm is calculated depending on the number of comparisons that are made. In the binary search algorithm, we see that with each comparison, the size of the segment where the search has to be made is reduced to half. Thus, we can say that, in order to locate a particular value in the array, the total number of comparisons f(n) that will be made is given as 2^f(n) = n, or f(n) = log2 n.

Programming Example
Write a program to search an element in an array using binary search.

#include <stdio.h>
#include <conio.h>
#include <stdlib.h>
#define SIZE 20 /* added to make changing the size of the array easier */
int smallest(int arr[], int k, int n);  /* added to sort the array */
void selection_sort(int arr[], int n);  /* added to sort the array */
int main(int argc, char *argv[])
{
    int arr[SIZE], num, i, n, beg, end, mid, found = 0;
    printf("\n Enter the number of elements in the array : ");
    scanf("%d", &n);
    printf("\n Enter the elements : ");
    for(i = 0; i < n; i++)
        scanf("%d", &arr[i]);
    selection_sort(arr, n); /* added to sort the array */
    printf("\n The sorted array is :\n");
    for(i = 0; i < n; i++)
        printf(" %d\t", arr[i]);
    printf("\n\n Enter the number that has to be searched : ");
    scanf("%d", &num);
    beg = 0, end = n - 1;
    while(beg <= end)
    {
        mid = (beg + end)/2;
        if
        (arr[mid] == num)
        {
            printf("\n %d is present in the array at position %d", num, mid + 1);
            found = 1;
            break;
        }
        else if(arr[mid] > num)
            end = mid - 1;
        else
            beg = mid + 1;
    }
    if(beg > end && found == 0)
        printf("\n %d does not exist in the array", num);
    return 0;
}
int smallest(int arr[], int k, int n)
{
    int pos = k, small = arr[k], i;
    for(i = k + 1; i < n; i++)
    {
        if(arr[i] < small)
        {
            small = arr[i];
            pos = i;
        }
    }
    return pos;
}
void selection_sort(int arr[], int n)
{
    int k, pos, temp;
    for(k = 0; k < n; k++)
    {
        pos = smallest(arr, k, n);
        temp = arr[k];
        arr[k] = arr[pos];
        arr[pos] = temp;
    }
}

INTERPOLATION SEARCH
Interpolation search, also known as extrapolation search, is a searching technique that finds a specified value in a sorted array. The concept of interpolation search is similar to how we search for names in a telephone book, or for keys by which a book's entries are ordered. For example, when looking for the name "Bharat" in a telephone directory, we know that it will be near the extreme left, so applying the binary search technique by dividing the list in two halves each time is not a good idea. We must start scanning the extreme left in the first pass itself.

In each step of interpolation search, the remaining search space for the value to be found is calculated. The calculation is done based on the values at the bounds of the search space and the value to be searched. The value found at this estimated position is then compared with the value being searched for. If the two values are equal, then the search is complete.

INTERPOLATION_SEARCH(A, lower_bound, upper_bound, VAL)
Step 1: [INITIALIZE] SET LOW = lower_bound, HIGH = upper_bound, POS = -1
Step 2: Repeat Steps 3 and 4 while LOW <= HIGH
Step 3:   SET MID = LOW + (HIGH - LOW) x ((VAL - A[LOW]) / (A[HIGH] - A[LOW]))
Step 4:   IF VAL = A[MID]
             SET POS = MID
             PRINT POS
             Go to Step 6
          ELSE IF VAL < A[MID]
             SET HIGH = MID - 1
          ELSE
             SET LOW = MID + 1
          [END OF IF]
        [END OF LOOP]
Step 5: IF POS = -1
          PRINT "VALUE IS NOT PRESENT IN THE ARRAY"
        [END OF IF]
Step 6: EXIT

Figure: Algorithm for interpolation search
However, in case the values are not equal, then depending on the comparison, the remaining search space is reduced to the part before or after the estimated position. Thus, we see that interpolation search is similar to the binary search technique. However, the important difference between the two techniques is that binary search always selects the middle value of the remaining search space. It discards half of the values based on the comparison between the value found at the estimated position and the value to be searched. But in interpolation search, interpolation is used to find an item near the one being searched for, and then linear search is used to find the exact item.

Figure: Difference between binary search and interpolation search. Binary search divides the list at MIDDLE = (LOW + HIGH)/2, while interpolation search divides it at MIDDLE = LOW + (HIGH - LOW) x ((VAL - A[LOW]) / (A[HIGH] - A[LOW])).

Complexity of Interpolation Search Algorithm
When the elements of the list to be searched are uniformly distributed (average case), interpolation search makes about O(log(log n)) comparisons. However, in the worst case, that is, when the elements increase exponentially, the algorithm can make up to O(n) comparisons.

Example: Given a sorted list of numbers A[], search for a value using the interpolation search technique.
Solution: With LOW = 0 and HIGH = n - 1, compute
    MIDDLE = LOW + (HIGH - LOW) x ((VAL - A[LOW]) / (A[HIGH] - A[LOW]))
and compare A[MIDDLE] with the value to be searched. In this example, A[MIDDLE] is equal to the value to be searched, so the search terminates at the first probe.

Programming Example
Write a program to search an element in an array using interpolation search.

#include <stdio.h>
#include <conio.h>
#define MAX 20
int interpolation_search(int a[], int low, int high, int val)
{
    int mid;
    while(low <= high)
    {
        mid = low + (high - low) * ((val - a[low]) / (a[high] - a[low]));
        if(val == a[mid])
            return mid;
        if(val < a[mid])
            high = mid - 1;
        else
            low = mid + 1;
    }
    return -1;
}
int main()
{
    int arr[MAX], i, n, val, pos;
    clrscr();
    printf("\n Enter the number of elements in the array : ");
    scanf("%d", &n);
    printf("\n Enter the elements : ");
    for(i = 0; i < n; i++)
        scanf("%d", &arr[i]);
    printf("\n Enter the value to be searched : ");
    scanf("%d", &val);
    pos = interpolation_search(arr, 0, n - 1, val);
    if(pos == -1)
        printf("\n %d is not found in the array", val);
    else
        printf("\n %d is found at position %d", val, pos);
    getche();
    return 0;
}

JUMP SEARCH
When we have an already sorted list, another efficient algorithm to search for a value is jump search or block search. In jump search, it is not necessary to scan all the elements in the list to find the desired value. We just check an element and, if it is less than the desired value, then some of the elements following it are skipped by jumping ahead. After moving a little forward, an element is checked again. If the checked element is greater than the desired value, then we have a boundary and we are sure that the desired value lies between the previously checked element and the currently checked element. However, if the checked element is less than the value being searched for, then we again make a small jump and repeat the process. Once the boundary of the value is determined, a linear search is done to find the value and its position in the array.

For example, consider a sorted array A[] of length 9. If we have to find a particular value, then the following steps are performed using the jump search technique.
Step 1: The first three elements are checked. Since the last of them is smaller than the desired value, we make a jump ahead.
Step 2: The next three elements are checked. Since the last of them is smaller than the desired value, we make a jump ahead.
Step 3: The next three elements are checked. Since one of them is greater than the desired value, the desired value lies within the current boundary.
Step 4: A linear search is now done to find the value in the array.
The algorithm for jump search is given in the figure below.
JUMP_SEARCH(A, lower_bound, upper_bound, VAL, N)
Step 1: [INITIALIZE] SET STEP = sqrt(N), LOW = lower_bound, HIGH = upper_bound, POS = -1
Step 2: Repeat Step 3 for I = STEP to HIGH by STEP
Step 3:   IF VAL < A[I]
             SET HIGH = I - 1
             Go to Step 4
          ELSE
             SET LOW = I
          [END OF IF]
        [END OF LOOP]
Step 4: SET I = LOW
Step 5: Repeat Step 6 while I <= HIGH
Step 6:   IF A[I] = VAL
             SET POS = I
             PRINT POS
             Go to Step 8
          [END OF IF]
          SET I = I + 1
        [END OF LOOP]
Step 7: IF POS = -1
          PRINT "VALUE IS NOT PRESENT IN THE ARRAY"
        [END OF IF]
Step 8: EXIT

Figure: Algorithm for jump search

Advantage of Jump Search over Linear Search
Suppose we have a sorted list of n elements. A sequential search may have to examine nearly all n elements before it finds the desired value, whereas jump search finds the same value in about sqrt(n) jumps plus at most sqrt(n) steps of linear search within a block. Hence, jump search performs far better than linear search on a sorted list of elements.

Advantage of Jump Search over Binary Search
No doubt, binary search is very easy to implement and has a complexity of O(log n), but in the case of a list having a very large number of elements, jumping to the middle of the list to make comparisons is not a good idea, because if the value being searched is at the beginning of the list, then one (or even more) large step(s) in the backward direction would have to be taken. In such cases, jump search performs better, as we have to move only a little backward, and that too only once. Hence, when jumping back is slower than jumping forward, the jump search algorithm always performs better.

How to Choose the Step Length?
For the jump search algorithm to work efficiently, we must define a fixed size for the step. If the step size is 1, then the algorithm is the same as linear search. Now, in order to find an appropriate step size, we must first try to figure out the relation between the size of the list (n) and the size of the step (k). Usually, k is calculated as sqrt(n).

Further Optimization of Jump Search
Till now, we were dealing with lists having a small number of elements. But in real-world applications, lists can be very large. In such large lists, searching for the value from the beginning of the list may not be a good idea. A better option is to start the search from the k-th element, as shown in the figure below.
Figure: Searching can start from somewhere in the middle of the list, rather than from the beginning, to optimize performance.

We can also improve the performance of the jump search algorithm by repeatedly applying jump search. For example, if the size of the list is 1,000,000 (n), the jump interval would then be sqrt(1000000) = 1000. Now, even the identified interval has 1000 elements and is again a large list. So, jump search can be applied again with a new, smaller step size. Thus, every time we have a desired interval with a large number of values, the jump search algorithm can be applied again, but with a smaller step. However, in this case, the complexity of the algorithm will no longer be O(sqrt(n)) but will approach a logarithmic value.

Complexity of Jump Search Algorithm
Jump search works by jumping through the array with a step size (optimally chosen to be sqrt(n)) to find the interval of the value. Once this interval is identified, the value is searched using the linear search technique. Therefore, the complexity of the jump search algorithm can be given as O(sqrt(n)).

Programming Example
Write a program to search an element in an array using jump search.

#include <stdio.h>
#include <conio.h>
#include <math.h>
#define MAX 20
int jump_search(int a[], int low, int high, int val, int n)
{
    int step, i;
    step = sqrt(n);
    for(i = step; i <= high; i += step)   /* jump ahead block by block */
    {
        if(val < a[i])
        {
            high = i - 1;
            break;
        }
        low = i;
    }
    for(i = low; i <= high; i++)          /* linear search within the block */
    {
        if(a[i] == val)
            return i;
    }
    return -1;
}
int main()
{
    int arr[MAX], i, n, val, pos;
    clrscr();
    printf("\n Enter the number of elements in the array : ");
    scanf("%d", &n);
    printf("\n Enter the elements : ");
    for(i = 0; i < n; i++)
        scanf("%d", &arr[i]);
    printf("\n Enter the value to be searched : ");
    scanf("%d", &val);
    pos = jump_search(arr, 0, n - 1, val, n);
    if(pos == -1)
        printf("\n %d is not found in the array", val);
    else
        printf("\n %d is found at position %d", val, pos);
    getche();
    return 0;
}
stored in files external sorting is applied when there is voluminous data that cannot be stored in the memory sorting on multiple keys many timeswhen performing real-world applicationsit is desired to sort arrays of records using multiple keys this situation usually occurs when single key is not sufficient to uniquely identify record for examplein big organization we may want to sort list of employees on the basis of their departments first and then according to their names in alphabetical order other examples of sorting on multiple keys can be telephone directories in which names are sorted by locationcategory (business or residential)and then in an alphabetical order |
24,654 | data structures using in librarythe information about books can be sorted alphabetically based on titles and then by authorsnames customersaddresses can be sorted based on the name of the city and then the street note data records can be sorted based on property such component or property is called sort key sort key can be defined using two or more sort keys in such casethe first key is called the primary sort keythe second is known as the secondary sort keyetc consider the data records given belowdepartment salary phone number janak name telecommunications raj computer science aditya electronics huma telecommunications divya computer science now if we take department as the primary key and name as the secondary keythen the sorted order of records can be given assalary phone number divya name computer science department raj computer science aditya electronics huma telecommunications janak telecommunications observe that the records are sorted based on department howeverwithin each department the records are sorted alphabetically based on the names of the employees practical considerations for internal sorting as mentioned aboverecords can be sorted either in ascending or descending order based on field often called as the sort key the list of records can be either stored in contiguous and randomly accessible data structure (arrayor may be stored in dispersed and only sequentially accessible data structure like linked list but irrespective of the underlying data structure used to store the recordsthe logic to sort the records will be same and only the implementation details will differ when analysing the performance of different sorting algorithmsthe practical considerations would be the followingnumber of sort key comparisons that will be performed number of times the records in the list will be moved best case performance worst case performance average case performance stability of the sorting algorithm where stability means that equivalent elements or 
records retain their relative positions even after sorting is done.

Bubble Sort
Bubble sort is a very simple method that sorts the array elements by repeatedly moving the largest element to the highest index position of the array segment (in the case of arranging elements
in ascending order). In bubble sorting, consecutive adjacent pairs of elements in the array are compared with each other. If the element at the lower index is greater than the element at the higher index, the two elements are interchanged so that the smaller element is placed before the bigger one. This process continues till the list of unsorted elements is exhausted. This procedure of sorting is called bubble sorting because elements 'bubble' to the top of the list. Note that at the end of the first pass, the largest element in the list will be placed at its proper position (i.e., at the end of the list).

Note: If the elements are to be sorted in descending order, then in the first pass the smallest element is moved to the highest index of the array.

Technique
The basic methodology of the working of bubble sort on an array A of N elements is given as follows:
(a) In Pass 1, A[0] and A[1] are compared, then A[1] is compared with A[2], A[2] is compared with A[3], and so on. Finally, A[N-2] is compared with A[N-1]. Pass 1 involves N-1 comparisons and places the biggest element at the highest index of the array.
(b) In Pass 2, A[0] and A[1] are compared, then A[1] is compared with A[2], A[2] is compared with A[3], and so on. Finally, A[N-3] is compared with A[N-2]. Pass 2 involves N-2 comparisons and places the second biggest element at the second highest index of the array.
(c) In Pass 3, A[0] and A[1] are compared, then A[1] is compared with A[2], A[2] is compared with A[3], and so on. Finally, A[N-4] is compared with A[N-3]. Pass 3 involves N-3 comparisons and places the third biggest element at the third highest index of the array.
(d) In Pass N-1, A[0] and A[1] are compared so that A[0] <= A[1]. After this step, all the elements of the array are arranged in ascending order.

Example
To discuss bubble sort in detail, consider an array A[] of eight unsorted elements. In Pass 1, each adjacent pair is compared in turn and swapped wherever the element at the lower index is the greater; at the end of the first pass, the largest element is placed at the highest index of the array, while all the other elements are still unsorted. Pass 2 repeats the comparisons over the remaining unsorted elements, placing the second largest element at the second highest index. The same procedure continues: after every pass, the next largest of the remaining elements settles at the next highest unsorted index, until after Pass 7 the entire list is sorted.

BUBBLE_SORT(A, N)
Step 1: Repeat Step 2 For I = 0 to N-1
Step 2:   Repeat For J = 0 to N-I-1
Step 3:     IF A[J] > A[J+1]
              SWAP A[J] and A[J+1]
            [END OF IF]
          [END OF INNER LOOP]
        [END OF OUTER LOOP]
Step 4: EXIT

Figure: Algorithm for bubble sort

The figure above shows the algorithm for bubble sort. In this algorithm, the outer loop is for the total number of passes, which is N-1. The inner loop is executed for every pass; however, the frequency of the inner loop decreases with every pass because after every pass, one element will be in its correct
position. Therefore, for every pass, the inner loop will be executed N-K times, where N is the number of elements in the array and K is the count of the pass.
Complexity of Bubble Sort
The complexity of any sorting algorithm depends upon the number of comparisons. In bubble sort, we have seen that there are N-1 passes in total. In the first pass, N-1 comparisons are made to place the highest element in its correct position. Then, in Pass 2, there are N-2 comparisons and the second highest element is placed in its position. Therefore, to compute the complexity of bubble sort, we need to calculate the total number of comparisons. It can be given as:

f(n) = (n-1) + (n-2) + (n-3) + ... + 3 + 2 + 1
f(n) = n(n-1)/2
f(n) = O(n^2)

Therefore, the complexity of the bubble sort algorithm is O(n^2). It means the time required to execute bubble sort is proportional to n^2, where n is the total number of elements in the array.

Programming Example
Write a program to enter n numbers in an array. Redisplay the array with elements being sorted in ascending order.

#include <stdio.h>

int main()
{
    int i, n, temp, j, arr[10];
    printf("\n Enter the number of elements in the array : ");
    scanf("%d", &n);
    printf("\n Enter the elements: ");
    for(i=0; i<n; i++)
        scanf("%d", &arr[i]);
    for(i=0; i<n; i++)
    {
        for(j=0; j<n-i-1; j++)
        {
            if(arr[j] > arr[j+1])
            {
                temp = arr[j];
                arr[j] = arr[j+1];
                arr[j+1] = temp;
            }
        }
    }
    printf("\n The array sorted in ascending order is :\n");
    for(i=0; i<n; i++)
        printf("%d\t", arr[i]);
    return 0;
}

Output
Enter the number of elements in the array : ...
Enter the elements: ...
The array sorted in ascending order is : ...

Bubble Sort Optimization
Consider the case when the array is already sorted. In this situation no swapping is done, but we still have to continue with all N-1 passes. We may even have an array that gets sorted in the first few
passes, but we still have to continue with the rest of the passes. So once we have detected that the array is sorted, the algorithm must not be executed further. This is the optimization over the original bubble sort algorithm. In order to stop the execution of further passes after the array is sorted, we can have a variable flag which is set to 1 before each pass and is made 0 when a swapping is performed. The code for the optimized bubble sort can be given as:

void bubble_sort(int *arr, int n)
{
    int i, j, temp, flag;
    for(i=0; i<n; i++)
    {
        flag = 1;
        for(j=0; j<n-1-i; j++)
        {
            if(arr[j] > arr[j+1])
            {
                flag = 0;
                temp = arr[j+1];
                arr[j+1] = arr[j];
                arr[j] = temp;
            }
        }
        if(flag == 1) /* no swaps were done, so the array is sorted */
            return;
    }
}

Complexity of Optimized Bubble Sort Algorithm
In the best case, when the array is already sorted, the optimized bubble sort will take O(n) time. In the worst case, when all the passes are performed, the algorithm will perform slightly slower than the original algorithm because of the extra flag operations. In the average case also, the performance will see an improvement. Compare it with the complexity of the original bubble sort algorithm, which takes O(n^2) in all the cases.

Insertion Sort
Insertion sort is a very simple sorting algorithm in which the sorted array (or list) is built one element at a time. We are all familiar with this technique of sorting, as we usually use it for ordering a deck of cards while playing bridge. The main idea behind insertion sort is that it inserts each item into its proper place in the final list. To save memory, most implementations of the insertion sort algorithm work by moving the current data element past the already sorted values and repeatedly interchanging it with the preceding value until it is in its correct place. Insertion sort is less efficient as compared to other more advanced algorithms such as quick sort, heap sort, and merge sort.

Technique
Insertion sort works as follows: the array of values to be sorted is divided into two sets, one that stores sorted values and another that contains unsorted values. The sorting algorithm will proceed until there are no elements left in the unsorted
set. Suppose there are N elements in the array. Initially, the element with index 0 (assuming LB = 0) is in the sorted set; the rest of the elements are in the unsorted set. The first element of the unsorted partition has array index 1 (if LB = 0). During each iteration of the algorithm, the first element in the unsorted set is picked up and inserted into the correct position in the sorted set.
Example
Consider an array of N integers. Initially, A[0] is the only element in the sorted set. In Pass 1, A[1] will be placed either before or after A[0], so that the sub-array is sorted. In Pass 2, A[2] will be placed either before A[0], in between A[0] and A[1], or after A[1]. In Pass 3, A[3] will be placed in its proper place, and so on, until in Pass N-1, A[N-1] is placed in its proper place to keep the array sorted.

To insert an element A[K] in a sorted list A[0], A[1], ..., A[K-1], we need to compare A[K] with A[K-1], then with A[K-2], A[K-3], and so on, until we meet an element A[J] such that A[J] <= A[K]. In order to insert A[K] in its correct position, we need to move the elements A[K-1], A[K-2], ..., A[J+1] by one position, and then A[K] is inserted at the (J+1)th location. The algorithm for insertion sort is given in the figure below.

INSERTION-SORT (ARR, N)
Step 1: Repeat Steps 2 to 5 for K = 1 to N-1
Step 2:   SET TEMP = ARR[K]
Step 3:   SET J = K - 1
Step 4:   Repeat while J >= 0 and TEMP <= ARR[J]
            SET ARR[J+1] = ARR[J]
            SET J = J - 1
          [END OF INNER LOOP]
Step 5:   SET ARR[J+1] = TEMP
          [END OF LOOP]
Step 6: EXIT

Figure: Algorithm for insertion sort

In the algorithm, Step 1 executes a loop which is repeated for each element in the array. In Step 2, we store the value of the Kth element in TEMP. In Step 3, we set J to the index K-1. In Step 4, a loop is executed that shifts elements one position ahead, creating space for the new element from the unsorted list to be stored in the list of sorted elements. Finally, in Step 5, the element is stored at the (J+1)th location.

Complexity of Insertion Sort
For insertion sort, the best case occurs when the array is already sorted. In this case, the algorithm has a linear running time (i.e., O(n)). This is because, during each iteration, the first element from the unsorted set is compared only with the last element of the sorted set of the array. Similarly, the worst case of the insertion sort algorithm occurs when the array is sorted in the reverse order. In the worst case, the first element of the unsorted set has to be compared with almost every element in the sorted set. Furthermore, every iteration of the inner loop will have to shift the elements of the sorted set of the array before inserting the next element. Therefore, in the worst case, insertion sort has a quadratic running time (i.e., O(n^2)). Even in the average case, the insertion sort algorithm will have to make at least n(n-1)/4 comparisons. Thus, the average case also has a quadratic running time.
Advantages of Insertion Sort
The advantages of this sorting algorithm are as follows:
- It is easy to implement and efficient to use on small sets of data.
- It can be efficiently implemented on data sets that are already substantially sorted.
- It performs better than algorithms like selection sort and bubble sort. The insertion sort algorithm is simpler than shell sort, with only a small trade-off in efficiency. It is over twice as fast as bubble sort and noticeably faster than selection sort.
- It requires less memory space (only O(1) of additional memory space).
- It is said to be online, as it can sort a list as and when it receives new elements.

Programming Example
Write a program to sort an array using the insertion sort algorithm.

#include <stdio.h>
#define SIZE 10

void insertion_sort(int arr[], int n);

int main()
{
    int arr[SIZE], i, n;
    printf("\n Enter the number of elements in the array : ");
    scanf("%d", &n);
    printf("\n Enter the elements of the array : ");
    for(i=0; i<n; i++)
        scanf("%d", &arr[i]);
    insertion_sort(arr, n);
    printf("\n The sorted array is : \n");
    for(i=0; i<n; i++)
        printf(" %d\t", arr[i]);
    return 0;
}

void insertion_sort(int arr[], int n)
{
    int i, j, temp;
    for(i=1; i<n; i++)
    {
        temp = arr[i];
        j = i-1;
        while((j >= 0) && (temp < arr[j]))
        {
            arr[j+1] = arr[j];
            j--;
        }
        arr[j+1] = temp;
    }
}

Output
Enter the number of elements in the array : ...
Enter the elements of the array : ...
The sorted array is : ...

Selection Sort
Selection sort is a sorting algorithm that has a quadratic running time complexity of O(n^2), thereby making it inefficient to be used on large lists. Although selection sort performs worse than the insertion sort algorithm, it is noted for its simplicity and also has performance advantages over
more complicated algorithms in certain situations. Selection sort is generally used for sorting files with very large objects (records) and small keys.

Technique
Consider an array ARR with N elements. Selection sort works as follows: first find the smallest value in the array and place it in the first position. Then, find the second smallest value in the array and place it in the second position. Repeat this procedure until the entire array is sorted. Therefore,
- In Pass 1, find the position POS of the smallest value in the array and then swap ARR[POS] and ARR[0]. Thus, ARR[0] is sorted.
- In Pass 2, find the position POS of the smallest value in the sub-array of N-1 elements. Swap ARR[POS] with ARR[1]. Now, ARR[0] and ARR[1] are sorted.
- In Pass N-1, find the position POS of the smaller of the elements ARR[N-2] and ARR[N-1]. Swap ARR[POS] and ARR[N-2] so that ARR[0], ARR[1], ..., ARR[N-1] is sorted.

Example
Consider an unsorted array of eight elements. In the first pass, the position of the smallest element is found and its value is swapped into ARR[0]; in the second pass, the smallest of the remaining seven values is swapped into ARR[1]; continuing in this way, after seven passes the array is completely sorted.

The algorithm for selection sort is shown in the figure below. In the algorithm, during the Kth pass, we need to find the position POS of the smallest element from ARR[K], ARR[K+1], ..., ARR[N-1]. To find the smallest element, we use a variable SMALL to hold the smallest value in the sub-array ranging from ARR[K] to ARR[N-1]. Then, swap ARR[K] with ARR[POS]. This procedure is repeated until all the elements in the array are sorted.

SMALLEST (ARR, K, N, POS)
Step 1: [INITIALIZE] SET SMALL = ARR[K]
Step 2: [INITIALIZE] SET POS = K
Step 3: Repeat for J = K+1 to N-1
          IF SMALL > ARR[J]
            SET SMALL = ARR[J]
            SET POS = J
          [END OF IF]
        [END OF LOOP]
Step 4: RETURN POS

SELECTION SORT (ARR, N)
Step 1: Repeat Steps 2 and 3 for K = 0 to N-2
Step 2:   CALL SMALLEST(ARR, K, N, POS)
Step 3:   SWAP ARR[K] with ARR[POS]
        [END OF LOOP]
Step 4: EXIT

Figure: Algorithm for selection sort

Complexity of Selection Sort
Selection sort is a sorting algorithm that is independent of the original order of elements in the array. In Pass 1, selecting the element with the smallest value calls for scanning all the elements;
thus, N-1 comparisons are required in the first pass. Then, the smallest value is swapped with the element in the first position. In Pass 2, selecting the second smallest value requires scanning the remaining elements, and so on. Therefore,

(N-1) + (N-2) + ... + 2 + 1 = N(N-1)/2 = O(N^2) comparisons

Advantages of Selection Sort
- It is simple and easy to implement.
- It can be used for small data sets.
- It is more efficient than bubble sort. However, in the case of large data sets, the efficiency of selection sort drops as compared to insertion sort.

Programming Example
Write a program to sort an array using the selection sort algorithm.

#include <stdio.h>

int smallest(int arr[], int k, int n);
void selection_sort(int arr[], int n);

int main(int argc, char *argv[])
{
    int arr[10], i, n;
    printf("\n Enter the number of elements in the array : ");
    scanf("%d", &n);
    printf("\n Enter the elements of the array : ");
    for(i=0; i<n; i++)
        scanf("%d", &arr[i]);
    selection_sort(arr, n);
    printf("\n The sorted array is : \n");
    for(i=0; i<n; i++)
        printf(" %d\t", arr[i]);
    return 0;
}

int smallest(int arr[], int k, int n)
{
    int pos = k, small = arr[k], i;
    for(i=k+1; i<n; i++)
    {
        if(arr[i] < small)
        {
            small = arr[i];
            pos = i;
        }
    }
    return pos;
}

void selection_sort(int arr[], int n)
{
    int k, pos, temp;
    for(k=0; k<n-1; k++)
    {
        pos = smallest(arr, k, n);
        temp = arr[k];
        arr[k] = arr[pos];
        arr[pos] = temp;
    }
}

Merge Sort
Merge sort is a sorting algorithm that uses the divide, conquer, and combine algorithmic paradigm. Divide means partitioning the n-element array to be sorted into two sub-arrays of n/2 elements. If A is an array containing zero or one element, then it is already sorted. However, if there are more elements in the array, divide A into two sub-arrays, A1 and A2, each containing about half of the elements of A. Conquer means sorting the two sub-arrays recursively using merge sort. Combine means merging the two sorted sub-arrays of size n/2 to produce the sorted array of n elements.

The merge sort algorithm focuses on two main concepts to improve its performance (running time):
1. A smaller list takes fewer steps and thus less time to sort than a large list.
2. As the number of steps is relatively less, less time is needed to create a sorted list from two sorted lists rather than creating it using two unsorted lists.

The basic steps of the merge sort algorithm are as follows:
- If the array is of length 0 or 1, then it is already sorted.
- Otherwise, divide the unsorted array into two sub-arrays of about half the size.
- Use the merge sort algorithm recursively to sort each sub-array.
- Merge the two sub-arrays to form a single sorted list.

Example
Given an array of eight unsorted elements, the divide-and-conquer phase recursively halves the array into sub-arrays until each sub-array contains a single element; the combine phase then repeatedly merges pairs of sorted sub-arrays, forming sorted lists of two, four, and finally all eight elements.

The merge sort algorithm (see the figure below) uses a function merge which combines the sub-arrays to form a sorted array. While the merge sort algorithm recursively divides the list into smaller lists, the merge algorithm conquers the list to sort the elements in individual lists. Finally, the smaller lists are merged to form one list. To understand the merge algorithm, consider how two sorted lists are merged to form one list. For ease of understanding, take two sorted sub-lists, each containing four elements; the same concept can be utilized to merge four sub-lists containing two elements, or eight sub-lists having one element each. Three index variables are used: I points into the left sub-array (from BEG to MID), J points into the right sub-array (from MID+1 to END), and INDEX points into the temporary array TEMP.
Compare ARR[I] and ARR[J]; the smaller of the two is placed in TEMP at the location specified by INDEX, and subsequently the value of I or J is incremented. When I becomes greater than MID, the remaining elements of the right sub-array are copied into TEMP.

MERGE (ARR, BEG, MID, END)
Step 1: [INITIALIZE] SET I = BEG, J = MID+1, INDEX = BEG
Step 2: Repeat while (I <= MID) AND (J <= END)
          IF ARR[I] < ARR[J]
            SET TEMP[INDEX] = ARR[I]
            SET I = I + 1
          ELSE
            SET TEMP[INDEX] = ARR[J]
            SET J = J + 1
          [END OF IF]
          SET INDEX = INDEX + 1
        [END OF LOOP]
Step 3: [Copy the remaining elements of the right sub-array, if any]
        IF I > MID
          Repeat while J <= END
            SET TEMP[INDEX] = ARR[J]
            SET INDEX = INDEX + 1
            SET J = J + 1
          [END OF LOOP]
        [Copy the remaining elements of the left sub-array, if any]
        ELSE
          Repeat while I <= MID
            SET TEMP[INDEX] = ARR[I]
            SET INDEX = INDEX + 1
            SET I = I + 1
          [END OF LOOP]
        [END OF IF]
Step 4: [Copy the contents of TEMP back to ARR] SET K = BEG
Step 5: Repeat while K < INDEX
          SET ARR[K] = TEMP[K]
          SET K = K + 1
        [END OF LOOP]
Step 6: END

Figure: Algorithm for merge
MERGE_SORT (ARR, BEG, END)
Step 1: IF BEG < END
          SET MID = (BEG + END)/2
          CALL MERGE_SORT (ARR, BEG, MID)
          CALL MERGE_SORT (ARR, MID+1, END)
          CALL MERGE (ARR, BEG, MID, END)
        [END OF IF]
Step 2: END

Figure: Algorithm for merge sort

Complexity of Merge Sort
The running time of merge sort in the average case and the worst case can be given as O(n log n). Although merge sort has an optimal time complexity, it needs an additional space of O(n) for the temporary array TEMP.

Programming Example
Write a program to implement merge sort.

#include <stdio.h>
#define SIZE 10

void merge(int a[], int, int, int);
void merge_sort(int a[], int, int);

int main()
{
    int arr[SIZE], i, n;
    printf("\n Enter the number of elements in the array : ");
    scanf("%d", &n);
    printf("\n Enter the elements of the array : ");
    for(i=0; i<n; i++)
        scanf("%d", &arr[i]);
    merge_sort(arr, 0, n-1);
    printf("\n The sorted array is : \n");
    for(i=0; i<n; i++)
        printf(" %d\t", arr[i]);
    return 0;
}

void merge(int arr[], int beg, int mid, int end)
{
    int i=beg, j=mid+1, index=beg, temp[SIZE], k;
    while((i<=mid) && (j<=end))
    {
        if(arr[i] < arr[j])
        {
            temp[index] = arr[i];
            i++;
        }
        else
        {
            temp[index] = arr[j];
            j++;
        }
        index++;
    }
    if(i > mid)
    {
        while(j <= end)
        {
            temp[index] = arr[j];
            j++;
            index++;
        }
    }
    else
    {
        while(i <= mid)
        {
            temp[index] = arr[i];
            i++;
            index++;
        }
    }
    for(k=beg; k<index; k++)
        arr[k] = temp[k];
}

void merge_sort(int arr[], int beg, int end)
{
    int mid;
    if(beg < end)
    {
        mid = (beg+end)/2;
        merge_sort(arr, beg, mid);
        merge_sort(arr, mid+1, end);
        merge(arr, beg, mid, end);
    }
}

Quick Sort
Quick sort is a widely used sorting algorithm developed by C.A.R. Hoare that makes O(n log n) comparisons in the average case to sort an array of n elements. However, in the worst case, it has a quadratic running time given as O(n^2). Basically, the quick sort algorithm is faster than other O(n log n) algorithms, because its efficient implementation can minimize the probability of requiring quadratic time. Quick sort is also known as partition exchange sort.

Like merge sort, this algorithm works by using a divide-and-conquer strategy to divide a single unsorted array into two smaller sub-arrays. The quick sort algorithm works as follows:
1. Select an element pivot from the array elements.
2. Rearrange the elements in the array in such a way that all elements that are less than the pivot appear before the pivot and all elements greater than the pivot element come after it (equal values can go either way). After such a partitioning, the pivot is placed in its final position. This is called the partition operation.
3. Recursively sort the two sub-arrays thus obtained: one with a sub-list of values smaller than that of the pivot element and the other having higher value elements.

Like merge sort, the base case of the recursion occurs when the array has zero or one element, because in that case the array is already sorted. After each iteration, one element (the pivot) is always in its final position. Hence, with every iteration, there is one less element to be sorted in the array. Thus, the main task is to find the pivot element, which will partition the array into two halves. To understand how we find the pivot element, follow the steps given below. (We take the first element in the array as pivot.)
Technique
Quick sort works as follows:
1. Set the index of the first element in the array to the LOC and LEFT variables. Also, set the index of the last element of the array to the RIGHT variable. That is, LOC = 0, LEFT = 0, and RIGHT = N-1 (where N is the number of elements in the array).
2. Start from the element pointed to by RIGHT and scan the array from right to left, comparing each element on the way with the element pointed to by the variable LOC. That is, A[LOC] should be less than A[RIGHT].
   (a) If that is the case, then simply continue comparing until RIGHT becomes equal to LOC. Once RIGHT = LOC, it means the pivot has been placed in its correct position.
   (b) However, if at any point we have A[LOC] > A[RIGHT], then interchange the two values and jump to Step 3.
   (c) Set LOC = RIGHT.
3. Start from the element pointed to by LEFT and scan the array from left to right, comparing each element on the way with the element pointed to by LOC. That is, A[LOC] should be greater than A[LEFT].
   (a) If that is the case, then simply continue comparing until LEFT becomes equal to LOC. Once LEFT = LOC, it means the pivot has been placed in its correct position.
   (b) However, if at any point we have A[LOC] < A[LEFT], then interchange the two values and jump to Step 2.
   (c) Set LOC = LEFT.

Example
Consider an unsorted array of nine elements, with the first element chosen as the pivot; set LOC = 0, LEFT = 0, and RIGHT = 8. Scanning from right to left, whenever A[LOC] > A[RIGHT] the two values are interchanged and LOC is set to RIGHT; scanning then resumes from left to right, and whenever A[LOC] < A[LEFT] the values are interchanged and LOC is set to LEFT. The two scans alternate in this fashion, moving RIGHT down and LEFT up, until LEFT = LOC. At that point the procedure terminates, as the pivot element (the first element of the array) has been placed in its correct position: all the elements smaller than the pivot appear before it and those greater than it appear after it. The left sub-array and the right sub-array are then sorted in the same manner.

The quick sort algorithm (see the figure below) makes use of a function PARTITION to divide the array into two sub-arrays.

PARTITION (ARR, BEG, END, LOC)
Step 1: [INITIALIZE] SET LEFT = BEG, RIGHT = END, LOC = BEG, FLAG = 0
Step 2: Repeat Steps 3 to 6 while FLAG = 0
Step 3:   Repeat while ARR[LOC] <= ARR[RIGHT] AND LOC != RIGHT
            SET RIGHT = RIGHT - 1
          [END OF LOOP]
Step 4:   IF LOC = RIGHT
            SET FLAG = 1
          ELSE IF ARR[LOC] > ARR[RIGHT]
            SWAP ARR[LOC] with ARR[RIGHT]
            SET LOC = RIGHT
          [END OF IF]
Step 5:   IF FLAG = 0
            Repeat while ARR[LOC] >= ARR[LEFT] AND LOC != LEFT
              SET LEFT = LEFT + 1
            [END OF LOOP]
Step 6:     IF LOC = LEFT
              SET FLAG = 1
            ELSE IF ARR[LOC] < ARR[LEFT]
              SWAP ARR[LOC] with ARR[LEFT]
              SET LOC = LEFT
            [END OF IF]
          [END OF IF]
Step 7: [END OF LOOP]
Step 8: END

QUICK_SORT (ARR, BEG, END)
Step 1: IF (BEG < END)
          CALL PARTITION (ARR, BEG, END, LOC)
          CALL QUICKSORT(ARR, BEG, LOC-1)
          CALL QUICKSORT(ARR, LOC+1, END)
        [END OF IF]
Step 2: END

Figure: Algorithm for quick sort
Complexity of Quick Sort
In the average case, the running time of quick sort can be given as O(n log n). The partitioning of the array, which simply loops over the elements of the array once, uses O(n) time. In the best case, every time we partition the array, we divide the list into two nearly equal pieces. That is, the recursive call processes the sub-array of half the size. At the most, only log n nested calls can be made before we reach a sub-array of size 1. It means the depth of the call tree is O(log n), and because at each level only O(n) work is done, the resultant time is given as O(n log n).

Practically, the efficiency of quick sort depends on the element which is chosen as the pivot. Its worst-case efficiency is given as O(n^2). The worst case occurs when the array is already sorted (either in ascending or descending order) and the left-most element is chosen as the pivot. However, many implementations randomly choose the pivot element. The randomized version of the quick sort algorithm always has an algorithmic complexity of O(n log n).

Pros and Cons of Quick Sort
It is faster than other algorithms such as bubble sort, selection sort, and insertion sort. Quick sort can be used to sort arrays of small size, medium size, or large size. On the flip side, quick sort is complex and massively recursive.

Programming Example
Write a program to implement the quick sort algorithm.

#include <stdio.h>
#define SIZE 10

int partition(int a[], int beg, int end);
void quick_sort(int a[], int beg, int end);

int main()
{
    int arr[SIZE], i, n;
    printf("\n Enter the number of elements in the array : ");
    scanf("%d", &n);
    printf("\n Enter the elements of the array : ");
    for(i=0; i<n; i++)
        scanf("%d", &arr[i]);
    quick_sort(arr, 0, n-1);
    printf("\n The sorted array is : \n");
    for(i=0; i<n; i++)
        printf(" %d\t", arr[i]);
    return 0;
}

int partition(int a[], int beg, int end)
{
    int left, right, temp, loc, flag;
    loc = left = beg;
    right = end;
    flag = 0;
    while(flag != 1)
    {
        while((a[loc] <= a[right]) && (loc != right))
            right--;
        if(loc == right)
            flag = 1;
        else if(a[loc] > a[right])
        {
            temp = a[loc];
            a[loc] = a[right];
            a[right] = temp;
            loc = right;
        }
        if(flag != 1)
        {
            while((a[loc] >= a[left]) && (loc != left))
                left++;
            if(loc == left)
                flag = 1;
            else if(a[loc] < a[left])
            {
                temp = a[loc];
                a[loc] = a[left];
                a[left] = temp;
                loc = left;
            }
        }
    }
    return loc;
}

void quick_sort(int a[], int beg, int end)
{
    int loc;
    if(beg < end)
    {
        loc = partition(a, beg, end);
        quick_sort(a, beg, loc-1);
        quick_sort(a, loc+1, end);
    }
}

Radix Sort
Radix sort is a linear sorting algorithm for integers and uses the concept of sorting names in alphabetical order. When we have a list of sorted names, the radix is 26 (or 26 buckets), because there are 26 letters in the English alphabet. So radix sort is also known as bucket sort. Observe that words are first sorted according to the first letter of the name. That is, 26 classes are used to arrange the names, where the first class stores the names that begin with A, the second class contains the names with B, and so on. During the second pass, names are grouped according to the second letter. After the second pass, names are sorted on the first two letters. This process is continued till the nth pass, where n is the length of the name with the maximum number of letters. After every pass, all the names are collected in order of buckets: first pick up the names in the first bucket (the names beginning with A), then collect the names from the second bucket, and so on. When radix sort is used on integers, sorting is done on each of the digits in the number. The sorting procedure proceeds by sorting from the least significant to the most significant digit. While sorting the numbers, we have ten buckets, one for each digit (0, 1, 2, ..., 9), and the number of passes will depend on the length of the number having the maximum number of digits.
Algorithm for RadixSort (ARR, N)
Step 1: Find the largest number in ARR as LARGE
Step 2: [INITIALIZE] SET NOP = Number of digits in LARGE
Step 3: SET PASS = 0
Step 4: Repeat Steps 5 to 10 while PASS <= NOP-1
Step 5:   SET I = 0 and initialize the buckets
Step 6:   Repeat Steps 7 to 9 while I < N
Step 7:     SET DIGIT = digit at the PASSth place in ARR[I]
Step 8:     Add ARR[I] to the bucket numbered DIGIT
Step 9:     Increment the bucket count for the bucket numbered DIGIT
          [END OF LOOP]
Step 10:  Collect the numbers bucket by bucket
        [END OF LOOP]
Step 11: END

Figure: Algorithm for radix sort

Example
Consider a list of nine three-digit numbers to be sorted using radix sort. In the first pass, the numbers are distributed into the ten buckets according to the digit at the ones place; after this pass, the numbers are collected bucket by bucket, and the new list thus formed is used as the input for the next pass. In the second pass, the numbers are distributed according to the digit at the tens place and collected again. In the third pass, the numbers are sorted according to the digit at the hundreds place; the numbers are collected bucket by bucket, and the new list thus formed after the third pass is the final sorted result.

Complexity of Radix Sort
To calculate the complexity of the radix sort algorithm, assume that there are n numbers that have to be sorted and k is the number of digits in the largest number. In this case, the radix sort algorithm is called a total of k times. The inner loop is executed n times. Hence, the entire radix sort algorithm takes O(kn) time to execute. When radix sort is applied on a data set of finite size (a very small set of numbers), then the algorithm runs in O(n) asymptotic time.

Pros and Cons of Radix Sort
Radix sort is a very simple algorithm. When programmed properly, radix sort is one of the fastest sorting algorithms for numbers or strings of letters. But there are certain trade-offs for radix sort that can make it less preferable as compared to other sorting algorithms. Radix sort takes more space than other sorting algorithms: besides the array of numbers, we need 10 buckets to sort numbers, 26 buckets to sort strings containing only letters, and even more buckets to sort strings containing alphanumeric characters. Another drawback of radix sort is that the algorithm is dependent on digits or letters. This feature compromises the flexibility to sort input of any data type. For every different data type, the algorithm has to be rewritten; even if the sorting order changes, the algorithm has to be rewritten. Thus, radix sort takes more time to write, and writing a general purpose radix sort algorithm that can handle all kinds of data is not a trivial task. Radix sort is a good choice for many programs that need a fast sort, but there are faster sorting algorithms available. This is the main reason why radix sort is not as widely used as other sorting algorithms.

Programming Example
Write a program to implement the radix sort
algorithm.

#include <stdio.h>
#define SIZE 10

int largest(int arr[], int n);
void radix_sort(int arr[], int n);

int main()
{
    int arr[SIZE], i, n;
    printf("\n Enter the number of elements in the array : ");
    scanf("%d", &n);
    printf("\n Enter the elements of the array : ");
    for(i=0; i<n; i++)
        scanf("%d", &arr[i]);
    radix_sort(arr, n);
    printf("\n The sorted array is : \n");
    for(i=0; i<n; i++)
        printf(" %d\t", arr[i]);
    return 0;
}

int largest(int arr[], int n)
{
    int large = arr[0], i;
    for(i=1; i<n; i++)
    {
        if(arr[i] > large)
            large = arr[i];
    }
    return large;
}

void radix_sort(int arr[], int n)
{
    int bucket[SIZE][SIZE], bucket_count[SIZE];
    int i, j, k, remainder, nop=0, divisor=1, large, pass;
    large = largest(arr, n);
    while(large > 0)
    {
        nop++;
        large /= SIZE;
    }
    for(pass=0; pass<nop; pass++)
    {
        /* initialize the buckets */
        for(i=0; i<SIZE; i++)
            bucket_count[i] = 0;
        /* distribute the numbers according to the digit at passth place */
        for(i=0; i<n; i++)
        {
            remainder = (arr[i]/divisor) % SIZE;
            bucket[remainder][bucket_count[remainder]] = arr[i];
            bucket_count[remainder] += 1;
        }
        /* collect the numbers after the pass */
        i = 0;
        for(k=0; k<SIZE; k++)
        {
            for(j=0; j<bucket_count[k]; j++)
                arr[i++] = bucket[k][j];
        }
        divisor *= SIZE;
    }
}
Heap Sort
We have discussed binary heaps in an earlier chapter. We already know how to build a heap from an array, how to insert a new element in an already existing heap, and how to delete an element from a heap. Using these basic concepts, we will now discuss the application of heaps to write an efficient heap sort algorithm (also known as tournament sort) that has a running time complexity of O(n log n).

Given an array ARR with N elements, the heap sort algorithm can be used to sort ARR in two phases:
- In Phase 1, build a heap H using the elements of ARR.
- In Phase 2, repeatedly delete the root element of the heap formed in Phase 1.

In a max heap, we know that the largest value in H is always present at the root node. So in Phase 2, when the root element is deleted repeatedly, we are actually collecting the elements of ARR in decreasing order. The algorithm of heap sort is given below.

Step 1: [Build heap H]
        Repeat for I = 0 to N-1
            CALL Insert_Heap(ARR, N, ARR[I])
        [END OF LOOP]
Step 2: [Repeatedly delete the root element]
        Repeat while N > 0
            CALL Delete_Heap(ARR, N, VAL)
            SET N = N - 1
        [END OF LOOP]
Step 3: END

Complexity of Heap Sort
Heap sort uses two heap operations: insertion and root deletion. Each element extracted from the root is placed in the last empty location of the array.

In Phase 1, when we build heap H, the number of comparisons needed to find the right location of a new element in H cannot exceed the depth of H. Since H is a complete tree, its depth cannot exceed log m, where m is the number of elements in heap H. Thus, the total number of comparisons g(n) to insert n elements of ARR in H is bounded as
        g(n) <= n log n
Hence, the running time of the first phase of the heap sort algorithm is O(n log n).

In Phase 2, we have H, which is a complete tree with m elements having left and right sub-trees as heaps. Assuming L to be the root of the tree, reheaping the tree needs only a constant number of comparisons to move L one step down the tree. Since the depth of H cannot exceed O(log m), reheaping the tree requires a number of comparisons proportional to log m to find the right location of L in H. Since n elements will be deleted from heap H, reheaping will be done n times. Therefore, the number of comparisons h(n) to delete n elements is bounded as
        h(n) <= c n log n
Hence, the running time of the second phase of the heap sort algorithm is also O(n log n).

Each phase requires time proportional to O(n log n). Therefore, the running time to sort an array of n elements in the worst case is proportional to O(n log n). We can conclude that heap sort is a simple and fast sorting algorithm (though not a stable one) that can be used to sort large sets of data efficiently.

Programming Example
Write a program to implement the heap sort algorithm.

#include <stdio.h>
#include <conio.h>
#define MAX 10
void RestoreHeapUp(int *, int);
void RestoreHeapDown(int *, int, int);
int main()
{
    int Heap[MAX], n, i, j;
    clrscr();
    printf("\n Enter the number of elements: ");
    scanf("%d", &n);
    printf("\n Enter the elements: ");
    for(i = 1; i <= n; i++)
    {
        scanf("%d", &Heap[i]);
        RestoreHeapUp(Heap, i);     /* heapify */
    }
    /* delete the root element and heapify the heap */
    j = n;
    for(i = 1; i <= j; i++)
    {
        int temp;
        temp = Heap[1];
        Heap[1] = Heap[n];
        Heap[n] = temp;
        n = n - 1;                  /* the element Heap[n] is supposed to be deleted */
        RestoreHeapDown(Heap, 1, n); /* heapify */
    }
    n = j;
    printf("\n The sorted elements are: ");
    for(i = 1; i <= n; i++)
        printf(" %d", Heap[i]);
    return 0;
}
void RestoreHeapUp(int *Heap, int index)
{
    int val = Heap[index];
    while((index > 1) && (Heap[index/2] < val))   /* check parent's value */
    {
        Heap[index] = Heap[index/2];
        index /= 2;
    }
    Heap[index] = val;
}
void RestoreHeapDown(int *Heap, int index, int n)
{
    int val = Heap[index];
    int j = index * 2;
    while(j <= n)
    {
        if((j < n) && (Heap[j] < Heap[j+1]))      /* check sibling's value */
            j++;
        if(Heap[j] < Heap[j/2])                   /* check parent's value */
            break;
        Heap[j/2] = Heap[j];
        j = j * 2;
    }
    Heap[j/2] = val;
}
Shell Sort
Shell sort, invented by Donald Shell in 1959, is a sorting algorithm that is a generalization of insertion sort. While discussing insertion sort, we observed two things: first, insertion sort works well when the input data is 'almost sorted'; second, insertion sort is quite inefficient to use as it moves values just one position at a time.

Shell sort is considered an improvement over insertion sort as it compares elements separated by a gap of several positions. This enables an element to take bigger steps towards its expected position. In shell sort, elements are sorted in multiple passes, and in each pass, data are taken with smaller and smaller gap sizes. However, the last step of shell sort is a plain insertion sort; but by the time we reach the last step, the elements are already 'almost sorted', and hence it provides good performance.

If we take a scenario in which the smallest element is stored at the other end of the array, then sorting such an array with either bubble sort or insertion sort will execute in O(n^2) time and take roughly n comparisons and exchanges to move this value all the way to its correct position. On the other hand, shell sort first moves small values using giant step sizes, so a small value will move a long way towards its final position with just a few comparisons and exchanges.

Technique
To visualize the way in which shell sort works, perform the following steps:
Step 1: Arrange the elements of the array in the form of a table and sort the columns (using insertion sort).
Step 2: Repeat Step 1, each time with a smaller number of longer columns, in such a way that at the end there is only one column of data to be sorted.
Note that we are only visualizing the elements being arranged in a table; the algorithm itself does its sorting in-place.

Example: Sort the elements of an array using shell sort. First, arrange the elements of the array in the form of a table and sort the columns. Then repeat this step with a smaller number of longer columns. Finally, arrange the elements of the array in a single column and sort the column; the result is the sorted array.

The algorithm to sort an array of elements using shell sort is given below. In the algorithm, we sort the elements of the array ARR in multiple passes. In each pass, we reduce the gap_size (visualize it as the number of columns) by a factor of half, as done in Step 4. In each iteration of the for loop in Step 5, we compare the values of the array and interchange them if a larger value precedes a smaller one.

Shell_Sort(Arr, n)
Step 1: SET FLAG = 1, GAP_SIZE = N
Step 2: Repeat Steps 3 to 6 while FLAG = 1 OR GAP_SIZE > 1
Step 3:     SET FLAG = 0
Step 4:     SET GAP_SIZE = (GAP_SIZE + 1) / 2
Step 5:     Repeat Step 6 for I = 0 to I < (N - GAP_SIZE)
Step 6:         IF Arr[I + GAP_SIZE] < Arr[I]
                    SWAP Arr[I + GAP_SIZE], Arr[I]
                    SET FLAG = 1
Step 7: END
Programming Example
Write a program to implement the shell sort algorithm.

#include <stdio.h>
void main()
{
    int arr[10];
    int i, n, flag = 1, gap_size, temp;
    printf("\n Enter the number of elements in the array: ");
    scanf("%d", &n);
    printf("\n Enter %d numbers: ", n);
    for(i = 0; i < n; i++)
        scanf("%d", &arr[i]);
    gap_size = n;
    while(flag == 1 || gap_size > 1)
    {
        flag = 0;
        gap_size = (gap_size + 1) / 2;
        for(i = 0; i < (n - gap_size); i++)
        {
            if(arr[i + gap_size] < arr[i])
            {
                temp = arr[i + gap_size];
                arr[i + gap_size] = arr[i];
                arr[i] = temp;
                flag = 1;
            }
        }
    }
    printf("\n The sorted array is: \n");
    for(i = 0; i < n; i++)
        printf(" %d\t", arr[i]);
}

Tree Sort
Tree sort is a sorting algorithm that sorts numbers by making use of the properties of a binary search tree (discussed in an earlier chapter). The algorithm first builds a binary search tree using the numbers to be sorted and then does an in-order traversal so that the numbers are retrieved in sorted order. We will not discuss this topic in detail here because we assume that the reader has already studied it in sufficient detail.

Complexity of Tree Sort Algorithm
Best case: Inserting a number in a binary search tree takes O(log n) time, so a complete binary search tree with n numbers is built in O(n log n) time. A binary tree is traversed in O(n) time. Total time required = O(n log n) + O(n) = O(n log n).
Worst case: The worst case occurs with an unbalanced binary search tree, i.e., when the numbers are already sorted. A binary search tree with n numbers is then built in O(n^2) time, and the tree is traversed in O(n) time. Total time required = O(n^2) + O(n) = O(n^2). The worst case can be improved by using a self-balancing binary search tree.
Programming Example
Write a program to implement the tree sort algorithm.

#include <stdio.h>
#include <conio.h>
#include <stdlib.h>
struct tree
{
    struct tree *left;
    int num;
    struct tree *right;
};
void insert(struct tree **, int);
void inorder(struct tree *);
void main()
{
    struct tree *t = NULL;
    int arr[10];
    int i;
    clrscr();
    printf("\n Enter 10 elements: ");
    for(i = 0; i < 10; i++)
        scanf("%d", &arr[i]);
    t = NULL;
    printf("\n The elements of the array are: \n");
    for(i = 0; i < 10; i++)
        printf(" %d\t", arr[i]);
    for(i = 0; i < 10; i++)
        insert(&t, arr[i]);
    printf("\n The sorted array is: \n");
    inorder(t);
    getche();
}
void insert(struct tree **tree_node, int num)
{
    if(*tree_node == NULL)
    {
        *tree_node = malloc(sizeof(struct tree));
        (*tree_node)->left = NULL;
        (*tree_node)->num = num;
        (*tree_node)->right = NULL;
    }
    else
    {
        if(num < (*tree_node)->num)
            insert(&((*tree_node)->left), num);
        else
            insert(&((*tree_node)->right), num);
    }
}
void inorder(struct tree *tree_node)
{
    if(tree_node != NULL)
    {
        inorder(tree_node->left);
        printf(" %d\t", tree_node->num);
        inorder(tree_node->right);
    }
}
Comparison of Sorting Algorithms
The table below compares the average-case and worst-case time complexities of the different sorting algorithms discussed so far.

Algorithm        Average case     Worst case
Bubble sort      O(n^2)           O(n^2)
Bucket sort      O(n.k)           O(n^2.k)
Selection sort   O(n^2)           O(n^2)
Insertion sort   O(n^2)           O(n^2)
Shell sort       --               O(n log^2 n)
Merge sort       O(n log n)       O(n log n)
Heap sort        O(n log n)       O(n log n)
Quick sort       O(n log n)       O(n^2)

External Sorting
External sorting is a sorting technique that can handle massive amounts of data. It is usually applied when the data being sorted does not fit into the main memory (RAM) and, therefore, a slower memory (usually a magnetic disk or even a magnetic tape) needs to be used. We will explain the concept of external sorting using the example discussed below.

Example: Let us consider that we need to sort 700 MB of data using only 100 MB of RAM. The steps for sorting are given below.
Step 1: Read 100 MB of the data into RAM and sort this data using any conventional sorting algorithm, like quick sort.
Step 2: Write the sorted data back to the magnetic disk.
Step 3: Repeat Steps 1 and 2 until all the data (in 100 MB chunks) is sorted. All these seven sorted chunks now need to be merged into one single output file.
Step 4: Read the first 10 MB of each of the sorted chunks and call them input buffers. So, now we have 70 MB of data in the RAM. Allocate the remaining RAM for the output buffer.
Step 5: Perform seven-way merging and store the result in the output buffer. If at any point of time the output buffer becomes full, write its contents to the final sorted file. However, if any of the input buffers gets empty, fill it with the next 10 MB of its associated 100 MB sorted chunk, or else mark the input buffer (sorted chunk) as exhausted if it has no data left. Make sure that an exhausted chunk is not used for further merging of data.

(Figure: External merge sorting — input buffers in RAM are filled from sorted chunks on disk, merged into an output buffer, and written back to disk.)

Generalized External Merge Sort Algorithm
From the example above, we can now present a generalized merge sort algorithm for external sorting. If the amount of data to be sorted exceeds the available memory by a factor of K, then K chunks (also known as run lists) of data are created. These chunks are sorted and then a K-way merge is performed. If the amount of RAM available is X, then there will be K input buffers and one output buffer.
In the above example, a single-pass merge was used. But if the ratio of the data to be sorted to the available RAM is particularly large, multi-pass sorting is used. We can first merge only the first half of the sorted chunks, then the other half, and finally merge the two resulting sorted chunks. The exact number of passes depends on the following factors:
- Size of the data to be sorted as compared with the available RAM
- Physical characteristics of the magnetic disk, such as transfer rate, seek time, etc.

Applications of External Sorting
External sorting is used to update a master file from a transaction file, for example, updating the employees file based on new hires, promotions, appraisals, and dismissals. It is also used in database applications for performing operations like projection and join. Projection means selecting a subset of fields, and join means joining two files on a common field to create a new file whose fields are the union of the fields of the two files. External sorting is also used to remove duplicate records.

Points to Remember
- Searching refers to finding the position of a value in a collection of values. Some of the popular searching techniques are linear search, binary search, interpolation search, and jump search.
- Linear search works by comparing the value to be searched with every element of the array, one by one in sequence, until a match is found.
- Binary search works efficiently with a sorted list. In this algorithm, the value to be searched is compared with the middle element of the array segment.
- In each step of interpolation search, the search space for the value to be found is calculated. The calculation is done based on the values at the bounds of the search space and the value to be searched.
- Jump search is used with sorted lists. We first check an element, and if it is less than the desired value, then a block of elements is skipped by jumping ahead, and the element following this block is checked. If the checked element is greater than the desired value, then we have a boundary and we are sure that the desired value lies between the previously checked element and the currently checked element.
- Internal sorting deals with sorting the data stored in the memory, whereas external sorting deals with sorting the data stored in files.
- In bubble sort, consecutive adjacent pairs of elements in the array are compared with each other.
- Insertion sort works by moving the current data element past the already sorted values and repeatedly interchanging it with the preceding value until it is in its correct place.
- Selection sort works by finding the smallest value and placing it in the first position. It then finds the second smallest value and places it in the second position. This procedure is repeated until the whole array is sorted.
- Merge sort is a sorting algorithm that uses the divide, conquer, and combine algorithmic paradigm. Divide means partitioning the n-element array to be sorted into two sub-arrays of n/2 elements each. Conquer means sorting the two sub-arrays recursively using merge sort. Combine means merging the two sorted sub-arrays of size n/2 each to produce a sorted array of n elements. The running time of merge sort in the average case and the worst case is O(n log n).
- Quick sort works by using a divide-and-conquer strategy. It selects a pivot element and rearranges the elements in such a way that all elements less than the pivot appear before it and all elements greater than the pivot appear after it.
- Radix sort is a linear sorting algorithm that uses the concept of sorting names in alphabetical order.
- Heap sort sorts an array in two phases. In the first phase, it builds a heap of the given array. In the second phase, the root element is deleted repeatedly and inserted into an array.
- Shell sort is considered an improvement over insertion sort, as it compares elements separated by a gap of several positions.
- Tree sort is a sorting algorithm that sorts numbers by making use of the properties of a binary search tree. The algorithm first builds a binary search tree using the numbers to be sorted and then does an in-order traversal so that the numbers are retrieved in sorted order.
Exercises

Review Questions
1. Which technique of searching an element in an array would you prefer to use, and in which situation?
2. Define sorting. What is the importance of sorting?
3. What are the different types of sorting techniques? Which sorting technique has the least worst case?
4. Explain the difference between bubble sort and quick sort. Which one is more efficient?
5. Sort the elements given below using (a) insertion sort, (b) selection sort, (c) bubble sort, (d) merge sort, (e) quick sort, (f) radix sort, and (g) shell sort.
6. Compare heap sort and quick sort.
7. Quick sort shows quadratic behaviour in certain situations. Justify.
8. If the following sequence of numbers is to be sorted using quick sort, show the iterations of the sorting process.
9. Sort the following sequence of numbers in descending order using heap sort.
10. A certain sorting technique was applied to the following data set. After two passes, the rearrangement of the data set is as given below. Identify the sorting algorithm that was applied.
11. A certain sorting technique was applied to the following data set. After two passes, the rearrangement of the data set is as given below. Identify the sorting algorithm that was applied.
12. A certain sorting technique was applied to the following data set. After two passes, the rearrangement of the data set is as given below. Identify the sorting algorithm that was applied.
13. Write a recursive function to perform selection sort.
14. Compare the running time complexity of different sorting algorithms.
15. Discuss the advantages of insertion sort.

Programming Exercises
1. Write a program to implement bubble sort. Given a list of numbers, how many swaps will be performed to sort these numbers using bubble sort?
2. Write a program to implement a sorting technique that works by repeatedly stepping through the list to be sorted.
3. Write a program to implement a sorting technique in which the sorted array is built one entry at a time.
4. Write a program to implement an in-place comparison sort.
5. Write a program to implement a sorting technique that works on the principle of the divide-and-conquer strategy.
6. Write a program to implement a partition-exchange sort.
7. Write a program to implement a sorting technique which sorts the numbers based on individual digits.
8. Write a program to sort an array of integers in descending order using the following sorting techniques: (a) insertion sort, (b) selection sort, (c) bubble sort, (d) merge sort, (e) quick sort, (f) radix sort, (g) shell sort.
9. Write a program to sort an array of floating point numbers in descending order using the sorting techniques listed in the previous exercise.
10. Write a program to sort an array of names using the bucket sort.
Multiple-choice Questions
1. The worst case complexity is ________ when compared with the average case complexity of the binary search algorithm. (a) Equal (b) Greater (c) Less (d) None of these
2. The complexity of the binary search algorithm is (a) O(n) (b) O(n^2) (c) O(n log n) (d) O(log n)
3. Which of the following cases occurs when, searching an array using linear search, the value to be searched is equal to the first element of the array? (a) Worst case (b) Average case (c) Best case (d) Amortized case
4. A card game player arranges his cards and picks them one by one. With which sorting technique can you compare this example? (a) Bubble sort (b) Selection sort (c) Merge sort (d) Insertion sort
5. Which of the following techniques deals with sorting the data stored in the computer's memory? (a) Insertion sort (b) Internal sort (c) External sort (d) Radix sort
6. In which sorting are consecutive adjacent pairs of elements in the array compared with each other? (a) Bubble sort (b) Selection sort (c) Merge sort (d) Radix sort
7. Which term means sorting the two sub-arrays recursively using merge sort? (a) Divide (b) Conquer (c) Combine (d) All of these
8. Which sorting algorithm sorts by moving the current data element past the already sorted values and repeatedly interchanging it with the preceding value until it is in its correct place? (a) Insertion sort (b) Internal sort (c) External sort (d) Radix sort
9. Which algorithm uses the divide, conquer, and combine algorithmic paradigm? (a) Selection sort (b) Insertion sort (c) Merge sort (d) Radix sort
10. Quick sort is faster than (a) Selection sort (b) Insertion sort (c) Bubble sort (d) All of these
11. Which sorting algorithm is also known as tournament sort? (a) Selection sort (b) Insertion sort (c) Bubble sort (d) Heap sort

True or False
1. Binary search is also called sequential search.
2. Linear search is performed on a sorted array.
3. For insertion sort, the best case occurs when the array is already sorted.
4. Selection sort has a linear running time complexity.
5. The running time of merge sort in the average case and the worst case is O(n log n).
6. The worst case running time complexity of quick sort is O(n log n).
7. Heap sort is an efficient and stable sorting algorithm.
8. External sorting deals with sorting the data stored in the computer's memory.
9. Insertion sort is less efficient than quick sort, heap sort, and merge sort.
10. The average case of insertion sort has quadratic running time.
11. The partitioning of the array in quick sort is done in O(n) time.

Fill in the Blanks
1. Performance of the linear search algorithm can be improved by using ________.
2. The complexity of the linear search algorithm is ________.
3. Sorting means ________.
4. ________ sort shows the best average-case behaviour.
5. ________ deals with sorting the data stored in files.
6. ________ is the running time complexity of the ________ algorithm.
7. In the worst case, insertion sort has ________ running time.
8. ________ sort uses the divide, conquer, and combine algorithmic paradigm.
9. In the average case, quick sort has a running time complexity of ________.
10. The execution time of bucket sort in the average case is ________.
11. The running time of merge sort in the average and the worst case is ________.
12. The efficiency of quick sort depends on ________.
Hashing and Collision

Learning Objective
In this chapter, we will discuss another data structure known as a hash table. We will see what a hash table is and why we prefer hash tables over simple arrays. We will also discuss hash functions, collisions, and the techniques to resolve collisions.

Introduction
In an earlier chapter, we discussed two search algorithms: linear search and binary search. Linear search has a running time proportional to O(n), while binary search takes time proportional to O(log n), where n is the number of elements in the array. Binary search and binary search trees are efficient algorithms to search for an element. But what if we want to perform the search operation in time proportional to O(1)? In other words, is there a way to search an array in constant time, irrespective of its size?

There are two solutions to this problem. Let us take an example to explain the first solution. In a small company of 100 employees, each employee is assigned an Emp_ID in the range 0-99. To store the records in an array, each employee's Emp_ID acts as an index into the array where the employee's record will be stored. In this case, we can directly access the record of any employee once we know his Emp_ID, because the array index is the same as the Emp_ID number. But practically, this implementation is hardly feasible.

(Figure: Records of employees — the key, i.e., the Emp_ID, is used directly as the array index of the corresponding record.)
Let us assume that the same company uses a five-digit Emp_ID as the primary key. In this case, key values will range from 00000 to 99999. If we want to use the same technique as above, we need an array of size 100,000, of which only 100 elements will be used.

(Figure: Records of employees with five-digit Emp_ID — an array of 100,000 slots of which only 100 hold records.)

It is impractical to waste so much storage space just to ensure that each employee's record is in a unique and predictable location. Whether we use a two-digit primary key (Emp_ID) or a five-digit key, there are just 100 employees in the company. Thus, we will be using only 100 locations in the array. Therefore, in order to keep the array size down to the size that we will actually be using (100 elements), another good option is to use just the last two digits of the key to identify each employee. For example, the employee with Emp_ID 79439 will be stored in the element of the array with index 39. Similarly, the employee with Emp_ID 12345 will have his record stored in the array at the 45th location.

In the second solution, the elements are not stored according to the value of the key. So in this case, we need a way to convert a five-digit key number into a two-digit array index. We need a function which will do the transformation. In this case, we will use the term hash table for the array and call the function that carries out the transformation a hash function.

Hash Tables
A hash table is a data structure in which keys are mapped to array positions by a hash function. In the example discussed here, we will use a hash function that extracts the last two digits of the key; therefore, we map the keys to array locations or array indices. A value stored in a hash table can be searched in O(1) time by using a hash function which generates an address from the key (by producing the index of the array where the value is stored).

A direct correspondence between the keys and the indices of the array is useful when the total universe of keys is small and when most of the keys are actually used from the whole set of keys. This is equivalent to our first example, where there are 100 keys for 100 employees. However, when the set K of keys that are actually used is smaller than the universe of keys U, a hash table consumes less storage space. The storage requirement for a hash table is O(k), where k is the number of keys actually used.

In a hash table, an element with key k is stored at index h(k), and not k. It means a hash function h is used to calculate the index at which the element with key k will be stored. This process of mapping the keys to appropriate locations (or indices) in a hash table is called hashing. When two or more keys map to the same memory location, a collision is said to occur. The main goal of using a hash function is to reduce the range of array indices that have to be handled: instead of one slot per key in the universe U, we just need slots for the keys actually used, thereby reducing the amount of storage space required.

Hash Functions
A hash function is a mathematical formula which, when applied to a key, produces an integer which can be used as an index for the key in the hash table. The main aim of a hash function is that elements should be relatively, randomly, and uniformly distributed. It produces a unique set of integers within some suitable range in order to reduce the number of collisions. In practice, there is no hash function that eliminates collisions completely. A good hash function can only minimize the number of collisions by spreading the elements uniformly throughout the array. In this section, we will discuss popular hash functions which help to minimize collisions. But before that, let us first look at the properties of a good hash function.

Properties of a Good Hash Function
Low cost: The cost of executing a hash function must be small, so that using the hashing technique becomes preferable over other approaches. For example, if a binary search algorithm can search an element in a sorted table of n items with log2 n key comparisons, then the hash function must cost less than performing log2 n key comparisons.
Determinism: A hash procedure must be deterministic. This means that the same hash value must be generated for a given input value. However, this criterion excludes hash functions that depend on external variable parameters (such as the time of day) and on the memory address of the object being hashed (because the address of the object may change during processing).
Uniformity: A good hash function must map the keys as evenly as possible over its output range. This means that the probability of generating every hash value in the output range should be roughly the same. The property of uniformity also minimizes the number of collisions.

Different Hash Functions
In this section, we will discuss hash functions which use numeric keys. However, there can be cases in real-world applications where we have alphanumeric keys rather than simple numeric keys. In such cases, the ASCII values of the characters can be used to transform the key into its equivalent numeric form. Once this transformation is done, any of the hash functions given below can be applied to generate the hash value.

Division Method
This is the most simple method of hashing an integer x. The method divides x by M and then uses the remainder obtained. In this case, the hash function can be given as
        h(x) = x mod M
The division method is quite good for just about any value of M, and since it requires only a single division operation, the method works very fast. However, extra care should be taken to select a suitable value for M. For example, suppose M is an even number; then h(x) is even if x is even and h(x) is odd if x is odd. If all possible keys are equi-probable, this is not a problem. But if even keys are more likely than odd keys, the division method will not spread the hashed values uniformly.

Generally, it is best to choose M to be a prime number, because making M a prime number increases the likelihood that the keys are mapped with uniformity in the output range of values. M should also not be too close to an exact power of 2. If we have
        h(x) = x mod 2^k
then the function will simply extract the lowest k bits of the binary representation of x. The division method is extremely simple to implement. The following code segment illustrates how to do this:

int const M = 97;   // a prime number
int h(int x)
{
    return (x % M);
}

A potential drawback of the division method is that consecutive keys map to consecutive hash values. On one hand, this is good, as it ensures that consecutive keys do not collide; but on the other hand, it also means that consecutive array locations will be occupied, which may lead to a degradation in performance.

Example: Calculate the hash values of keys 1234 and 5462.
Solution: Setting M = 97, the hash values can be calculated as
        h(1234) = 1234 % 97 = 70
        h(5462) = 5462 % 97 = 30

Multiplication Method
The steps involved in the multiplication method are as follows:
Step 1: Choose a constant A such that 0 < A < 1.
Step 2: Multiply the key k by A.
Step 3: Extract the fractional part of kA.
Step 4: Multiply the result of Step 3 by the size of the hash table (m).
Hence, the hash function can be given as
        h(k) = floor( m (kA mod 1) )
where (kA mod 1) gives the fractional part of kA and m is the total number of indices in the hash table.

The greatest advantage of this method is that it works practically with any value of A. Although the algorithm works better with some values than with others, the optimal choice depends on the characteristics of the data being hashed. Knuth has suggested that the best choice of A is
        A = (sqrt(5) - 1) / 2 = 0.6180339887...

Example: Given a hash table of size 1000, map the key 12345 to an appropriate location in the hash table.
Solution: We will use A = 0.618033 and m = 1000.
        h(12345) = floor( 1000 (12345 x 0.618033 mod 1) )
                 = floor( 1000 (7629.617385 mod 1) )
                 = floor( 1000 x 0.617385 )
                 = floor( 617.385 )
                 = 617

Mid-Square Method
The mid-square method is a good hash function which works in two steps:
Step 1: Square the value of the key, that is, find k^2.
Step 2: Extract the middle r digits of the result obtained in Step 1.
The algorithm works well because most or all digits of the key value contribute to the result. This is because all the digits in the original key value contribute to produce the middle digits of the squared value. Therefore, the result is not dominated by the distribution of the bottom digits or the top digits of the original key value. In the mid-square method, the same r digits must be chosen from all the keys. Therefore, the hash function can be given as
        h(k) = s
where s is obtained by selecting r digits from k^2.

Example: Calculate the hash values for keys 1234 and 5642 using the mid-square method. The hash table has 100 memory locations.
Solution: Note that the hash table has 100 memory locations whose indices vary from 0 to 99. This means that only two digits are needed to map a key to a location in the hash table, so r = 2.
        When k = 1234, k^2 = 1522756, so h(1234) = 27
        When k = 5642, k^2 = 31832164, so h(5642) = 21
Observe that the 3rd and 4th digits starting from the right are chosen.

Folding Method
The folding method works in the following two steps:
Step 1: Divide the key value into a number of parts. That is, divide k into parts k1, k2, ..., kn, where each part has the same number of digits except the last part, which may have fewer digits than the other parts.
24,689 | step add the individual parts that isobtain the sum of kn the hash value is produced by ignoring the last carryif any note that the number of digits in each part of the key will vary depending upon the size of the hash table for exampleif the hash table has size of then there are locations in the hash table to address these locationswe need at least three digitsthereforeeach part of the key must have three digits except the last part which may have lesser digits example given hash table of locationscalculate the hash value using folding method for keys and solution since there are memory locations to addresswe will break the key into parts where each part (except the lastwill contain two digits the hash values can be obtained as shown belowkey parts sum hash value and (ignore the last carry and and collisions as discussed earlier in this collisions occur when the hash function maps two different keys to the same location obviouslytwo records cannot be stored in the same location thereforea method used to solve the problem of collisionalso called collision resolution techniqueis applied the two most popular methods of resolving collisions are open addressing chaining in this sectionwe will discuss both these techniques in detail collision resolution by open addressing once collision takes placeopen addressing or closed hashing computes new positions using probe sequence and the next record is stored in that position in this techniqueall the values are stored in the hash table the hash table contains two types of valuessentinel values ( - and data values the presence of sentinel value indicates that the location contains no data value at present but can be used to hold value when key is mapped to particular memory locationthen the value it holds is checked if it contains sentinel valuethen the location is free and the data value can be stored in it howeverif the location already has some data value stored in itthen other slots are examined systematically in 
the forward direction to find a free slot. If even a single free location is not found, then we have an overflow condition. The process of examining memory locations in the hash table is called probing. The open addressing technique can be implemented using linear probing, quadratic probing, double hashing, and rehashing.

Linear Probing: The simplest approach to resolve a collision is linear probing. In this technique, if a value is already stored at the location generated by h(k), then the following hash function is used to resolve the collision:

h(k, i) = [h'(k) + i] mod m
where m is the size of the hash table, h'(k) = k mod m, and i is the probe number that varies from 0 to m-1.

Therefore, for a given key k, the location generated by [h'(k) + 0] mod m is probed first, because for the first probe i = 0. If the location is free, the value is stored in it; else the second probe generates the address of the location given by [h'(k) + 1] mod m. Similarly, if that location is occupied, subsequent probes generate the addresses [h'(k) + 2] mod m, [h'(k) + 3] mod m, [h'(k) + 4] mod m, and so on, until a free location is found.

Note: Linear probing is known for its simplicity. When we have to store a value, we try the slots [h'(k)] mod m, [h'(k) + 1] mod m, [h'(k) + 2] mod m, and so on, until a vacant location is found.

Example: Consider a hash table of size m in which every location initially contains the sentinel value -1, and let h'(k) = k mod m. Using linear probing, the given keys are inserted one by one. For each key k, first compute h(k, 0) = [h'(k) + 0] mod m; if the location is vacant, insert the key at this location.
If the location is occupied, the key cannot be stored there, so we try again with the next probe number. With i = 1, h(k, 1) = [h'(k) + 1] mod m is computed; if this location is also occupied, the probe number is incremented again, giving h(k, 2) = [h'(k) + 2] mod m, then h(k, 3) = [h'(k) + 3] mod m, and so on, until a vacant location is found into which the key is inserted.
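The insertion procedure just described can be sketched in C. This is a minimal sketch of the formula h(k, i) = [h'(k) + i] mod m from the text; the function name, the EMPTY sentinel macro, and the return conventions are illustrative assumptions, not the book's code:

```c
#include <assert.h>

#define EMPTY -1

/* Insert key into table[] of size m using linear probing with
   h'(k) = k % m.  Returns the slot used, or -1 on overflow
   (no vacant slot left in the table). */
int linear_probe_insert(int table[], int m, int key)
{
    for (int i = 0; i < m; i++) {          /* probe number i = 0 .. m-1     */
        int loc = (key % m + i) % m;       /* h(k, i) = [h'(k) + i] mod m   */
        if (table[loc] == EMPTY) {         /* sentinel means slot is free   */
            table[loc] = key;
            return loc;
        }
    }
    return -1;                             /* overflow condition            */
}
```

With a table of size 10, a key such as 72 lands in slot 2 (72 mod 10), and a later key that also hashes to 2 is placed in the next vacant slot.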
For some keys, even more probes may be needed; the procedure is repeated until the hash function generates the address of a location which is vacant and can be used to store the value.

Searching a Value Using Linear Probing: The procedure for searching a value in a hash table is the same as for storing a value. While searching for a value, the array index is re-computed and the key of the element stored at that location is compared with the value being searched. If a match is found, the search operation is successful and the search time is O(1). If the key does not match, the search function begins a sequential search of the array that continues until: the value is found; or the search function encounters a vacant location in the array, indicating that the value is not present; or the search function terminates because it reaches the end of the table and the value is not present. In the worst case, the search operation may have to make n-1 comparisons, and the running time of the search algorithm may take O(n) time. The worst case is encountered when, after scanning all n-1 elements, the value is either present at the last location or not present in the table. Thus, we see that with an increase in the number of collisions, the distance between the array index computed by the hash function and the actual location of the element increases, thereby increasing the search time.

Pros and Cons: Linear probing finds an empty location by doing a linear search in the array beginning from position h(k). Although the algorithm provides good memory caching through good locality of reference, its drawback is that it results in clustering, and thus there is a higher risk of more collisions where one collision has already taken place. The
performance of linear probing is sensitive to the distribution of input values. As the hash table fills, clusters of consecutive cells are formed, and the time required for a search increases with the size of the cluster. In addition, when a new value has to be inserted into the table at a position which is already occupied, it is inserted at the end of the cluster, which again increases the length of the cluster. Generally, an insertion is made between two clusters that are separated by one vacant location. But with linear probing, there are more chances that subsequent insertions will also end up in one of the clusters, thereby potentially increasing the cluster length by an amount much greater than one. The more the number of collisions, the higher the
number of probes required to find a free location, and the lesser is the performance. This phenomenon is called primary clustering. To avoid primary clustering, other techniques such as quadratic probing and double hashing are used.

Quadratic Probing: In this technique, if a value is already stored at the location generated by h(k), then the following hash function is used to resolve the collision:

h(k, i) = [h'(k) + c1*i + c2*i^2] mod m

where m is the size of the hash table, h'(k) = k mod m, i is the probe number that varies from 0 to m-1, and c1 and c2 are constants such that c1 and c2 are non-zero. Quadratic probing eliminates the primary clustering phenomenon of linear probing because, instead of doing a linear search, it does a quadratic search. For a given key k, the location generated by h'(k) mod m is probed first. If the location is free, the value is stored in it; else subsequent locations probed are offset by amounts that depend in a quadratic manner on the probe number. Although quadratic probing performs better than linear probing, in order to maximize the utilization of the hash table, the values of c1 and c2 need to be constrained.

Example: Consider a hash table of size m in which every location initially contains the sentinel value -1, and let h'(k) = k mod m, with the constants c1 and c2 as given. Using quadratic probing, the given keys are inserted one by one. For each key k, first compute h(k, 0) = [h'(k) + c1*0 + c2*0] mod m = h'(k); if the location is vacant, insert the key at this location.
If the location generated for a key is already occupied, the key cannot be stored there, so we try again with the next probe number. With i = 1, h(k, 1) = [h'(k) + c1 + c2] mod m is computed; if this location is also occupied, the probe number is incremented again, giving h(k, 2) = [h'(k) + 2*c1 + 4*c2] mod m, and so on, until a vacant location is found into which the key is inserted.
Searching a Value Using Quadratic Probing: While searching for a value using the quadratic probing technique, the array index is re-computed and the key of the element stored at that location is compared with the value being searched. If the desired key matches the key at that location, the element is present in the hash table and the search is successful; in this case, the search time is O(1). However, if the value does not match, the search function begins a sequential search of the array that continues until: the value is found; or the search function encounters a vacant location in the array, indicating that the value is not present; or the search function terminates because it reaches the end of the table and the value is not present. In the worst case, the search operation may take n-1 comparisons, and the running time of the search algorithm may be O(n). The worst case is encountered when, after scanning all n-1 elements, the value is either present at the last location or not present in the table. Thus, we see that with an increase in the number of collisions, the distance between the array index computed by the hash function and the actual location of the element increases, thereby increasing the search time.

Pros and Cons: Quadratic probing resolves the primary clustering problem that exists in the linear probing technique. It provides good memory caching because it preserves some locality of reference; but linear probing does this task better and gives better cache performance. One of the major drawbacks of quadratic probing is that a sequence of successive probes may explore only a fraction of the table, and this fraction may be quite small. If this happens, then we will not be able to find an empty location in the table despite the fact that the table is by no means full. In the example above, try to insert a further key and you will encounter this
problem. Although quadratic probing is free from primary clustering, it is still liable to what is known as secondary clustering: if there is a collision between two keys that hash to the same location, then the same probe sequence will be followed for both. With quadratic probing, the probability of multiple collisions increases as the table becomes full. This situation is usually encountered when the hash table is more than half full. Quadratic probing is widely applied in the Berkeley Fast File System to allocate free blocks.

Double Hashing: To start with, double hashing uses one hash value and then repeatedly steps forward by an interval until an empty location is reached. The interval is decided using a second, independent hash function;
hence the name double hashing. In double hashing, we use two hash functions rather than a single function. The hash function in the case of double hashing can be given as:

h(k, i) = [h1(k) + i*h2(k)] mod m

where m is the size of the hash table, h1(k) and h2(k) are two hash functions given as h1(k) = k mod m and h2(k) = k mod m', i is the probe number that varies from 0 to m-1, and m' is chosen to be less than m (we can choose m' = m-1 or m-2). When we have to insert a key k in the hash table, we first probe the location given by [h1(k)] mod m, because during the first probe i = 0. If the location is vacant, the key is inserted into it; else subsequent probes generate locations that are at an offset of [h2(k)] mod m from the previous location. Since the offset may vary with every probe depending on the value generated by the second hash function, the performance of double hashing is very close to the performance of the ideal scheme of uniform hashing.

Pros and Cons: Double hashing minimizes repeated collisions and the effects of clustering. That is, double hashing is free from the problems associated with primary clustering as well as secondary clustering.

Example: Consider a hash table of size m in which every location initially contains the sentinel value -1. Using double hashing with h1(k) = k mod m and h2(k) = k mod m', the given keys are inserted one by one. We have h(k, i) = [h1(k) + i*h2(k)] mod m. For each key k, first compute h(k, 0) = h1(k) mod m; if the location is vacant, insert the key at this location.
Each subsequent key is handled in the same way: h(k, 0) = h1(k) is computed, and the key is inserted if the location is vacant. If the location is occupied, the key cannot be stored there, so we try again with the next probe number: with i = 1, h(k, 1) = [h1(k) + h2(k)] mod m is computed; if this location is also occupied, the probe number is incremented again, giving h(k, 2) = [h1(k) + 2*h2(k)] mod m, and so on.
You will see that we may have to probe many times before a vacant location is found for a key. Although double hashing is a very efficient algorithm, it requires m to be a prime number. If the chosen m is not prime, performance degrades; had m been a prime number, the algorithm would have worked very efficiently. Thus, we can say that the performance of the technique is sensitive to the value of m.

Rehashing: When the hash table becomes nearly full, the number of collisions increases, thereby degrading the performance of insertion and search operations. In such cases, a better option is to create a new hash table whose size is double that of the original hash table. All the entries in the original hash table then have to be moved to the new hash table. This is done by taking each entry, computing its new hash value, and inserting it in the new hash table. Though rehashing seems to be a simple process, it is quite expensive and must therefore not be done frequently.

Example: Consider the hash table of the given size below, where the hash function used is h(x) = x mod (table size). Rehash the entries into a new hash table.
Note that the new hash table has double the number of locations of the original table. Now rehash the key values from the old hash table into the new one using the hash function h(x) = x mod (new table size).

Programming Example: Write a program to show searching using closed hashing.

#include <stdio.h>
#include <conio.h>
int ht[10], i, found = 0, key;
void insert_val();
void search_val();
void delete_val();
void display();
int main()
{
    int option;
    clrscr();
    for (i = 0; i < 10; i++)   /* to initialize every element as '-1' */
        ht[i] = -1;
    do
    {
        printf("\n MENU \n 1. Insert \n 2. Search \n 3. Delete \n 4. Display \n 5. Exit");
        printf("\n Enter your option: ");
        scanf("%d", &option);
        switch (option)
        {
            case 1: insert_val();
                break;
            case 2: search_val();
                break;
            case 3: delete_val();
                break;
            case 4: display();
                break;
            default: printf("\nInvalid choice entry!!!\n");
                break;
        }
    } while (option != 5);
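The listing above breaks off before the function definitions. As a minimal, self-contained sketch of the search operation under closed hashing (linear probing with a -1 sentinel), one of those functions might look like the following; the function name and the fixed table size are illustrative assumptions, not the book's code:

```c
#include <assert.h>

#define SIZE  10
#define EMPTY -1

/* Search for key in ht[] using the same probe sequence that insertion
   used: h(k, i) = (k % SIZE + i) % SIZE.  Returns the slot index where
   the key is found, or -1 if an EMPTY slot is reached first or the
   whole table has been scanned (key absent). */
int search_closed(const int ht[], int key)
{
    for (int i = 0; i < SIZE; i++) {
        int loc = (key % SIZE + i) % SIZE;
        if (ht[loc] == key)
            return loc;              /* found                          */
        if (ht[loc] == EMPTY)
            return -1;               /* hole => key was never inserted */
    }
    return -1;                       /* scanned the whole table        */
}
```

Stopping at the first sentinel is what makes the search O(1) in the good case: the probe sequence for a key ends at the first vacant slot, exactly as described in the section on searching with linear probing.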
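The double hashing probe sequence discussed earlier can be sketched in C as well. This is a minimal sketch assuming a prime table size m = 11 and m' = m - 1 = 10; the function name and constants are illustrative, not from the text:

```c
#include <assert.h>

#define M     11     /* table size; double hashing works best when m is prime */
#define M2    10     /* m' = m - 1, used by the second hash function           */
#define EMPTY -1

/* Insert key into table[] using double hashing:
   h(k, i) = [h1(k) + i*h2(k)] mod m, with h1(k) = k mod m and
   h2(k) = k mod m'.  Returns the slot used, or -1 if no vacant
   slot is found along the probe sequence. */
int double_hash_insert(int table[], int key)
{
    int h1 = key % M;                     /* first hash: base position    */
    int h2 = key % M2;                    /* second hash: step interval   */
    for (int i = 0; i < M; i++) {
        int loc = (h1 + i * h2) % M;      /* offset grows by h2 each probe */
        if (table[loc] == EMPTY) {
            table[loc] = key;
            return loc;
        }
    }
    return -1;
}
```

Note that because the step h2(k) differs from key to key, two keys that collide at the same base position follow different probe sequences, which is why double hashing avoids secondary clustering.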