Recursive Relationship in ER Diagram and Constraints


Dynamic programming (usually referred to as DP) is a very powerful technique for solving a particular class of problems. It demands a very elegant formulation of the approach and simple thinking, and the coding part is very easy. The idea is very simple: if you have solved a problem with the given input, then save the result for future reference, so as to avoid solving the same problem again. In short, 'remember your past'. If the given problem can be broken up into smaller sub-problems, and these smaller sub-problems can in turn be divided into still smaller ones, and in this process you observe some overlapping sub-problems, then that is a big hint for DP. Also, the optimal solutions to the sub-problems contribute to the optimal solution of the given problem.
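
For illustration, here is a minimal Java sketch of the 'remember your past' idea: memoized Fibonacci, where the overlapping sub-problems fib(n-1) and fib(n-2) are computed once and then reused.

import java.util.HashMap;
import java.util.Map;

// Memoized Fibonacci: a classic example of overlapping sub-problems.
public class FibMemo {
    private static final Map<Integer, Long> memo = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;
        if (memo.containsKey(n)) return memo.get(n); // reuse a previously solved sub-problem
        long result = fib(n - 1) + fib(n - 2);       // overlapping sub-problems
        memo.put(n, result);                         // save the result for future reference
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // 12586269025, computed in linear time
    }
}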


Consider two buckets, one of 4 L and the other of 9 L. : Bucket 1 (4 L) and Bucket 2 (9 L)

First fill the 9L bucket fully. : 0 L and 9 L

Pour the water into 4L bucket. : 4 L and 5 L

Empty the 4L bucket. : 0 L and 5 L

Pour from the 9 L bucket into the 4 L bucket again. : 4 L and 1 L

Empty the 4 L bucket; you are now left with 1 L of water in the 9 L bucket. : 0 L and 1 L

Now pour this 1L into the 4L bucket : 1 L and 0 L

Refill the 9L bucket. : 1 L and 9 L

Now pour the water from 9L into the 4L bucket until it fills up. : 4 L and 6 L

Now you are left with 6 L water in the 9L bucket.


Self-driving Car
Lung cancer detection system
Face recognition


WCF provides a number of built-in bindings that are designed to fulfill specific needs. You can also define your own custom bindings in WCF to fulfill your needs. All built-in bindings are defined in the System.ServiceModel namespace. Here is the list of 10 built-in bindings in WCF that we commonly use.

Basic binding

This binding is provided by the BasicHttpBinding class. It is designed to expose a WCF service as an ASMX web service, so that old clients (that are still using an ASMX web service) can consume the new service. By default, it uses the HTTP protocol for transport and encodes the message in UTF-8 text format. You can also use HTTPS with this binding.

Web binding

This binding is provided by the WebHttpBinding class. It is designed to expose WCF services as HTTP requests using HTTP-GET and HTTP-POST. It is used with REST based services that may provide output in XML or JSON format. This is very much used with social networks for implementing a syndication feed.

Web Service (WS) binding

This binding is provided by the WSHttpBinding class. It is like the basic binding and uses the HTTP or HTTPS protocol for transport, but it is designed to offer the various WS-* specifications, such as WS-ReliableMessaging, WS-Transactions, WS-Security and so on, which are not supported by the basic binding.

wsHttpBinding = basicHttpBinding + WS-* specifications

WS Dual binding

This binding is provided by the WSDualHttpBinding class. It is like a WSHttpBinding except that it supports bi-directional communication, which means both clients and services can send and receive messages.

TCP binding

This binding is provided by the NetTcpBinding class. It uses the TCP protocol for communication between two machines within an intranet (i.e., the same network). It encodes the message in binary format. This is a faster and more reliable binding compared to the HTTP protocol bindings. It is only used when the communication is WCF-to-WCF, which means both the client and the service must be WCF-based.

IPC binding

This binding is provided by the NetNamedPipeBinding class. It uses named pipe for communication between two services on the same machine. This is the most secure and fastest binding among all the bindings.

MSMQ binding

This binding is provided by the NetMsmqBinding class. It uses MSMQ for transport and offers support for disconnected message queuing. It provides solutions for disconnected scenarios in which the service processes the message at a different time than when the client sent it.

Federated WS binding

This binding is provided by the WSFederationHttpBinding class. It is a specialized form of the WS binding and provides support for federated security.

Peer Network binding

This binding is provided by the NetPeerTcpBinding class. It uses the TCP protocol, but uses peer networking as transport. In this kind of networking, each machine (node) acts as a client and a server to the other nodes. This is used in file-sharing systems like torrents.

MSMQ Integration binding

This binding is provided by the MsmqIntegrationBinding class. It offers support to communicate with existing systems that communicate via MSMQ.


https://www.geeksforgeeks.org/longest-palindromic-substring-set-2/


https://www.geeksforgeeks.org/write-a-c-function-to-print-the-middle-of-the-linked-list/


In WCF, all services expose contracts. The contract is a platform-neutral and standard way of describing what the service does. WCF defines four types of contracts.

Service contracts

Describe which operations the client can perform on the service.

Data contracts

Define which data types are passed to and from the service. WCF defines implicit contracts for built-in types such as int and string, but you can easily define explicit opt-in data contracts for custom types.

Fault contracts

Define which errors are raised by the service, and how the service handles and propagates errors to its clients.

Message contracts

Allow the service to interact directly with messages. Message contracts can be typed or untyped, and are useful in interoperability cases and when there is an existing message format you have to comply with.


Generally there are 2 types of processors, 32-bit and 64-bit.
This actually tells us how much memory a processor can access from a CPU register.
A 32-bit system can address around 2^32 memory addresses, i.e., 4 GB of RAM or physical memory.
A 64-bit system can address around 2^64 memory addresses, i.e., about 18 billion GB (16 exabytes) of RAM. In short, any amount of memory greater than 4 GB can be easily handled by it.

  • To install a 64-bit version of an OS, you need a processor that supports a 64-bit OS.
  • If you have a large amount of RAM in your machine (4 GB or more), only then can you really see the difference between the 32-bit and 64-bit versions of Windows.
  • 64-bit systems can handle large amounts of RAM or physical memory more effectively than 32-bit systems.
  • Using 64-bit can help you a lot with multi-tasking; you can easily switch between various applications without any problems or Windows hanging.
  • If you're a gamer who plays graphics-heavy games like Modern Warfare or GTA V, or you use high-end software like Photoshop or CAD which takes a lot of memory, then you should go for a 64-bit operating system, since it makes multi-tasking with big applications easy and efficient.

You cannot change your OS from 32-bit to 64-bit because they're built on different architectures; you need to re-install a 64-bit version of the OS. 32-bit is also known as x86 or x32. To download the 64-bit version, look for x64.


This means the CPU's clock ticks 3.3 × 10^9 times per second on each core, which roughly corresponds to the number of basic instructions it can execute per second.

CPU clock speed (measured in Hz, kHz, MHz, or GHz) is a handy way to tell the speed of a processor.


A servlet is simply a class which responds to a particular type of network request - most commonly an HTTP request. Basically servlets are usually used to implement web applications - but there are also various frameworks which operate on top of servlets (e.g. Struts) to give a higher-level abstraction than the "here's an HTTP request, write to this HTTP response" level which servlets provide.

A Servlet is a Java application programming interface (API) running on the server machine which can intercept requests made by the client and can generate/send a response accordingly. A well-known example is the HttpServlet, which provides methods to hook on HTTP requests using the popular HTTP methods such as GET and POST.

Servlets run in a servlet container which handles the networking side (e.g. parsing an HTTP request, connection handling etc). One of the best-known open source servlet containers is Tomcat.
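
For illustration, a minimal sketch of such a servlet in Java (the class name and output are made up for this example; the servlet still has to be mapped to a URL, e.g. in web.xml or with the @WebServlet annotation):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A minimal servlet that hooks the HTTP GET method.
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from a servlet");
    }
}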


Library:

It is just a collection of routines (functional programming) or class definitions (object-oriented programming). The reason behind it is simply code reuse, i.e., getting code that has already been written by other developers. The classes or routines normally define specific operations in a domain-specific area. For example, there are libraries of mathematics functions which let the developer just call the function without redoing the implementation of how an algorithm works.

Framework:

In a framework, all the control flow is already there, and there are a bunch of predefined white spots that we should fill out with our code. A framework is normally more complex. It defines a skeleton, and the application defines its own features to fill out the skeleton. In this way, your code will be called by the framework when appropriate. The benefit is that developers do not need to worry about whether a design is good or not, but just about implementing domain-specific functions.


ASCII defines 128 characters, which map to the numbers 0–127. Unicode defines (fewer than) 2^21 characters, which, similarly, map to numbers 0–2^21 (though not all numbers are currently assigned, and some are reserved).

Unicode is a superset of ASCII, and the numbers 0–127 have the same meaning in ASCII as they have in Unicode. For example, the number 65 means "Latin capital letter A".

Because Unicode characters don't generally fit into one 8-bit byte, there are numerous ways of storing Unicode characters in byte sequences, such as UTF-32 and UTF-8.

C follows ASCII and Java follows Unicode.
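
A small Java sketch of the point above: 'A' is code 65 in both ASCII and Unicode, while a non-ASCII character needs more than one byte in UTF-8.

import java.nio.charset.StandardCharsets;

public class CharCodes {
    public static void main(String[] args) {
        System.out.println((int) 'A');                                    // 65
        System.out.println("A".getBytes(StandardCharsets.UTF_8).length); // 1 byte in UTF-8
        System.out.println("€".getBytes(StandardCharsets.UTF_8).length); // 3 bytes in UTF-8 (U+20AC)
    }
}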


While logged in as root, we can use the ps aux | less command. As the list of processes can be quite long and occupy more than a single screen, the output of ps aux can be piped (transferred) to the less command, which lets it be viewed one screen at a time. The output can be advanced one screen forward by pressing the SPACE bar and one screen backward by pressing the b key.


Permission Groups

Each file and directory has three user based permission groups:

  • owner - The owner permissions apply only to the owner of the file or directory; they will not impact the actions of other users.
  • group - The group permissions apply only to the group that has been assigned to the file or directory; they will not affect the actions of other users.
  • all users - The all users permissions apply to all other users on the system; this is the permission group that you want to watch the most.

Permission Types

Each file or directory has three basic permission types:

  • read - The Read permission refers to a user's capability to read the contents of the file.
  • write - The Write permissions refer to a user's capability to write or modify a file or directory.
  • execute - The Execute permission affects a user's capability to execute a file or view the contents of a directory.

We can use the 'chmod' command, which stands for 'change mode'. Using this command, we can set permissions (read, write, execute) on a file/directory for the owner, the group and the world. Syntax:

chmod permissions filename

The table below gives the numbers for all permission types:

Number   Permission Type           Symbol

0        No Permission             ---

1        Execute                   --x

2        Write                     -w-

3        Write + Execute           -wx

4        Read                      r--

5        Read + Execute            r-x

6        Read + Write              rw-

7        Read + Write + Execute    rwx
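
For example, reading the table above, 750 means rwx for the owner, r-x for the group and --- for all users. As an illustrative sketch, the same rwx triplets can also be set from Java on a POSIX filesystem (the file name here is hypothetical):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class Permissions {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("script.sh");  // hypothetical file
        // "rwxr-x---" corresponds to chmod 750: owner rwx, group r-x, others ---
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rwxr-x---");
        Files.setPosixFilePermissions(file, perms);
    }
}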


apt-get is the command-line tool for working with APT software packages.

APT (the Advanced Packaging Tool) is an evolution of the Debian .deb software packaging system. It is a rapid, practical, and efficient way to install packages on your system. Dependencies are managed automatically, configuration files are maintained, and upgrades and downgrades are handled carefully to ensure system stability.


http://www.geeksforgeeks.org/puzzle-1-how-to-measure-45-minutes-using-two-identical-wires/


7

The minimum number of races to be held is 7.
Make groups of 5 horses each and run 5 races. Suppose the five groups are a, b, c, d and e, and the digit after the letter is that horse's rank within its own group (of 5 horses); e.g., d3 is the horse from group d that finished 3rd in its group. [5 RACES DONE]
a1 b1 c1 d1 e1
a2 b2 c2 d2 e2
a3 b3 c3 d3 e3
a4 b4 c4 d4 e4
a5 b5 c5 d5 e5

Now run a race between the group winners (a1, b1, c1, d1, e1). [RACE 6 DONE] Suppose the result is a1 > b1 > c1 > d1 > e1,
which implies a1 must be FIRST overall.
b1 and c1 MAY be (but are not necessarily) 2nd and 3rd.
The 2nd position overall must go to either b1 or a2.
Since we only have to find the top 3 horses, the remaining candidates are b1, b2, a2, a3 and c1, so we race them against each other. [RACE 7 DONE] The only possibilities are:
c1 may be third
b1 may be second or third
b2 may be third
a2 may be second or third
a3 may be third
The final race gives the ANSWER. Suppose the result is a2 > a3 > b1 > c1 > b2;
then the top three horses are a1, a2 and a3.
HENCE THE ANSWER IS 7 RACES.


The major difference between these two is that the result of BFS is always a tree, whereas DFS can be a forest (a collection of trees). Meaning that if BFS is run from a node s, it will construct the tree of only those nodes reachable from s; if there are other nodes in the graph, it will not touch them. DFS, however, will continue its search through the entire graph and construct the forest of all of these connected components. This is, as typically explained, the desired result of each algorithm in most use cases.


If you want to turn an existing object into well-formatted JSON, you can use JSON.stringify(obj), passing the third JSON.stringify argument, which represents the number of spaces to indent with:

var formatted = JSON.stringify(myObject, null, 2);

/* Example result:
{
    "myProp": "GeeksforGeeks",
    "subObj": {
        "prop": "JSON DATA"
    }
}
*/

The resulting JSON representation will be formatted and indented with two spaces!


An n-ary tree is a rooted tree in which each node has no more than n children. It is also sometimes known as an n-way tree, a k-ary tree, or an m-ary tree. A binary tree is the special case where n = 2.
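
A minimal sketch of such a node in Java (each node simply keeps a list of children, which should hold at most n entries):

import java.util.ArrayList;
import java.util.List;

class NaryNode {
    int key;
    List<NaryNode> children = new ArrayList<>(); // at most n children per node

    NaryNode(int key) {
        this.key = key;
    }
}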


A binary tree is a tree where each node can have zero, one or, at most, two child nodes.  Each node is identified by a key or id.

A binary search tree is a binary tree whose nodes are ordered according to the following rules: all the nodes in the left subtree of a node have keys with smaller values than the node, all the nodes in the right subtree have larger values, and both the left and right subtrees must themselves be binary search trees.

So, a binary tree can be a binary search tree, if it follows all the properties of a binary search tree.

http://www.geeksforgeeks.org/a-program-to-check-if-a-binary-tree-is-bst-or-not/
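
A minimal Java sketch of one standard check (passing down min/max bounds; the class and field names are illustrative):

class TreeNode {
    int key;
    TreeNode left, right;
    TreeNode(int key) { this.key = key; }
}

class BstCheck {
    // Every key must lie strictly between the bounds inherited from its ancestors.
    static boolean isBst(TreeNode node, long min, long max) {
        if (node == null) return true;
        if (node.key <= min || node.key >= max) return false;
        return isBst(node.left, min, node.key)    // left subtree: smaller keys
            && isBst(node.right, node.key, max);  // right subtree: larger keys
    }

    static boolean isBst(TreeNode root) {
        return isBst(root, Long.MIN_VALUE, Long.MAX_VALUE);
    }
}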


An electricity meter, electric meter, electrical meter, or energy meter is a device that measures the amount of electric energy consumed by a residence, a business, or an electrically powered device.


A machine-readable code in the form of numbers and a pattern of parallel lines of varying widths, printed on a commodity and used especially for stock control.


A web container (also known as a servlet container) is the component of a web server that interacts with Java servlets. A web container is responsible for managing the lifecycle of servlets, mapping a URL to a particular servlet and ensuring that the URL requester has the correct access rights.


The heap memory is the runtime data area from which the Java VM allocates memory for all class instances and arrays. The heap may be of a fixed or variable size. The garbage collector is an automatic memory management system that reclaims heap memory for objects.

  • Eden Space: The pool from which memory is initially allocated for most objects.

  • Survivor Space: The pool containing objects that have survived the garbage collection of the Eden space.


Traceroute works by sending packets with gradually increasing TTL value, starting with TTL value of one. The first router receives the packet, decrements the TTL value and drops the packet because it then has TTL value zero. The router sends an ICMP Time Exceeded message back to the source. The next set of packets are given a TTL value of two, so the first router forwards the packets, but the second router drops them and replies with ICMP Time Exceeded. Proceeding in this way, traceroute uses the returned ICMP Time Exceeded messages to build a list of routers that packets traverse, until the destination is reached and returns an ICMP Echo Reply message.

The timestamp values returned for each router along the path are the delay (latency) values, typically measured in milliseconds for each packet.

The sender expects a reply within a specified number of seconds. If a packet is not acknowledged within the expected interval, an asterisk is displayed. The Internet Protocol does not require packets to take the same route towards a particular destination, thus hosts listed might be hosts that other packets have traversed. If the host at hop #N does not reply, the hop is skipped in the output.

On Unix-like operating systems, traceroute employs User Datagram Protocol (UDP) datagrams by default, with destination port numbers ranging from 33434 to 33534. The traceroute utility usually has an option to instead use ICMP Echo Request (type 8) packets, like the Windows utility tracert does, or to use TCP SYN packets. If a network has a firewall and operates both Windows and Unix-like systems, more than one protocol must be enabled inbound through the firewall for traceroute to work and receive replies.


Open Shortest Path First (OSPF) is a routing protocol for Internet Protocol (IP) networks. It uses a link state routing (LSR) algorithm and falls into the group of interior gateway protocols (IGPs), operating within a single autonomous system (AS). It is defined as OSPF Version 2 in RFC 2328 (1998) for IPv4.


Enhanced Interior Gateway Routing Protocol (EIGRP) is an advanced distance-vector routing protocol that is used on a computer network for automating routing decisions and configuration. The protocol was designed by Cisco Systems as a proprietary protocol, available only on Cisco routers.


DB2 has a mechanism called a cursor. Using a cursor is like keeping your finger on a particular line of text on a printed page.

In DB2, an application program uses a cursor to point to one or more rows in a set of rows that are retrieved from a table. You can also use a cursor to retrieve rows from a result set that is returned by a stored procedure. Your application program can use a cursor to retrieve rows from a table.

You can retrieve and process a set of rows that satisfy the search condition of an SQL statement. When you use a program to select the rows, the program processes one or more rows at a time.

The SELECT statement must be within a DECLARE CURSOR statement and cannot include an INTO clause. The DECLARE CURSOR statement defines and names the cursor, identifying the set of rows to retrieve with the SELECT statement of the cursor. This set of rows is referred to as the result table.


There are changes that can be performed across domain controllers in Active Directory using 'multi-master replication'. However, performing all changes this way is not practical, so such change requests are funneled through one domain controller that handles them intelligently. That domain controller is dubbed the Operations Master, sometimes known as Flexible Single Master Operations (FSMO).

The Operations Master role is assigned to one (or more) domain controllers, and the roles are classified as forest-wide and domain-wide based on their scope. A minimum of five Operations Master roles is assigned: the forest-wide roles must appear at least once in every forest, and the domain-wide roles at least once in every domain in the forest.


public int sumOfArray(int[] a, int n) {
    // n is the index of the last element to include in the sum
    if (n == 0)
        return a[0];
    else
        return a[n] + sumOfArray(a, n - 1);
}


The only difference is that with module_invoke_all(), for example, func_get_args() is invoked only once, while when using module_invoke() func_get_args() is called each time module_invoke() is called; that is a marginal difference, though.


Name your module

The first step in creating a module is to choose a "short name" for it. This short name will be used in all file and function names in your module, so it must start with a letter, and it must contain only lower-case letters and underscores. For this example, we'll choose "current_posts" as the short name. Important note: Be sure you follow these guidelines and do not use upper case letters in your module's short name, since it is used for both the module's file name and as a function prefix. When you implement Drupal "hooks" (see later portions of tutorial), Drupal will only recognize your hook implementation functions if they have the same function name prefix as the name of the module file.

It's also important to make sure your module does not have the same short name as any theme you will be using on the site.

Create a folder and a module file

Given that our choice of short name is "current_posts" :

  1. Start the module by creating a folder in your Drupal installation at the path:
    • sites/all/modules/current_posts

      In Drupal 6.x and 7.x, sites/all/modules (or sites/all/modules/contrib and sites/all/modules/custom) is the preferred place for non-core modules with sites/all/themes (or sites/all/themes/contrib and sites/all/themes/custom) for non-core themes. By placing all site-specific files in the sites directory, this allows you to more easily update the core files and modules without erasing your customizations. Alternatively, if you have a multi-site Drupal installation and this module is for only one specific site, you can put it in sites/your-site-folder/modules.

  2. Create the info file for the module :
    • Save it as current_posts.info in the directory sites/all/modules/current_posts
    • At a minimum, this file needs to contain the following...
name = Current Posts
description = Description of what this module does
core = 7.x
  3. Create the PHP file for the module :
    • Save it as current_posts.module in the directory sites/all/modules/current_posts
  4. Add an opening PHP tag to the module :
    • <?php
    • Module files begin with the opening PHP tag. Do not place the CVS ID tag in your module. It is no longer needed with drupal.org's conversion to Git. If the coder module gives you error messages about it, then that module has not yet been updated to drupal.org's Git conventions.


Java is a programming language and computing platform first released by Sun Microsystems in 1995. There are lots of applications and websites that will not work unless you have Java installed, and more are created every day. Java is fast, secure, and reliable. From laptops to datacenters, game consoles to scientific supercomputers, cell phones to the Internet, Java is everywhere.


Simple Solution: The solution is to run two nested loops. Start traversing from the left side. For every character, check whether it repeats or not; as soon as a repeating character is found, return it. The time complexity of this solution is O(n^2).

We can Use Sorting to solve the problem in O(n Log n) time. Following are detailed steps.
1) Copy the given array to an auxiliary array temp[].
2) Sort the temp array using a O(nLogn) time sorting algorithm.
3) Scan the input array from left to right. For every element, count its
occurrences in temp[] using binary search. As soon as we find a character that occurs more than once, we return that character.

This step can be done in O(n Log n) time.

An efficient solution is to use Hashing to solve this in O(n) time on average.

  1. Create an empty hash.
  2. Scan each character of the input string and insert it as a key into the hash.
  3. When any character is found to appear more than once (its key is already in the hash), return that character.

See : http://www.geeksforgeeks.org/find-the-first-repeated-character-in-a-string/
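
A minimal Java sketch of the hashing approach above (returns '\0' if no character repeats):

import java.util.HashSet;
import java.util.Set;

class FirstRepeated {
    static char firstRepeatedChar(String s) {
        Set<Character> seen = new HashSet<>();
        for (char c : s.toCharArray()) {
            if (!seen.add(c)) return c; // add() returns false if c was already in the set
        }
        return '\0'; // no repeated character
    }

    public static void main(String[] args) {
        System.out.println(firstRepeatedChar("geeksforgeeks")); // prints 'e'
    }
}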


CREATE INDEX index_name
ON table_name (column1, column2, ...);


No, Java is not purely object oriented.
A purely object-oriented programming language should have only objects, whereas Java has primitive types such as int, char and float, which are not objects.

As in C++ and some other object-oriented languages, variables of Java's primitive data types are not objects. Values of primitive types are either stored directly in fields (for objects) or on the stack (for methods), rather than on the heap, as is commonly true for objects. This was a conscious decision by Java's designers for performance reasons. Because of this, Java is not considered to be a pure object-oriented programming language.


The JVM is an interpreter that is installed on each client machine and is kept up to date with the latest security updates over the Internet. When bytecode is executed, the JVM can take care of security. That is why Java is said to be more secure than other programming languages.


HTML

Step 1: Let's create our HTML structure.

First, we need a container div, which we'll call ".shopping-cart".
Inside the container, we will have a title and three items which will include:

  • two buttons — delete button and favorite button
  • product image
  • product name and description
  • buttons that will adjust quantity of products
  • total price
<div class="shopping-cart">
  <!-- Title -->
  <div class="title">
    Shopping Bag
  </div>

  <!-- Product #1 -->
  <div class="item">
    <div class="buttons">
      <span class="delete-btn"></span>
      <span class="like-btn"></span>
    </div>

    <div class="image">
      <img src="item-1.png" alt="" />
    </div>

    <div class="description">
      <span>Common Projects</span>
      <span>Bball High</span>
      <span>White</span>
    </div>

    <div class="quantity">
      <button class="plus-btn" type="button" name="button">
        <img src="plus.svg" alt="" />
      </button>
      <input type="text" name="name" value="1">
      <button class="minus-btn" type="button" name="button">
        <img src="minus.svg" alt="" />
      </button>
    </div>

    <div class="total-price">$549</div>
  </div>

  <!-- Product #2 -->
  <div class="item">
    <div class="buttons">
      <span class="delete-btn"></span>
      <span class="like-btn"></span>
    </div>

    <div class="image">
      <img src="item-2.png" alt="" />
    </div>

    <div class="description">
      <span>Maison Margiela</span>
      <span>Future Sneakers</span>
      <span>White</span>
    </div>

    <div class="quantity">
      <button class="plus-btn" type="button" name="button">
        <img src="plus.svg" alt="" />
      </button>
      <input type="text" name="name" value="1">
      <button class="minus-btn" type="button" name="button">
        <img src="minus.svg" alt="" />
      </button>
    </div>

    <div class="total-price">$870</div>
  </div>

  <!-- Product #3 -->
  <div class="item">
    <div class="buttons">
      <span class="delete-btn"></span>
      <span class="like-btn"></span>
    </div>

    <div class="image">
      <img src="item-3.png" alt="" />
    </div>

    <div class="description">
      <span>Our Legacy</span>
      <span>Brushed Scarf</span>
      <span>Brown</span>
    </div>

    <div class="quantity">
      <button class="plus-btn" type="button" name="button">
        <img src="plus.svg" alt="" />
      </button>
      <input type="text" name="name" value="1">
      <button class="minus-btn" type="button" name="button">
        <img src="minus.svg" alt="" />
      </button>
    </div>

    <div class="total-price">$349</div>
  </div>
</div>

CSS

* {
  box-sizing: border-box;
}

html, body {
  width: 100%;
  height: 100%;
  margin: 0;
  background-color: #7EC855;
  font-family: 'Roboto', sans-serif;
}

Next, let's create the shopping-cart container itself: a fixed-size white card centered on the page, with a subtle shadow, rounded corners and a column flex layout.

.shopping-cart {
  width: 750px;
  height: 423px;
  margin: 80px auto;
  background: #FFFFFF;
  box-shadow: 1px 2px 3px 0px rgba(0,0,0,0.10);
  border-radius: 6px;

  display: flex;
  flex-direction: column;
}

Next, let's style the first item, which is the title, by changing its height to 60px and giving it some basic styling. The next three items, which are the shopping cart products, will each be 120px high and set to display: flex.

.title {
  height: 60px;
  border-bottom: 1px solid #E1E8EE;
  padding: 20px 30px;
  color: #5E6977;
  font-size: 18px;
  font-weight: 400;
}

.item {
  padding: 20px 30px;
  height: 120px;
  display: flex;
}

.item:nth-child(3) {
  border-top: 1px solid #E1E8EE;
  border-bottom: 1px solid #E1E8EE;
}

Now we've styled the basic structure of our shopping cart.
Let's style our products in order.

The first elements are the delete and favorite buttons.
I've always loved Twitter's heart button animation, I think it would look great on our Shopping Cart, let's implement it.

.buttons {
  position: relative;
  padding-top: 30px;
  margin-right: 60px;
}

.delete-btn, .like-btn {
  display: inline-block;
  cursor: pointer;
}

.delete-btn {
  width: 18px;
  height: 17px;
  background: url("delete-icn.svg") no-repeat center;
}

.like-btn {
  position: absolute;
  top: 9px;
  left: 15px;
  background: url('twitter-heart.png');
  width: 60px;
  height: 60px;
  background-size: 2900%;
  background-repeat: no-repeat;
}

We set class "is-active" for when we click the button to animate it using jQuery, but this is for the next section.

.is-active {
  animation-name: animate;
  animation-duration: .8s;
  animation-iteration-count: 1;
  animation-timing-function: steps(28);
  animation-fill-mode: forwards;
}

@keyframes animate {
  0%   { background-position: left; }
  50%  { background-position: right; }
  100% { background-position: right; }
}

Next is the product image, which needs a 50px right margin.

.image {
  margin-right: 50px;
}

Let's add some basic style to the product name and description.

.description {
  padding-top: 10px;
  margin-right: 60px;
  width: 115px;
}

.description span {
  display: block;
  font-size: 14px;
  color: #43484D;
  font-weight: 400;
}

.description span:first-child {
  margin-bottom: 5px;
}

.description span:last-child {
  font-weight: 300;
  margin-top: 8px;
  color: #86939E;
}

Then we need to add a quantity element, where we have two buttons for adding or removing product quantity. First, make the CSS and then we'll make it work by adding some JavaScript.

.quantity {
  padding-top: 20px;
  margin-right: 60px;
}

.quantity input {
  -webkit-appearance: none;
  border: none;
  text-align: center;
  width: 32px;
  font-size: 16px;
  color: #43484D;
  font-weight: 300;
}

button[class*=btn] {
  width: 30px;
  height: 30px;
  background-color: #E1E8EE;
  border-radius: 6px;
  border: none;
  cursor: pointer;
}

.minus-btn img {
  margin-bottom: 3px;
}

.plus-btn img {
  margin-top: 2px;
}

button:focus,
input:focus {
  outline: 0;
}

And last, which is the total price of the product.

.total-price {
  width: 83px;
  padding-top: 27px;
  text-align: center;
  font-size: 16px;
  color: #43484D;
  font-weight: 300;
}

Let's also make it responsive by adding the following lines of code:

@media (max-width: 800px) {
  .shopping-cart {
    width: 100%;
    height: auto;
    overflow: hidden;
  }
  .item {
    height: auto;
    flex-wrap: wrap;
    justify-content: center;
  }
  .image img {
    width: 50%;
  }
  .image,
  .quantity,
  .description {
    width: 100%;
    text-align: center;
    margin: 6px 0;
  }
  .buttons {
    margin-right: 20px;
  }
}

That's it for the CSS.

JavaScript

Let's make the heart animate when we click on it by adding the following code:

$('.like-btn').on('click', function() {
  $(this).toggleClass('is-active');
});

Cool! Now let's make the quantity buttons work.

$('.minus-btn').on('click', function(e) {
  e.preventDefault();
  var $this = $(this);
  var $input = $this.closest('div').find('input');
  var value = parseInt($input.val());

  if (value > 1) {
    value = value - 1;
  } else {
    value = 0;
  }
  $input.val(value);
});

$('.plus-btn').on('click', function(e) {
  e.preventDefault();
  var $this = $(this);
  var $input = $this.closest('div').find('input');
  var value = parseInt($input.val());

  if (value < 100) {
    value = value + 1;
  } else {
    value = 100;
  }

  $input.val(value);
});

And this is our final version.


An Android service is a component that is used to perform operations in the background, such as playing music, handling network transactions, interacting with content providers, etc. It doesn't have any UI (user interface).

The service can keep running in the background indefinitely, even if the application is destroyed.

Moreover, a service can be bound by a component to perform interactivity and inter-process communication (IPC).

The android.app.Service class is a subclass of the ContextWrapper class.


A broadcast receiver (receiver) is an Android component which allows you to register for system or application events. All registered receivers for an event are notified by the Android runtime once this event happens.

For example, applications can register for the ACTION_BOOT_COMPLETED system event which is fired once the Android system has completed the boot process.

Implementation

A receiver can be registered via the AndroidManifest.xml file.

Alternatively to this static registration, you can also register a receiver dynamically via the Context.registerReceiver() method.

The implementing class for a receiver extends the BroadcastReceiver class.

If the event for which the broadcast receiver has registered happens, the onReceive() method of the receiver is called by the Android system.
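
A minimal sketch of a receiver for the boot-completed event mentioned above (it still has to be registered, e.g. in AndroidManifest.xml together with the RECEIVE_BOOT_COMPLETED permission):

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class BootCompletedReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (Intent.ACTION_BOOT_COMPLETED.equals(intent.getAction())) {
            // react to the completed boot, e.g. start a service or reschedule alarms
        }
    }
}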


Algorithm: areRotations(str1, str2)

1. Create a temp string and store the concatenation of str1 to str1 in temp:
       temp = str1.str1
2. If str2 is a substring of temp, then str1 and str2 are rotations of each other.

Example:
    str1 = "ABACD"
    str2 = "CDABA"

    temp = str1.str1 = "ABACDABACD"
    Since str2 is a substring of temp, str1 and str2 are rotations of each other.

See: http://www.geeksforgeeks.org/a-program-to-check-if-strings-are-rotations-of-each-other/
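
The same algorithm as a minimal Java sketch:

class Rotations {
    // str2 is a rotation of str1 iff they have the same length
    // and str2 is a substring of str1 concatenated with itself.
    static boolean areRotations(String str1, String str2) {
        return str1.length() == str2.length()
            && (str1 + str1).contains(str2);
    }

    public static void main(String[] args) {
        System.out.println(areRotations("ABACD", "CDABA")); // true
    }
}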


We can solve this problem by observing some cases. As N needs to be the LCM of all the numbers, all of them will be divisors of N; and because a number can be taken only once in the sum, all taken numbers should be distinct. The idea is to take every divisor of N once in the sum to maximize the result.
How can we say that the sum we got is the maximal sum? The reason is that we have taken all the divisors of N into our sum; if we now took one more number which is not a divisor of N, the sum would increase but the LCM property would no longer hold for all those integers. So it is not possible to add even one more number to our sum beyond the divisors of N. Our problem therefore boils down to: given N, find the sum of all its divisors, which can be solved in O(sqrt(N)) time.
So the total time complexity of the solution will be O(sqrt(N)) with O(1) extra space.

// C/C++ program to get the maximum sum of numbers
// with the condition that their LCM should be N
#include <bits/stdc++.h>
using namespace std;

// Method returns maximum sum of distinct
// numbers whose LCM is N
int getMaximumSumWithLCMN(int N)
{
    int sum = 0;
    int LIM = sqrt(N);

    // find all divisors which divide 'N'
    for (int i = 1; i <= LIM; i++)
    {
        // if 'i' is a divisor of 'N'
        if (N % i == 0)
        {
            // if both divisors are the same then add
            // it only once, else add both
            if (i == (N / i))
                sum += i;
            else
                sum += (i + N / i);
        }
    }

    return sum;
}

// Driver code to test above method
int main()
{
    int N = 12;
    cout << getMaximumSumWithLCMN(N) << endl;
    return 0;
}

Output: 28

See : http://www.geeksforgeeks.org/maximum-sum-distinct-number-lcm-n/


He plucked 7 flowers from the garden and kept 8 in every temple.

LOGIC: Let the number of flowers he plucked be x and the number of flowers he kept in each temple be y.
Washing the x flowers in the magical pond doubles them: 2x.
After the 1st temple he has 2x - y.
Washing again doubles them: 2(2x - y) = 4x - 2y.
After the 2nd temple he has 4x - 2y - y = 4x - 3y.
Washing again doubles them: 2(4x - 3y) = 8x - 6y.
After the 3rd temple he has 8x - 6y - y = 8x - 7y, and he is left with nothing, so 8x - 7y = 0, i.e., 8x = 7y.
Since 7 and 8 are co-prime, the smallest solution is x = 7 and y = 8.
So x = 7 is the number of flowers plucked, and y = 8 is the number of flowers kept in each temple.


If the polygon is convex, a trivial way to check is that the point lies on the same side of all the segments (if they are traversed in the same order).

You can check that easily with the cross product (its magnitude is proportional to the sine of the angle between the segment and the vector to the point): points giving a positive sign lie on one side of the segment and those giving a negative sign on the other.
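
A minimal Java sketch of that side test; for a convex polygon whose vertices are given in order, the point is inside when this sign is the same (or zero) for every edge a -> b:

class Geometry {
    // Sign of the z-component of (b - a) x (p - a):
    // +1 and -1 correspond to the two sides of the segment a -> b, 0 means collinear.
    static int side(long ax, long ay, long bx, long by, long px, long py) {
        long cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax);
        return Long.signum(cross);
    }
}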


Method 1 (Use Sorting)
1) Sort both strings
2) Compare the sorted strings

Method 2 (Count characters)
This method assumes that the set of possible characters in both strings is small. In the following implementation, it is assumed that the characters are stored using 8 bits and there can be 256 possible characters.
1) Create count arrays of size 256 for both strings. Initialize all values in count arrays as 0.
2) Iterate through every character of both strings and increment the count of character in the corresponding count arrays.
3) Compare the count arrays. If both count arrays are the same, then return true. (A Java sketch of this method follows below.)
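
A minimal Java sketch of Method 2 (assuming 8-bit characters, i.e. at most 256 distinct values):

class AnagramCheck {
    static boolean areAnagrams(String a, String b) {
        if (a.length() != b.length()) return false;
        int[] count = new int[256];          // one counter per possible 8-bit character
        for (int i = 0; i < a.length(); i++) {
            count[a.charAt(i)]++;            // count characters of the first string
            count[b.charAt(i)]--;            // cancel them with the second string
        }
        for (int c : count) {
            if (c != 0) return false;        // any non-zero counter means a mismatch
        }
        return true;
    }
}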


A type expresses that a value has certain properties. A class or interface groups those properties so you can handle them in an organized, named way. You relate a type you're defining to a class using inheritance, but this is not always (and in fact usually is not) how you want to relate two types. Your type may implement any number of interfaces without implying a parent/child relationship with them.

An interface therefore provides a way other than subtyping to relate types. It lets you add an orthogonal concern to the type you're defining, such as Comparable or Closeable. These would be almost unusable if they were classes (one reason is because Java would only allow you to extend that class; and subtyping forces hierarchy, access to unrelated members, and other likely unwanted things).

Historically an interface is somewhat less useful than a trait (e.g. in Scala) because you cannot add any implementation.


HTML and CSS are complementary in the creation and design of a web document.

HTML (Hyper Text Markup Language)

HTML creates the bones of the document.
HTML can exist without CSS.

CSS (Cascading Style Sheets)

CSS changes the way HTML looks.
CSS can be placed separately or directly within HTML.

Example of HTML:

<p>The p tag defines a paragraph.</p>
<p>With a few exceptions, all HTML tags require an opening and closing tag.</p>

Example of CSS:

CSS used separately from HTML:
At the top of the document, the CSS can be contained within its own special section. In this example, we are choosing to make all HTML paragraph tags appear bolded:

<style>
  p { font-weight: bold; }
</style>


CSS used directly within HTML:
The CSS rules can be placed directly within HTML in the document:

<p style="font-weight: bold;">The p tag defines a paragraph.</p>
<p style="font-weight: bold;">The style attribute contained within the tag defines CSS rules chosen from a pre-defined list of properties (such as 'font-weight').</p>


#include<stdio.h> is a preprocessor directive which tells the preprocessor to insert the contents of the stdio.h header file at that particular place. In C/C++ we use facilities like printf(), scanf(), cout and cin. These are not defined by the programmer; they are already declared in the language's standard library.


stdio stands for "standard input output".

Now, printf is a function. When you use it, its declaration must exist somewhere so that you can use it directly. stdio.h is a header file where this and other similar functions are declared; it is the standard input/output header, one of the C standard library header files.


The number has 51 digits, so its last digit has to be 1.
625 x 16 = 10000, so any multiple of 10000 is divisible by 625; therefore only the last four digits matter, and those are 4141.
4141 = 3750 + 391, and 3750 (= 2500 + 1250) is divisible by 625, leaving a remainder of 391.


A flowchart is the diagrammatic representation of an algorithm with the help of symbols carrying certain meanings. Using a flowchart, we can easily understand a program. A flowchart is not language specific; we can use the same flowchart to code a program in different programming languages. Though a flowchart makes coding easier, designing the flowchart itself is not a simple task and is time consuming.


An algorithm is a tool a software developer uses when creating new programs. An algorithm is a step-by-step recipe for processing data; it could be, for example, the process an online store uses to calculate discounts.


Debugging run-time errors can be done by using 2 general techniques:

Technique 1:

Narrow down where in the program the error occurs.

Technique 2:

Get more information about what is happening in the program.

One way to implement both of these techniques is to print the provided information while the program is running.


A compiler is a computer program (or a set of programs) that transforms source code written in a programming language (the source language) into another computer language (the target language), with the latter often having a binary form known as object code.


It is a mechanism allowing you to ensure the authenticity of an assembly. It allows you to ensure that an assembly hasn't been tampered with. It is also necessary if you want to put assemblies into the GAC.


Private Assembly:

  1. Private assembly can be used by only one application.
  2. Private assembly will be stored in the specific application's directory or sub-directory.
  3. There is no other name for private assembly.
  4. Strong name is not required for private assembly.
  5. Private assembly doesn't have any version constraint.

Public Assembly:

  1. Public assembly can be used by multiple applications.
  2. Public assembly is stored in GAC (Global Assembly Cache).
  3. Public assembly is also termed as shared assembly.
  4. Strong name has to be created for public assembly.
  5. Public assembly should strictly enforce version constraint.


To put some form of hierarchical efficiency into IP addressing.

That said, the details don't matter because the lack of IPv4 address space required the abandonment of classes to better utilize the limited address space.

Before 1992, it was easy to determine the subnet size simply by looking at the first few bits of an IP address, which identified its class.


Object-Orientation is all about objects that collaborate by sending messages.

Java has the following features which make it object oriented:

  1. Objects sending message to other objects.
  2. Everything is an object.
  3. Subtype polymorphism
  4. Encapsulation
  5. Data hiding
  6. Inheritance
  7. Abstraction


A surrogate key is any column or set of columns that can be declared as the primary key instead of a "real" or natural key. Sometimes there can be several natural keys that could be declared as the primary key, and these are all called candidate keys. So a surrogate is a candidate key.


SELECT emp_id, emp_name FROM Employee WHERE group = 'A';


Take two arrays, separate the alphabetic and numeric values into them, and sort both arrays.


class GfG {
    public static void main(String[] args) {
        int c = 0, a, temp;
        int n = 153;   // the number to check for the Armstrong property
        temp = n;
        while (n > 0) {
            a = n % 10;            // extract the last digit
            n = n / 10;
            c = c + (a * a * a);   // add the cube of the digit
        }
        if (temp == c)
            System.out.println("armstrong number");
        else
            System.out.println("Not armstrong number");
    }
}


public: it is an access specifier, which means the method can be accessed publicly.
static: it is a modifier, which means the method belongs to the class rather than to an instance, so space for it is set up when the Java program is loaded and the JVM can call it without creating an object.
void: it is the return type, i.e., the method does not return any value.
main(): it is the method (function) name that the JVM looks for as the entry point of the program.


In cryptography, encryption is the process of encoding a message or information in such a way that only authorized parties can access it. Encryption does not of itself prevent interference, but denies the intelligible content to a would-be interceptor. In an encryption scheme, the intended information or message, referred to as plaintext, is encrypted using an encryption algorithm, generating ciphertext that can only be read if decrypted. For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is in principle possible to decrypt the message without possessing the key, but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users.

Some of the encryption algorithms are:

  • AES: Advanced Encryption Standard
  • RSA
  • Blowfish
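
As a minimal illustration of the encrypt/decrypt round trip described above, here is a sketch using AES via Java's standard javax.crypto API (AES-GCM with a random IV; key management and error handling are out of scope here):

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class AesDemo {
    public static void main(String[] args) throws Exception {
        // Generate a fresh 128-bit AES key
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        // A random 12-byte IV, required by GCM mode
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("hello".getBytes(StandardCharsets.UTF_8));

        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        System.out.println(new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8)); // hello
    }
}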


Selenium is a portable software-testing framework for web applications. Selenium provides a record/playback tool for authoring tests without the need to learn a test scripting language (Selenium IDE). It also provides a test domain-specific language (Selenese) to write tests in a number of popular programming languages, including C#, Java, Ruby, PHP and Scala. The tests can then be run against most modern web browsers. Selenium deploys on Windows, Linux and OS X platforms. It is open-source software released under the Apache 2.0 license; web developers can download and use it without charge.


The Digital Signature Algorithm (DSA) is a Federal Information Processing Standard for digital signatures. In August 1991 the National Institute of Standards and Technology (NIST) proposed DSA for use in their Digital Signature Standard (DSS) and adopted it as FIPS 186 in 1993.


cmd -> ipconfig


You could create db.js:

var mysql = require('mysql');
var connection = mysql.createConnection({
    host     : '127.0.0.1',
    user     : 'root',
    password : '',
    database : 'chat'
});

connection.connect(function(err) {
    if (err) throw err;
});

module.exports = connection;

Then in your app.js, you would simply require it.

var express = require('express');
var app = express();
var db = require('./db');

app.get('/save', function(req, res) {
    var post = {from: 'me', to: 'you', msg: 'hi'};
    db.query('INSERT INTO messages SET ?', post, function(err, result) {
        if (err) throw err;
    });
});

app.listen(3000);

This approach allows you to abstract any connection details, wrap anything else you want to expose and require db throughout your application while maintaining one connection to your db thanks to how node require works :)


A directive is essentially a function that executes when the Angular compiler finds it in the DOM. The function(s) can do almost anything, which is why it is rather difficult to define what a directive is. Each directive has a name (like ng-repeat, tabs, make-up-your-own) and each directive determines where it can be used: in an element, attribute, class, or comment.

A directive normally only has a (post)link function. A complicated directive could have a compile function, a pre-link function, and a post-link function.



The difference between factory and service is just like the difference between a function and an object

Factory and Service are just wrappers over a provider.

Factory

A factory can return anything: a class (constructor function), an instance of a class, a string, a number, or a boolean. If you return a constructor function, you can instantiate it in your controller.

myApp.factory('myFactory', function () {
    // any logic here...

    // Return anything. Here it is an object
    return {
        name: 'Joe'
    };
});

Service

A service does not need to return anything, but you have to assign everything to the this variable, because the service will be instantiated (as a constructor, with new) by default and that instance is used as the base object.

myApp.service('myService', function () {
    // any logic here...

    this.name = 'Joe';
});

When should we use them?

A factory is mostly preferable in all cases. It can be used when you have a constructor function which needs to be instantiated in different controllers.

A service is a kind of singleton object. The object returned from a service will be the same for all controllers. It can be used when you want to have a single object for the entire application, e.g. authenticated user details.


$routeProvider is the key service which sets the configuration of URLs, maps them to the corresponding HTML page or ng-template, and attaches a controller to them.


  • Data-binding − It is the automatic synchronization of data between model and view components.

  • Scope − These are objects that refer to the model. They act as a glue between controller and view.

  • Controller − These are JavaScript functions that are bound to a particular scope.

  • Services − AngularJS comes with several built-in services, for example $http, which is used to make XMLHttpRequests. These are singleton objects which are instantiated only once in the app.

  • Filters − These select a subset of items from an array and return a new array.

  • Directives − Directives are markers on DOM elements (such as elements, attributes, css, and more). These can be used to create custom HTML tags that serve as new, custom widgets. AngularJS has built-in directives (ngBind, ngModel...)

  • Templates − These are the rendered view with information from the controller and model. These can be a single file (like index.html) or multiple views in one page using "partials".

  • Routing − It is concept of switching views.

  • Model View Whatever − MVC is a design pattern for dividing an application into different parts (called Model, View and Controller), each with distinct responsibilities. AngularJS does not implement MVC in the traditional sense, but rather something closer to MVVM (Model-View-ViewModel). The Angular JS team refers it humorously as Model View Whatever.

  • Deep Linking − Deep linking allows you to encode the state of application in the URL so that it can be bookmarked. The application can then be restored from the URL to the same state.

  • Dependency Injection − AngularJS has a built-in dependency injection subsystem that helps the developer by making the application easier to develop, understand, and test.


  • Interfaces are more flexible, because a class can implement multiple interfaces.  Since Java does not have multiple inheritance, using abstract classes prevents your users from using any other class hierarchy. In general, prefer interfaces when there are no default implementations or state. Java collections offer good examples of this (Map, Set, etc.).
  • Abstract classes have the advantage of allowing better forward compatibility.  Once clients use an interface, you cannot change it; if they use an abstract class, you can still add behavior without breaking existing code. If compatibility is a concern, consider using abstract classes.
  • Even if you do have default implementations or internal state, consider offering an interface and an abstract implementation of it. This will assist clients, but still allow them greater freedom if desired.


The Open Group Architecture Framework (TOGAF) is a framework for enterprise architecture that provides an approach for designing, planning, implementing, and governing an enterprise information technology architecture.

thumb_up 1 thumb_down 0 flag 0

One way is to use a port number other than default port 1433 or a port determined at system startup for named instances. You can use SQL Server Configuration Manager to set the port for all IP addresses listed in the TCP/IP Properties dialog box. Be sure to delete any value for the TCP Dynamic Ports property for each IP address. You might want to disable the SQL Server Browser service as well or at least hide the SQL Server instance so that the Browser service doesn't reveal it to any applications that inquire which ports the server is listening to. (One reason to not disable it would be if you have multiple instances of SQL Server on the host because it "maps" connections to instances.) You can hide an instance in the properties page for the instance's protocol, although this just means that SQL Server won't respond when queried by client applications looking for a list of SQL Server machines. Making these kinds of changes is security by obscurity, which is arguably not very secure and shouldn't be your only security measure. But they do place speed bumps in the path of attackers trying to find an instance of SQL Server to attack.

thumb_up 0 thumb_down 0 flag 0

10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16

10.0.0.0 through 10.255.255.255
172.16.0.0 through 172.31.255.255
192.168.0.0 through 192.168.255.255
169.254.0.0 through 169.254.255.255 (APIPA link-local only)

Above are the private IPv4 address ranges. (127.0.0.0/8 is the loopback range, not a private range.)

thumb_up 0 thumb_down 0 flag 0

Windows 2000 and Windows 98 provide Automatic Private IP Addressing (APIPA), a service for assigning unique IP addresses on small office/home office (SOHO) networks without deploying the DHCP service. Intended for use with small networks with fewer than 25 clients, APIPA enables Plug and Play networking by assigning unique IP addresses to computers on private local area networks.

APIPA uses a reserved range of IP addresses (169.254.x.x) and an algorithm to guarantee that each address used is unique to a single computer on the private network.

APIPA works seamlessly with the DHCP service. APIPA yields to the DHCP service when DHCP is deployed on a network. A DHCP server can be added to the network without requiring any APIPA-based configuration. APIPA regularly checks for the presence of a DHCP server, and upon detecting one replaces the private networking addresses with the IP addresses dynamically assigned by the DHCP server.

thumb_up 0 thumb_down 0 flag 0

A recurrent neural network (RNN) is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. This makes them applicable to tasks such as unsegmented connected handwriting recognition or speech recognition.

thumb_up 0 thumb_down 0 flag 0

Alternating Current (AC)

  • Alternating current describes the flow of charge that changes direction periodically. As a result, the voltage level also reverses along with the current. AC is used to deliver power to houses, office buildings, etc.

Generating AC

  • AC can be produced using a device called an alternator. This device is a special type of electrical generator designed to produce alternating current.
  • A loop of wire is spun inside of a magnetic field, which induces a current
  • The rotation of the wire can come from any number of means: a wind turbine, a steam turbine, flowing water, and so on.
  • Because the wire spins and enters a different magnetic polarity periodically, the voltage and current alternate on the wire.
  • Generating AC can be compared to our previous water analogy:
  • To generate AC in a set of water pipes, we connect a mechanical crank to a piston that moves water in the pipes back and forth (our "alternating" current).
  • Notice that the pinched section of pipe still provides resistance to the flow of water regardless of the direction of flow.

Direct Current (DC)

Direct current is a bit easier to understand than alternating current. Rather than oscillating back and forth, DC provides a constant voltage or current.
Generating DC
DC can be generated in a number of ways:

  • An AC generator equipped with a device called a "commutator" can produce direct current
  • Use of a device called a "rectifier" that converts AC to DC
  • Batteries provide DC, which is generated from a chemical reaction inside of the battery

Using our water analogy again, DC is similar to a tank of water with a hose at the end.

  • The tank can only push water one way: out the hose. Similar to our DC-producing battery, once the tank is empty, water no longer flows through the pipes.

thumb_up 0 thumb_down 0 flag 0

In multitasking computer operating systems, a daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user. Traditionally, the process names of a daemon end with the letter d, for clarification that the process is, in fact, a daemon, and for differentiation between a daemon and a normal computer program. For example, syslogd is the daemon that implements the system logging facility, and sshd is a daemon that serves incoming SSH connections.

In a Unix environment, the parent process of a daemon is often, but not always, the init process. A daemon is usually either created by a process forking a child process and then immediately exiting, thus causing init to adopt the child process, or by the init process directly launching the daemon. In addition, a daemon launched by forking and exiting typically must perform other operations, such as dissociating the process from any controlling terminal (tty). Such procedures are often implemented in various convenience routines such as daemon(3) in Unix.

Systems often start daemons at boot time which will respond to network requests, hardware activity, or other programs by performing some task. Daemons such as cron may also perform defined tasks at scheduled times.

thumb_up 0 thumb_down 0 flag 0

Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a computer network much like local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. NFS is an open standard defined in Requests for Comments (RFCs), allowing anyone to implement the protocol.

thumb_up 0 thumb_down 0 flag 0

Using the following command in Linux:

ethtool eth0

thumb_up 2 thumb_down 0 flag 0

Determine if the problem is hardware or software.

Is it your installation of the operating system that is causing the lags and lockups, or is it a hardware issue like bad RAM or a blown cap… or bad sectors on a hard drive… or a wonky power supply?

One way to do this, is to boot your computer to a diagnostic OS… The Ultimate Boot CD, for example. Run some tests.

Another way is to remove your hard drive, install a blank one (you are only going to use it for this), and install Windows on it. If everything works great… you've just reduced your possibilities to a bad hard drive, or a bad install of Windows. A few HDD tests, read some S.M.A.R.T. data, and you'd know whether it was the issue or not.

You could open the case, make sure that all the caps are good, all the connections are tight, that RAM and cards are all seated properly… that the heatsink is on the processor properly, drive data cables are secure and in good order.

If it is a laptop…that doesn't change opening it up and checking it out… it just puts that extra effort at the end of the session. Last resort kind of thing. But I've got prior experience with a bent heat pipe causing a heatsink to hit a case at an angle that caused the whole assembly to lift off a processor just enough when everything was screwed down tight that the unit would lag, hang, and overheat. Took it apart… and it ran fine. That was fun.. simulating reassembly to observe what the cooling rig did under pressure.

Start simple. For me when it is on the bench, I boot to Parted Magic. I read the SMART data, and backup important information. If the drive is to blame, the sooner I can get the important information off the drive, the better. Plus, I don't have to worry about passworded accounts this way. Once I tentatively eliminate the drive as a possibility, I'll boot to UBCD and start down the list. That, or I grab the PC-Doctor flash drive, since the corporate entity I work for purchased them. Our shop has a Toolkit, and it has a PC-Doctor suite running out of a customized WinPE… but I prefer the "official" one. It lights an amber LED when there is an issue, and that makes it easy to spot from across the bench.

Assuming the hardware all checks, I move on to the OS. This is where diagnostics start to lose ground to time and objective. More often than not, when repairing a computer, "fix" means "make it work the way it is supposed to". I know that this seems logical… but sometimes what would fix the problem would take far longer to uncover and implement than just nuking the system with a factory recovery.

Case in point, an older woman with an older HP that had been upgraded to Win10 came in tonight, with the system behaving unpredictably. The hardware checked out, and a scan of the system found some infected files and uncommon alterations. Her lags and locks were, at least potentially, being caused by a dirty system. I could have cleaned it. She only used it for surfing the web. She had nothing she wanted to keep (I asked), and she was OK with my performing a factory recovery. I didn't actually "fix" the issue. I obliterated it, and rebuilt Windows on top of the smoking remains.

Now, there is a diagnostic reason for this as well. A clean install of Windows can fix a great many issues, large and small. In fact, it is probably the most common piece of advice (Reinstall Windows) behind, "Check your drivers" and "Reboot the computer." A clean install of Windows can also help reveal issues, without OTHER issues getting in the way… which is why I opened this with the Hard Drive Swap Gambit… putting another hard drive in the machine and installing Windows, and seeing how the system operates. You get to see if that would fix the issue, without touching the issue. Then, you can work backwards.

There have been times that a customer has come in, with a computer that has been in their shop for a decade or two… it would have been upgraded, and upgraded, and their inventory control software can't be reinstalled anymore, because the company that made it doesn't exist… or something similar. So Windows can't be reinstalled. However… the average user? Nope. Back up the data, and nuke it.

Many times this happens due to heavy server load, deadlocks, or a hung process. I would first monitor the database threads to see whether anything has been sleeping for a long time or taking too much time, and kill that process on the database server.

Secondly, I would check the page by recording start and end times around methods to analyse which part is taking too long to respond to the client request, then find and optimize it.

Finally, I would look up the maximum thread and connection settings for both the web server and the database server and try to increase them if needed.

thumb_up 2 thumb_down 0 flag 0

WebSocket is a next-generation bidirectional communication technology for web applications which operates over a single socket and is exposed via a JavaScript interface in HTML5-compliant browsers.

Once you get a Web Socket connection with the web server, you can send data from browser to server by calling a send() method, and receive data from server to browser by an onmessage event handler.

Following is the API which creates a new WebSocket object.

var Socket = new WebSocket(url, [protocol]);

Here the first argument, url, specifies the URL to which to connect. The second argument, protocol, is optional and, if present, specifies a sub-protocol that the server must support for the connection to be successful.

thumb_up 3 thumb_down 0 flag 0

Dynamic Host Configuration Protocol (DHCP)

DHCP is a network protocol for automatically assigning IP addresses to computers.

Suppose you are at a college or are an employee of a company: the public IP address you see belongs to the internet connection of that company's or school's router. But if 100 PCs are connected through that one connection, each of them needs its own local IP address, and this assignment can be done automatically by the DHCP protocol.

As an example, if you and your family members are using the Wi-Fi at home, you will all have different IPs, such as

125.80.0.1 and 125.80.0.5

These are different IPs although you are using the same Wi-Fi; this distribution of IPs is done by DHCP automatically.

Advantages

  • DHCP also keeps track of computers connected to the network and prevents more than one computer from having the same IP address.
  • Automatic allocation of IP addresses from a configured range within the network.

Disadvantage

  • Security Issues: If a rogue DHCP server is introduced to the network, it can offer IP addresses to users connecting to the network, and information sent over the network can then be intercepted, violating privacy and network security.

thumb_up 0 thumb_down 0 flag 0

(x << 1) -> a left shift by one multiplies the number by 2 (a right shift, x >> 1, divides it by 2).
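A quick Java sanity check of the two shift operators (a minimal sketch):

public class ShiftDemo {
    public static void main(String[] args) {
        int x = 6;
        System.out.println(x << 1);   // 12 -> left shift multiplies by 2
        System.out.println(x >> 1);   // 3  -> right shift divides by 2
    }
}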

thumb_up 2 thumb_down 0 flag 0

Latches are level triggered,
i.e. output <-- input when clk = 1.
Flip-flops are edge triggered, i.e. output <-- input when clk transitions from 0 to 1.

Flip-flops can be implemented as two back-to-back latches driven by complementary clock phases. The first latch is driven by the positive phase while the second latch is driven by the negative phase.

thumb_up 0 thumb_down 0 flag 0

Yes, definitely we can declare a class inside an interface.
Example:

interface I
{
    class GfG
    {
        void m1() {
            System.out.println("inside m1");
        }
        void m2() {
            System.out.println("inside m2");
        }
    }
}

class Test
{
    public static void main(String[] args)
    {
        I.GfG ia = new I.GfG();
        ia.m1();
        ia.m2();
    }
}

In this case, the nested class implicitly becomes a public static nested class of the interface, and we can access its members just as we do with any static nested class.

thumb_up 1 thumb_down 0 flag 0

What is Good about Linux:

  • Really fast (modest laptop - 4GB RAM)
  • Easy to update
  • Fast browsing (SO MUCH BETTER THAN WINDOWS 10)
  • No need for much security
  • Advanced desktop (I use MATE/Compiz - love it)
  • Transparent System

Bad about Linux:

  • No MS Office (huge problem)
  • Other non-existent software (Photoshop, Dreamweaver)
  • Fragmented user design - this is a good thing but it makes every distribution a learning curve
  • Developers' GUI vision and kernel vision are not integrated (as I see it)
  • Gaming is Bad (I hear)

Good about Windows:

  • Decent performance
  • MS Office (this is the main reason I need Windows)
  • Gaming (I personally don't)

Bad about Windows

  • Cost
  • Performance (terrible - browser [Chrome] performance on Windows 10 is a huge problem)
  • Regressive behavior of the user interface with no real way to change it
  • Opaque System

thumb_up 0 thumb_down 1 flag 0

  • Distance Vector routing protocols are based on the Bellman-Ford algorithm.
  • Distance Vector routing protocols are less scalable; for example, RIP supports a maximum of 15 hops and IGRP a maximum of 100 hops.
  • Distance Vector protocols are classful routing protocols, which means there is no support for Variable Length Subnet Masks (VLSM) or Classless Inter-Domain Routing (CIDR).
  • Distance Vector routing protocols use hop count (RIP) or a composite metric (IGRP).
  • Being classful, Distance Vector routing protocols do not support discontiguous subnets.
  • Common distance vector routing protocols include: AppleTalk RTMP, IPX RIP, IP RIP, IGRP

Common distance vector routing protocols include:

  • Appletalk RTMP
  • IPX RIP
  • IP RIP
  • IGRP

thumb_up 0 thumb_down 0 flag 0

WiMAX (Worldwide Interoperability for Microwave Access) is a wireless industry coalition dedicated to the advancement of IEEE 802.16 standards for broadband wireless access (BWA) networks.

WiMAX supports mobile, nomadic and fixed wireless applications. A mobile user, in this context, is someone in transit, such as a commuter on a train. A nomadic user is one that connects on a portable device but does so only while stationary -- for example, connecting to an office network from a hotel room and then again from a coffee shop.

thumb_up 0 thumb_down 0 flag 0

While C# has a set of capabilities similar to Java, it has added several new and interesting features. Delegation is the ability to treat a method as a first-class object. A C# delegate is used where Java developers would use an interface with a single method.

Delegates represent methods that are callable without knowledge of the target object.

thumb_up 0 thumb_down 0 flag 0

We can do this using external sorting. Sort chunks of data that are small enough to fit in memory first, write each sorted chunk back to the disk, and then merge the sorted chunks (e.g. with a k-way merge) to produce the fully sorted output.

thumb_up 0 thumb_down 1 flag 0

Priority Queue is similar to a queue, where we insert an element at the back and remove an element from the front, but with one difference: the logical order of elements in the priority queue depends on the priority of the elements. The element with the highest priority will be moved to the front of the queue and the one with the lowest priority will move to the back of the queue. Thus it is possible that when you enqueue an element at the back of the queue, it can move to the front because of its high priority.

Example:
Let's say we have an array of 5 elements: {4, 8, 1, 7, 3} and we have to insert all the elements in the max-priority queue.
First, as the priority queue is empty, 4 will be inserted initially.
Now when 8 is inserted it will move to the front, as 8 is greater than 4.
While inserting 1, as it is the current minimum element in the priority queue, it will remain at the back of the priority queue.
Now 7 will be inserted between 8 and 4, as 7 is smaller than 8.
Now 3 will be inserted before 1, as it is the 2nd minimum element in the priority queue.

We can think of many ways to implement the priority queue.

Naive Approach:
Suppose we have N elements and we have to insert these elements in the priority queue. We can use a list: inserting the N elements takes O(N) time, and sorting them to maintain the priority queue takes O(N log N) time.

Efficient Approach:
We can use heaps to implement the priority queue. It will take O(log N) time to insert and delete each element in the priority queue.

Based on the heap structure, priority queues are also of two types: max-priority queue and min-priority queue.

Let's focus on Max Priority Queue.

Max Priority Queue is based on the structure of max heap and can perform following operations:

maximum(Arr) : It returns the maximum element from Arr.
extract_maximum(Arr) : It removes and returns the maximum element from Arr.
increase_val(Arr, i, val) : It increases the key of the element stored at index i in Arr to the new value val.
insert_val(Arr, val) : It inserts the element with value val into Arr.

Implementation:

length = number of elements in Arr.

Maximum :

int maximum(int Arr[])
{
    return Arr[1];  // as the maximum element is the root element in the max heap.
}

Complexity: O(1)

Extract Maximum: In this operation, the maximum element will be returned and the last element of heap will be placed at index 1 and max_heapify will be performed on node 1 as placing last element on index 1 will violate the property of max-heap.

int extract_maximum(int Arr[])
{
    if (length == 0)
    {
        cout << "Can't remove element as queue is empty";
        return -1;
    }
    int max = Arr[1];
    Arr[1] = Arr[length];
    length = length - 1;
    max_heapify(Arr, 1);
    return max;
}

Complexity: O(logN).

Increase Value: In case increasing value of any node, may violate the property of max-heap, so we will swap the parent's value with the node's value until we get a larger value on parent node.

void increase_value(int Arr[], int i, int val)
{
    if (val < Arr[i])
    {
        cout << "New value is less than current value, can't be inserted" << endl;
        return;
    }
    Arr[i] = val;
    while (i > 1 && Arr[i/2] < Arr[i])
    {
        swap(Arr[i/2], Arr[i]);
        i = i/2;
    }
}

Complexity : O(log N).

Insert Value :

void insert_value(int Arr[], int val)
{
    length = length + 1;
    Arr[length] = -1;  // assuming all the numbers greater than 0 are to be inserted in the queue.
    increase_value(Arr, length, val);
}

Complexity: O(log N).

Example:

Initially there are 5 elements in the priority queue.
Operation: insert_value(Arr, 6)
Inserting another element with value 6 violates the property of the max-priority queue, so it is swapped with its parent having value 4, thus maintaining the max-priority queue.

Operation: extract_maximum
After removing 8 and placing 4 at node 1, the max-heap property is violated. So max_heapify(Arr, 1) will be performed, which will restore the property of the max-priority queue.

As discussed above, like heaps, we can use priority queues in the scheduling of jobs. Suppose there are N jobs in the queue, each having its own priority. Since the job with the maximum priority should be completed first and removed from the queue, we can use the priority queue's extract_maximum operation here. If at every instant we have to add a new job to the queue, we can use the insert_value operation, as it will insert the element in O(log N) time and will also maintain the max-heap property.
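As a hedged illustration, here is a minimal job-scheduling sketch using java.util.PriorityQueue (a binary heap underneath); the Job record and the priorities are made up for the example:

import java.util.Comparator;
import java.util.PriorityQueue;

public class JobScheduler {
    record Job(String name, int priority) { }

    public static void main(String[] args) {
        // Order by priority descending, i.e. behave like a max-priority queue.
        PriorityQueue<Job> queue =
                new PriorityQueue<>(Comparator.comparingInt(Job::priority).reversed());

        queue.offer(new Job("backup", 4));    // insert, O(log N)
        queue.offer(new Job("alert", 8));
        queue.offer(new Job("cleanup", 1));

        while (!queue.isEmpty()) {
            // poll() removes the highest-priority job, like extract_maximum, O(log N)
            System.out.println("Running " + queue.poll().name());
        }
        // Prints: alert, backup, cleanup
    }
}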

thumb_up 0 thumb_down 0 flag 0

Testing a Big Data application is more a verification of its data processing than testing the individual features of the software product. When it comes to Big Data testing, performance and functional testing are the key.

In Big Data testing, QA engineers verify the successful processing of terabytes of data using a commodity cluster and other supportive components. It demands a high level of testing skill, as the processing is very fast. Processing may be of three types:

  1. Batch
  2. Real Time
  3. Interactive

Along with this, data quality is also an important factor in big data testing. Before testing the application, it is necessary to check the quality of the data, and this should be considered part of database testing. It involves checking various characteristics like conformity, accuracy, duplication, consistency, validity, data completeness, etc.

Big Data Testing can be broadly divided into three steps:

Step 1: Data Staging Validation

The first step of big data testing, also referred to as the pre-Hadoop stage, involves process validation.

  1. Data from various sources like RDBMS, weblogs etc. should be validated to make sure that correct data is pulled into the system.
  2. Comparing source data with the data pushed into the Hadoop system to make sure they match.
  3. Verify the right data is extracted and loaded into the correct HDFS location.

Step 2: "Map Reduce"Validation

The second step is a validation of "Map Reduce". In this stage, the tester verifies the business logic validation on every node and then validating them after running against multiple nodes, ensuring that the

  1. Map Reduce process works correctly
  2. Data aggregation or segregation rules are implemented on the data
  3. Key value pairs are generated
  4. Validating the data after Map Reduce process

Step 3: Output Validation Phase

The final or third stage of Big Data testing is the output validation process. The output data files are generated and ready to be moved to an EDW (Enterprise Data Warehouse) or any other system based on the requirement.

Activities in the third stage include

  1. To check the transformation rules are correctly applied
  2. To check the data integrity and successful data load into the target system
  3. To check that there is no data corruption by comparing the target data with the HDFS file system data

Architecture Testing

Hadoop processes very large volumes of data and is highly resource intensive. Hence, architectural testing is crucial to ensure the success of your Big Data project. A poorly or improperly designed system may lead to performance degradation, and the system could fail to meet the requirements. At least Performance and Failover test services should be done in a Hadoop environment.

Performance testing includes testing of job completion time, memory utilization, data throughput and similar system metrics, while the motive of the Failover test service is to verify that data processing occurs seamlessly in case of failure of data nodes.

Performance Testing

Performance Testing for Big Data includes following actions

  1. Data ingestion and throughput: In this stage, the tester verifies how fast the system can consume data from various data sources. Testing involves identifying the number of messages that the queue can process in a given time frame. It also includes how quickly data can be inserted into the underlying data store, for example the insertion rate into a Mongo or Cassandra database.
  2. Data Processing: It involves verifying the speed with which the queries or map reduce jobs are executed. It also includes testing the data processing in isolation when the underlying data store is populated with the data sets, for example running Map Reduce jobs on the underlying HDFS.
  3. Sub-Component Performance: These systems are made up of multiple components, and it is essential to test each of these components in isolation. For example, how quickly a message is indexed and consumed, map reduce jobs, query performance, search, etc.

thumb_up 0 thumb_down 0 flag 0

SQL SERVER/ ORACLE/ MS ACCESS:

ALTER TABLE Persons
DROP CONSTRAINT PK_Person;

MYSQL:

ALTER TABLE Persons
DROP PRIMARY KEY;

thumb_up 0 thumb_down 0 flag 0

http://www.geeksforgeeks.org/implement-stack-using-queue/

thumb_up 0 thumb_down 0 flag 0

Router:- It is a layer 3 device used for the exchange of layer 3 protocols like IP. It originates or routes incoming packets according to the destination IP and the exit interface it learnt or was configured with in its routing table. This is called the next hop for a particular network routed by the router. Basically, communication takes place by the exchange of layer 3 packets, which mainly contain the source and destination IP addresses but also other detailed information. This generally takes place at the core and distribution layers of the Cisco 3-layer architecture.

Switch:- A switch is a layer 2 device which works on the basis of layer 2 frames. A frame consists of many elements, the most important being the source and destination MAC addresses. It communicates with the help of frames to reach the destination. This generally takes place at the access layer of the Cisco 3-layer architecture.

thumb_up 0 thumb_down 0 flag 0

Given that creating an index requires additional disk space (277,778 blocks extra from the above example, a ~28% increase), and that too many indexes can cause issues arising from the file systems size limits, careful thought must be used to select the correct fields to index.

Since indexes are only used to speed up the searching for a matching field within the records, it stands to reason that indexing fields used only for output would be simply a waste of disk space and processing time when doing an insert or delete operation, and thus should be avoided. Also given the nature of a binary search, the cardinality or uniqueness of the data is important. Indexing on a field with a cardinality of 2 would split the data in half, whereas a cardinality of 1,000 would return approximately 1,000 records. With such a low cardinality the effectiveness is reduced to a linear sort, and the query optimizer will avoid using the index if the cardinality is less than 30% of the record number, effectively making the index a waste of space.

thumb_up 0 thumb_down 1 flag 0

What does a database actually do to find out what matches a select statement?

To be blunt, it's a matter of brute force. Simply, it reads through each candidate record in the database and matches the expression to the fields. So, if you have "select * from table where name = 'fred'", it literally runs through each record, grabs the "name" field, and compares it to 'fred'.

Now, if the "table.name" field is indexed, then the database will (likely, but not necessarily) use the index first to locate the candidate records to apply the actual filter to.

This reduces the number of candidate records to apply the expression to, otherwise it will just do what we call a "table scan", i.e. read every row.

But fundamentally, however it locates the candidate records is separate from how it applies the actual filter expression, and, obviously, there are some clever optimizations that can be done.

How does a database interpret a join differently to a query with several "where key1 = key2" statements?

Well, a join is used to make a new "pseudo table", upon which the filter is applied. So, you have the filter criteria and the join criteria. The join criteria is used to build this "pseudo table" and then the filter is applied against that. Now, when interpreting the join, it's again the same issue as the filter -- brute force comparisons and index reads to build the subset for the "pseudo table".

How does the database store all its memory?

One of the keys to a good database is how it manages its I/O buffers. But it basically matches RAM blocks to disk blocks. With modern virtual memory managers, a simpler database can almost rely on the VM as its memory buffer manager. The high-end DBs do all this themselves.

thumb_up 0 thumb_down 0 flag 0

When data is stored on disk based storage devices, it is stored as blocks of data. These blocks are accessed in their entirety, making them the atomic disk access operation. Disk blocks are structured in much the same way as linked lists; both contain a section for data, a pointer to the location of the next node (or block), and both need not be stored contiguously.

Due to the fact that a number of records can only be sorted on one field, we can state that searching on a field that isn't sorted requires a Linear Search which requires N/2 block accesses (on average), where N is the number of blocks that the table spans. If that field is a non-key field (i.e. doesn't contain unique entries) then the entire table space must be searched at N block accesses.

Whereas with a sorted field, a Binary Search may be used, this has log2 N block accesses. Also since the data is sorted given a non-key field, the rest of the table doesn't need to be searched for duplicate values, once a higher value is found. Thus the performance increase is substantial.

thumb_up 1 thumb_down 0 flag 0

B+ trees, typically; you should look them up. It's a straightforward technique that has been around for years. Its benefit is shared with most any balanced tree: consistent access to the nodes, plus all the leaf nodes are linked so you can easily traverse from node to node in key order. So, with an index, the rows can be considered "sorted" for specific fields in the database, and the database can leverage that information to its benefit for optimizations. This is distinct from, say, using a hash table for an index, which only lets you get to a specific record quickly. In a B-tree you can quickly get not just to a specific record, but to a point within a sorted list.

The actual mechanics of storing and indexing rows in the database are really pretty straight forward and well understood. The game is managing buffers, and converting SQL in to efficient query paths to leverage these basic storage idioms.

Then, there's the whole multi-users, locking, logging, and transactions complexity on top of the storage idiom.

How does it work?

Firstly, let's outline a sample database table schema;

Field name       Data type      Size on disk
id (Primary key) Unsigned INT   4 bytes
firstName        Char(50)       50 bytes
lastName         Char(50)       50 bytes
emailAddress     Char(100)      100 bytes

Note: char was used in place of varchar to allow for an accurate size on disk value. This sample database contains five million rows, and is unindexed. The performance of several queries will now be analyzed. These are a query using the id (a sorted key field) and one using the firstName (a non-key unsorted field).

Example 1 - sorted vs unsorted fields

Given our sample database of r = 5,000,000 records of a fixed size giving a record length of R = 204 bytes and they are stored in a table using the MyISAM engine which is using the default block size B = 1,024 bytes. The blocking factor of the table would be bfr = (B/R) = 1024/204 = 5 records per disk block. The total number of blocks required to hold the table is N = (r/bfr) = 5000000/5 = 1,000,000 blocks.

A linear search on the id field would require an average of N/2 = 500,000 block accesses to find a value, given that the id field is a key field. But since the id field is also sorted, a binary search can be conducted requiring an average of log2 1000000 ≈ 19.93, i.e. 20 block accesses. Instantly we can see this is a drastic improvement.

Now the firstName field is neither sorted nor a key field, so a binary search is impossible, nor are the values unique, and thus the table will require searching to the end for an exact N = 1,000,000 block accesses. It is this situation that indexing aims to correct.

Given that an index record contains only the indexed field and a pointer to the original record, it stands to reason that it will be smaller than the multi-field record that it points to. So the index itself requires fewer disk blocks than the original table, which therefore requires fewer block accesses to iterate through. The schema for an index on the firstName field is outlined below;

Field name       Data type      Size on disk
firstName        Char(50)       50 bytes
(record pointer) Special        4 bytes

Note: Pointers in MySQL are 2, 3, 4 or 5 bytes in length depending on the size of the table.

Example 2 - indexing

Given our sample database of r = 5,000,000 records with an index record length of R = 54 bytes and using the default block size B = 1,024 bytes. The blocking factor of the index would be bfr = (B/R) = 1024/54 = 18 records per disk block. The total number of blocks required to hold the index is N = (r/bfr) = 5000000/18 = 277,778 blocks.

Now a search using the firstName field can utilise the index to increase performance. This allows for a binary search of the index with an average of log2 277778 ≈ 18.08, i.e. 19 block accesses. Finding the address of the actual record requires a further block access to read, bringing the total to 19 + 1 = 20 block accesses, a far cry from the 1,000,000 block accesses required to find a firstName match in the non-indexed table.

thumb_up 0 thumb_down 0 flag 0

A thread should be used in a long running process that would block the UI from updating. If it's more than a second or two you might want to put it into a background thread and notify the user with a dialog or spinner or something. If you lock the UI thread for more than 5 seconds the user will be prompted with a "kill or wait" option by the OS.

A service does not run on a separate thread, so it will block the UI, but you can spawn a new thread within a service. A service is used more for something that should happen on an interval, or keep running/checking for something when there is no UI shown.

thumb_up 0 thumb_down 0 flag 0

It requires three data structures. One is a hash table which is used to cache the key/values so that given a key we can retrieve the cache entry at O(1). Second one is a double linked list for each frequency of access. The max frequency is capped at the cache size to avoid creating more and more frequency list entries. If we have a cache of max size 4 then we will end up with 4 different frequencies. Each frequency will have a double linked list to keep track of the cache entries belonging to that particular frequency. The third data structure would be to somehow link these frequencies lists. It can be either an array or another linked list so that on accessing a cache entry it can be easily promoted to the next frequency list in time O(1).
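A rough Java sketch of those three structures (the class and method names are hypothetical; unlike the description above it does not cap the maximum frequency, and it uses a LinkedHashSet per frequency instead of a hand-rolled doubly linked list, which still gives O(1) operations):

import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

class LFUCache<K, V> {
    private final int capacity;
    private final Map<K, V> values = new HashMap<>();                       // key -> value
    private final Map<K, Integer> freq = new HashMap<>();                   // key -> access frequency
    private final Map<Integer, LinkedHashSet<K>> buckets = new HashMap<>(); // frequency -> keys, oldest first
    private int minFreq = 0;

    LFUCache(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        if (!values.containsKey(key)) return null;
        touch(key);                              // promote to the next frequency list
        return values.get(key);
    }

    public void put(K key, V value) {
        if (capacity <= 0) return;
        if (values.containsKey(key)) {           // update an existing entry
            values.put(key, value);
            touch(key);
            return;
        }
        if (values.size() >= capacity) {         // evict the least frequently used key
            K evict = buckets.get(minFreq).iterator().next();
            buckets.get(minFreq).remove(evict);
            values.remove(evict);
            freq.remove(evict);
        }
        values.put(key, value);
        freq.put(key, 1);
        buckets.computeIfAbsent(1, f -> new LinkedHashSet<>()).add(key);
        minFreq = 1;
    }

    private void touch(K key) {
        int f = freq.get(key);
        buckets.get(f).remove(key);
        if (f == minFreq && buckets.get(f).isEmpty()) minFreq++;
        freq.put(key, f + 1);
        buckets.computeIfAbsent(f + 1, x -> new LinkedHashSet<>()).add(key);
    }
}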

thumb_up 0 thumb_down 0 flag 0

It uses a hash table to cache the entries and a double linked list to keep track of the access order. If an entry is inserted, updated or accessed, it gets removed and re-linked before the head node. The node before head is the most recently used and the node after is the eldest node. When the cache reaches its maximum size the least recently used entry will be evicted from the cache.

This simple LRU algorithm is implemented by simply extending LinkedHashMap.
In order to keep track of access order, we must pass access order true while creating LinkedHashMap.
new LRUCache(pageCacheSize, pageCacheSize, 0.75f, true)
If the cache size reaches the maximum size, we want the eldest entry (least recently used) to be removed. To implement this, we simply need to override the removeEldestEntry method.
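A minimal Java sketch along those lines (the LRUCache class name and constructor signature are assumed here so that they match the call shown above):

import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    LRUCache(int maxSize, int initialCapacity, float loadFactor, boolean accessOrder) {
        super(initialCapacity, loadFactor, accessOrder);  // accessOrder = true keeps entries in access order
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;   // evict the least recently used entry once we grow past maxSize
    }
}

Usage would then look like new LRUCache<String, String>(pageCacheSize, pageCacheSize, 0.75f, true).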

thumb_up 1 thumb_down 0 flag 0

LRU is a cache eviction algorithm called least recently used cache.

It uses a hash table to cache the entries and a double linked list to keep track of the access order. If an entry is inserted, updated or accessed, it gets removed and re-linked before the head node. The node before head is the most recently used and the node after is the eldest node. When the cache reaches its maximum size the least recently used entry will be evicted from the cache.

LFU is a cache eviction algorithm called least frequently used cache.

It requires three data structures. One is a hash table which is used to cache the key/values so that given a key we can retrieve the cache entry at O(1). Second one is a double linked list for each frequency of access. The max frequency is capped at the cache size to avoid creating more and more frequency list entries. If we have a cache of max size 4 then we will end up with 4 different frequencies. Each frequency will have a double linked list to keep track of the cache entries belonging to that particular frequency. The third data structure would be to somehow link these frequencies lists. It can be either an array or another linked list so that on accessing a cache entry it can be easily promoted to the next frequency list in time O(1).

thumb_up 0 thumb_down 1 flag 0

LRU is a cache eviction algorithm called least recently used cache. The algorithm presented here is used in the Apache KahaDB module, class LRUCache. The algorithm discards the least recently used items first as the cache reaches its maximum size.

thumb_up 0 thumb_down 0 flag 0

A Web-based digital video technology by Microsoft, codenamed "WPF/E". Silverlight is a cross-browser, cross-platform plug-in for delivering media and rich interactive applications for the Web. The Silverlight browser plug-in is freely available for all major browsers, including Mozilla Firefox, Apple's Safari and Windows Internet Explorer running on Windows. It is a competitor to both the Adobe Macromedia Flash player and the Apple QuickTime player.

Silverlight is based on the Microsoft .NET Framework and can be integrated with existing Web infrastructure and applications, including Apache and PHP, and JavaScript and XHTML on the client. Using Silverlight, designers are able to prepare media for encoding and distribution, and create W3C standards-compliant sites.

thumb_up 2 thumb_down 0 flag 0

  • SQL databases are primarily called Relational Databases (RDBMS), whereas NoSQL databases are primarily called non-relational or distributed databases.
  • SQL databases are table-based databases whereas NoSQL databases are document-based, key-value pairs, graph databases or wide-column stores. This means that SQL databases represent data in the form of tables consisting of n number of rows of data, whereas NoSQL databases are collections of key-value pairs, documents, graph databases or wide-column stores which do not have standard schema definitions to which they need to adhere.
  • SQL databases have a predefined schema whereas NoSQL databases have a dynamic schema for unstructured data.
  • SQL databases are vertically scalable whereas NoSQL databases are horizontally scalable. SQL databases are scaled by increasing the horse-power of the hardware. NoSQL databases are scaled by increasing the database servers in the pool of resources to reduce the load.
  • SQL databases use SQL (structured query language) for defining and manipulating the data, which is very powerful. In a NoSQL database, queries are focused on collections of documents. Sometimes this is also called UnQL (Unstructured Query Language). The syntax of using UnQL varies from database to database.
  • SQL database examples: MySql, Oracle, Sqlite, Postgres and MS-SQL. NoSQL database examples: MongoDB, BigTable, Redis, RavenDb, Cassandra, Hbase, Neo4j and CouchDb
  • For complex queries: SQL databases are good fit for the complex query intensive environment whereas NoSQL databases are not good fit for complex queries. On a high-level, NoSQL don't have standard interfaces to perform complex queries, and the queries themselves in NoSQL are not as powerful as SQL query language.
  • For the type of data to be stored: SQL databases are not best fit for hierarchical data storage. But, NoSQL database fits better for the hierarchical data storage as it follows the key-value pair way of storing data similar to JSON data. NoSQL database are highly preferred for large data set (i.e for big data). Hbase is an example for this purpose.
  • For scalability: In most typical situations, SQL databases are vertically scalable. You can manage increasing load by increasing the CPU, RAM, SSD, etc, on a single server. On the other hand, NoSQL databases are horizontally scalable. You can just add few more servers easily in your NoSQL database infrastructure to handle the large traffic.
  • For high transactional based applications: SQL databases are the best fit for heavy-duty transactional type applications, as they are more stable and promise atomicity as well as integrity of the data. While you can use NoSQL for transaction purposes, it is still not comparable and stable enough under high load and for complex transactional applications.
  • For support: Excellent support is available for all SQL databases from their vendors. There are also a lot of independent consultants who can help you with SQL databases for very large scale deployments. For some NoSQL databases you still have to rely on community support, and only limited outside experts are available to help you set up and deploy your large-scale NoSQL deployments.
  • For properties: SQL databases emphasize ACID properties (Atomicity, Consistency, Isolation and Durability) whereas NoSQL databases follow Brewer's CAP theorem (Consistency, Availability and Partition tolerance).
  • For DB types: On a high-level, we can classify SQL databases as either open-source or close-sourced from commercial vendors. NoSQL databases can be classified on the basis of way of storing data as graph databases, key-value store databases, document store databases, column store database and XML databases.

thumb_up 0 thumb_down 0 flag 0

First approach:

public class Permutation {
    public static void main(String[] args)
    {
        String str = "ABC";
        int n = str.length();
        Permutation permutation = new Permutation();
        permutation.permute(str, 0, n - 1);
    }

    /**
     * permutation function
     * @param str string to calculate permutation for
     * @param l starting index
     * @param r end index
     */
    private void permute(String str, int l, int r)
    {
        if (l == r)
            System.out.println(str);
        else
        {
            for (int i = l; i <= r; i++)
            {
                str = swap(str, l, i);
                permute(str, l + 1, r);
                str = swap(str, l, i);
            }
        }
    }

    /**
     * Swap characters at positions
     * @param a string value
     * @param i position 1
     * @param j position 2
     * @return swapped string
     */
    public String swap(String a, int i, int j)
    {
        char temp;
        char[] charArray = a.toCharArray();
        temp = charArray[i];
        charArray[i] = charArray[j];
        charArray[j] = temp;
        return String.valueOf(charArray);
    }
}

Better approach:

public static void permutation(String str) {
    permutation("", str);
}

private static void permutation(String prefix, String str) {
    int n = str.length();
    if (n == 0) System.out.println(prefix);
    else {
        for (int i = 0; i < n; i++)
            permutation(prefix + str.charAt(i), str.substring(0, i) + str.substring(i + 1, n));
    }
}

thumb_up 2 thumb_down 0 flag 0

Defect life cycle, also known as bug life cycle, is the journey a defect goes through during its lifetime. It varies from organization to organization and also from project to project, as it is governed by the software testing process and also depends upon the tools used.

thumb_up 2 thumb_down 0 flag 0

The "V-model" concept means verification and validation model. This is a consequence of performing the processes in a certain order. The next step will be fulfilled only after completing the previous one.

In accordance with this model, testing is conducted in parallel with the proper development phase. Schematically, such approach reminds an English letter "V", hence the model's name.

What Are the Development Phases on the V-Model Basis?

  • Requirements (business requirements, different types of specification).
  • Designing of the architecture (HLD - High Level Design).
  • LLD - Low Level Design.
  • Realization phase.
  • Code writing.

As with the waterfall development model, the statement of requirements and the creation of the specification are the opening stage of the product creation process. Documentation testing, system checking, and acceptance testing are applied for checking the correct operation of this phase.

But before starting the development itself, one creates the test plan which is focused on functional capabilities of the future system according to the certain requirements.

On the stage of designing the system architecture, the peculiarities of platform, system itself, technical decisions, processes, and services are defined. Integration testing is performed for checking the correct operation of the specified system components as a whole.

The low level design phase presupposes defining the logic of each system component, the classes and links between them are specified. Component testing will be conducted for checking the system coordination.

The code development process is executed at the realization stage of the system architecture. The coding itself is fulfilled by the development team in accordance with the requirements when the architecture is fully thought out and modeled.

thumb_up 4 thumb_down 0 flag 0

  • Verification means checking whether the system, service or product is developed as mentioned in the requirements. Its main purpose is to check whether the system meets all the requirements and matches the blueprint given.
  • Validation means checking whether the system, service or product is working as per the requirements. Validation often includes various types of testing, such as black box testing, white box testing, load testing and much more, for example to check whether the data entered is in the required format or not.

thumb_up 0 thumb_down 0 flag 0

In software development, static testing, also called dry run testing, is a form of software testing where the actual program or application is not used. Instead this testing method requires programmers to manually read their own code to find any errors. Static testing is a stage of White Box Testing.

thumb_up 0 thumb_down 0 flag 0

Fuzz testing or fuzzing is a software testing technique used to discover coding errors and security loopholes in software, operating systems or networks by inputting massive amounts of random data, called fuzz, to the system in an attempt to make it crash.

thumb_up 0 thumb_down 0 flag 0

  • The tester sees each defect in a neutral perspective

  • The tester is totally unbiased

  • The tester sees what has been built rather than what the developer thought

  • The tester makes no assumptions regarding quality

thumb_up 1 thumb_down 0 flag 0

Independent testing means testing carried out by an independent team, i.e. by people other than the developer, in order to avoid author bias; it is often more effective at finding defects and failures.

thumb_up 0 thumb_down 0 flag 0

MySQL provides a LIMIT clause that is used to specify the number of records to return.

The LIMIT clause makes it easy to code multi-page results or pagination with SQL, and is very useful on large tables, since returning a large number of records can impact performance.

thumb_up 1 thumb_down 0 flag 0

The bubbles event property returns a Boolean value that indicates whether or not an event is a bubbling event.

Event bubbling directs an event to its intended target. It works like this:

  • A button is clicked and the event is directed to the button
  • If an event handler is set for that object, the event is triggered
  • If no event handler is set for that object, the event bubbles up (like a bubble in water) to the object's parent

The event bubbles up from parent to parent until it is handled, or until it reaches the document object.

thumb_up 0 thumb_down 0 flag 0

Create a mark[] array of boolean type. We iterate through all the characters of our string and whenever we see a character we mark it. Lowercase and uppercase are considered the same, so 'A' and 'a' are marked at index 0 and similarly 'Z' and 'z' are marked at index 25.

After iterating through all the characters we check whether all the characters are marked or not. If not, then return false as this is not a pangram, else return true.

See here: http://www.geeksforgeeks.org/pangram-checking/
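A minimal Java sketch of that marking approach (assuming plain ASCII input):

static boolean isPangram(String s) {
    boolean[] mark = new boolean[26];
    for (char c : s.toCharArray()) {
        char lower = Character.toLowerCase(c);
        if (lower >= 'a' && lower <= 'z') {
            mark[lower - 'a'] = true;      // 'A'/'a' both land at index 0
        }
    }
    for (boolean seen : mark) {
        if (!seen) return false;           // some letter never occurred
    }
    return true;
}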

thumb_up 0 thumb_down 0 flag 0

Approach 1:

Split both strings into two separate arrays of words, and then compare each word of one array with each word of the other (nested loops). However, this is computationally expensive at O(n*m).

Approach 2:

Add all the elements in the first array to a hashmap and then scan the second array to see if each of the elements exists in the hashmap. Since access time to a hashmap is O(1), this will be O(n+m) time complexity.

Approach 3:

Sort both of the arrays in O(nlogn) and then compare the items in O(n+m) which would give you O(nlogn) in total.
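A minimal Java sketch of Approach 2, assuming the task is to find the words two sentences have in common (a HashSet gives the same O(1) lookups as a HashMap here):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class CommonWords {
    static List<String> commonWords(String a, String b) {
        Set<String> seen = new HashSet<>(Arrays.asList(a.split("\\s+")));  // O(n) to build
        List<String> common = new ArrayList<>();
        for (String word : b.split("\\s+")) {                              // O(m) to scan
            if (seen.contains(word)) {
                common.add(word);
            }
        }
        return common;
    }
}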

thumb_up 2 thumb_down 1 flag 0

class GfG {
    public int removeDuplicates(int[] A) {
        int length = A.length;
        if (length == 0 || length == 1)
            return length;
        int i = 1;
        for (int j = 1; j < length; j++) {
            if (A[j] != A[j - 1]) {
                A[i] = A[j];
                i++;
            }
        }
        if (i < length)
            A[i] = '\0';  // optionally clear the slot after the last unique element
        return i;
    }
}

thumb_up 2 thumb_down 0 flag 0

Algorithm:
1) Declare a character stack S.
2) Now traverse the expression string exp.
a) If the current character is a starting bracket ('(' or '{' or '[') then push it to stack.
b) If the current character is a closing bracket (')' or '}' or ']') then pop from the stack; if the popped character is the matching starting bracket then fine, else the parentheses are not balanced.
3) After complete traversal, if there is some starting bracket left in stack then "not balanced"

See here: http://www.geeksforgeeks.org/check-for-balanced-parentheses-in-an-expression/
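A minimal Java sketch of that stack-based check:

import java.util.ArrayDeque;
import java.util.Deque;

class BalancedBrackets {
    static boolean isBalanced(String exp) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : exp.toCharArray()) {
            if (c == '(' || c == '{' || c == '[') {
                stack.push(c);                         // starting bracket: push
            } else if (c == ')' || c == '}' || c == ']') {
                if (stack.isEmpty()) return false;     // nothing left to match
                char open = stack.pop();
                if ((c == ')' && open != '(') ||
                    (c == '}' && open != '{') ||
                    (c == ']' && open != '[')) {
                    return false;                      // mismatched pair
                }
            }
        }
        return stack.isEmpty();                        // leftover starting brackets => not balanced
    }
}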

thumb_up 0 thumb_down 0 flag 0

Algorithm:

  1. Split the string on whitespace and store the parts in an array (the array will have 3 elements, say: day, month, year).
  2. Convert the day (and the year) from the array to int.
  3. Check the day:

            a) If the day is less than the last day of that month, simply increase the day by 1.

            b) If the day is 31 and the month is Dec, replace 31 by 1, Dec by Jan and increase the year by 1.

            c) If the day is 28 or 29 and the month is Feb, check for a leap year:

                 i) if it is a leap year and the day is 28, increase the day to 29;

                 ii) otherwise (day 28 in a non-leap year, or day 29 in a leap year), replace the day by 1 and Feb by Mar.

            d) If the day is 30 or 31, check the number of days in the month and, if it is the last day of that month, replace the day by 1 and move to the next month.
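A much simpler alternative is to let java.time do the work; this sketch assumes a hypothetical input format of "dd MMM yyyy" (e.g. "28 Feb 2024"):

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

class NextDay {
    static String nextDay(String input) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("dd MMM yyyy", Locale.ENGLISH);
        LocalDate date = LocalDate.parse(input, fmt);
        return date.plusDays(1).format(fmt);   // month ends and leap years are handled for us
    }
}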

thumb_up 0 thumb_down 0 flag 0

Algorithm:

  1. Find the number of bits in the given binary number.
  2. initialize left and right positions by 1 and n.
  3. Do following while left 'l' is smaller than right 'r'.
    a) If bit at position 'l' is not same as bit at position 'r', then return false.
    b) Increment 'l' and decrement 'r', i.e., do l++ and r--.
  4. If we reach here, it means we didn't find a mismatching bit.

See here : http://www.geeksforgeeks.org/check-binary-representation-number-palindrome/
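A minimal Java sketch of that check (for a non-negative n):

static boolean isBinaryPalindrome(long n) {
    int bits = (n == 0) ? 1 : 64 - Long.numberOfLeadingZeros(n);   // number of bits in n
    int l = 0, r = bits - 1;                                       // positions counted from the LSB
    while (l < r) {
        if (((n >> l) & 1) != ((n >> r) & 1)) {
            return false;                                          // mismatching pair of bits
        }
        l++;
        r--;
    }
    return true;                                                   // no mismatch found
}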

thumb_up 2 thumb_down 0 flag 0

pseudo code:

  • Take two variables: one at the beginning of the array, say i and other at the end of the array, say j.
  • while(i<j)

        1.  check if element at index i is 0 or not

              a) if yes, increase i by 1

              b) if not, check for element at index j

                   i) if element at index j is 1, decrease j by 1 and repeat step b.

                   ii) if element at index j is 0, swap the elements at indexes i and j. Increase i by 1 and decrease j by 1.
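A minimal Java sketch of that two-pointer pseudo code (all 0s end up before all 1s, in place):

static void segregateZerosAndOnes(int[] arr) {
    int i = 0, j = arr.length - 1;
    while (i < j) {
        if (arr[i] == 0) {
            i++;                 // already in place on the left
        } else if (arr[j] == 1) {
            j--;                 // already in place on the right
        } else {                 // arr[i] == 1 and arr[j] == 0: swap them
            arr[i++] = 0;
            arr[j--] = 1;
        }
    }
}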

thumb_up 0 thumb_down 0 flag 0

One approach is to convert the string into a character array and swap the i-th element from the beginning with the (length - 1 - i)-th element from the end.
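A minimal Java sketch of that swap-based reversal:

static String reverse(String s) {
    char[] chars = s.toCharArray();
    for (int i = 0; i < chars.length / 2; i++) {
        char tmp = chars[i];                             // swap i with (length - 1 - i)
        chars[i] = chars[chars.length - 1 - i];
        chars[chars.length - 1 - i] = tmp;
    }
    return new String(chars);
}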

thumb_up 0 thumb_down 0 flag 0

One approach is using a HashMap. Traverse the sentence, add each word as a key and, every time the word occurs, increase the corresponding value by 1. Then print the map.
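A minimal Java sketch of that HashMap word count:

import java.util.HashMap;
import java.util.Map;

class WordFrequency {
    static Map<String, Integer> wordFrequencies(String sentence) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : sentence.split("\\s+")) {
            counts.merge(word, 1, Integer::sum);   // insert the word or bump its count by 1
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(wordFrequencies("the quick brown fox jumps over the lazy dog"));
    }
}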

thumb_up 1 thumb_down 1 flag 0

Subscript refers to the array occurrence, while index is the displacement (in number of bytes) from the beginning of the array. An index can only be modified using PERFORM, SEARCH & SET.

thumb_up 3 thumb_down 0 flag 0

E-commerce applications/sites are web applications (or mobile applications too), so they undergo all the typical test types.

  • Functional Testing
  • Usability Testing
  • Security Testing
  • Performance Testing
  • Database Testing
  • Mobile Application Testing
  • A/B testing.

Areas to test:

1 Browser compatibility

  • Lack of support for early browsers
  • Browser specific extensions
  • Browser testing should cover main platforms ( Linux, Windows, Mac etc.)

2 Page display

  • Incorrect display of pages
  • Runtime error messages
  • Poor page download time
  • Dead hyperlink, plugin dependency, font sizing, etc.

3 Session Management

  • Session expiration
  • Session storage

4 Usability

  • Non-intuitive design
  • Poor site navigation
  • Catalog navigation
  • Lack of help-support

5 Content Analysis

  • Misleading, offensive and litigious content
  • Royalty free images and copyright infringement
  • Personalization functionality
  • Availability 24/7

6 Availability

  • Denial of service attacks
  • Unacceptable levels of unavailability

7 Back-up and Recovery

  • Failure or fall over recovery
  • Backup failure
  • Fault tolerance

8 Transactions

  • Transaction Integrity
  • Throughput
  • Auditing

9 Shopping order processing and purchasing

  • Shopping cart functionality
  • Order processing
  • Payment processing
  • Order tracking

10 Internationalization

  • Language support
  • Language display
  • Cultural sensitivity
  • Regional Accounting

11 Operational business procedures

  • How well e-procedure copes
  • Observe for bottlenecks

12 System Integration

  • Data Interface format
  • Interface frequency and activation
  • Updates
  • Interface volume capacity
  • Integrated performance

13 Performance

  • Performance bottlenecks
  • Load handling
  • Scalability analysis

14 Login and Security

  • Login capability
  • Penetration and access control
  • Insecure information transmission
  • Web attacks
  • Computer viruses
  • Digital signatures

thumb_up 0 thumb_down 0 flag 0

In software development, user acceptance testing (UAT) - also called beta testing, application testing, and end user testing - is a phase of software development in which the software is tested in the "real world" by the intended audience. UAT can be done by in-house testing in which volunteers or paid test subjects use the software or, more typically for widely-distributed software, by making the test version available for downloading and free trial over the Web. The experiences of the early users are forwarded back to the developers who make final changes before releasing the software commercially.

thumb_up 0 thumb_down 0 flag 0

It is mainly used to traverse array or collection elements. Its simple structure lets you simplify code by writing for-loops that visit each element of an array/collection without explicitly expressing how to go from element to element.

Example:

              for (String element : array) {
                  System.out.println("Element: " + element);
              }

thumb_up 0 thumb_down 0 flag 0

From a capability perspective, while both can store references to objects:

  • Arrays can store primitives
  • Collections can not store primitives (although they can store the primitive wrapper classes, such as Integer etc)

One important difference, is one of usability and convenience, especially given that Collections automatically expand in size when needed:

  • Arrays - Avoid using them unless you have to
  • Collections - Use them in preference to arrays

Arrays are ultimately the only way of storing a group of primitives/references in one object, but they are the most basic option. Although arrays may give you some speed advantages, unless you need super-fast code, Collections are preferred because they have so much convenience.

Array does not have methods (no API) such as the ones provided by Collection classes.

Difference between arrays and collections:

An array is fixed in size, whereas a collection can grow in size.

An array stores homogeneous data, whereas a collection can store both homogeneous and heterogeneous data.

Arrays have no underlying data structure, whereas collections are built on underlying data structures.

Arrays are recommended where raw performance matters; collections trade some performance for convenience.

An array may use more memory than needed, because its size has to be fixed up front, while a collection grows as required.

thumb_up 1 thumb_down 0 flag 0

Marker Interface in java is an interface with no fields or methods within it. It is used to convey to the JVM that the class implementing an interface of this category will have some special behavior.

Hence, an empty interface in java is called a marker interface. In java we have the following major marker interfaces as under:

  • Serializable interface
  • Cloneable interface
  • Remote interface
  • RandomAccess interface

The marker interface can be described as a design pattern which is used by many languages to provide run-time type information about the objects. The marker interface provides a way to associate metadata with the class where the language support is not available.
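
A small illustration with a user-defined marker interface; Auditable and the classes below are hypothetical, not part of the JDK:

    // A user-defined marker interface: it declares no methods, it only tags a class.
    interface Auditable { }                      // hypothetical marker, for illustration only

    class Payment implements Auditable { }
    class Ping { }

    public class MarkerDemo {
        static void process(Object o) {
            // Code can branch on the "metadata" carried by the marker via instanceof.
            if (o instanceof Auditable) {
                System.out.println(o.getClass().getSimpleName() + " will be audited");
            } else {
                System.out.println(o.getClass().getSimpleName() + " is not audited");
            }
        }

        public static void main(String[] args) {
            process(new Payment());   // Payment will be audited
            process(new Ping());      // Ping is not audited
        }
    }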

thumb_up 0 thumb_down 0 flag 0

The static import declaration is analogous to the normal import declaration. Where the normal import declaration imports classes from packages, allowing them to be used without package qualification, the static import declaration imports static members from classes, allowing them to be used without class qualification.
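
A small example using static imports from java.lang.Math:

    // With a static import, static members can be used without the class qualifier.
    import static java.lang.Math.PI;
    import static java.lang.Math.sqrt;

    public class StaticImportDemo {
        public static void main(String[] args) {
            // Instead of Math.sqrt(...) and Math.PI:
            System.out.println(sqrt(2.0));
            System.out.println(2 * PI);
        }
    }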

thumb_up 0 thumb_down 0 flag 0

IDOC is simply a data container used to exchange information between any two processes that can understand the syntax and semantics of the data.

In other words, an IDOC is like a data file with a specified format which is exchanged between 2 systems which know how to interpret that data.

IDOC stands for " Intermediate Document"

When we execute an outbound ALE or EDI Process, an IDOC is created.

In the SAP system, IDocs are stored in the database. Every IDoc has a unique number (within a client).

thumb_up 0 thumb_down 0 flag 0

An IDOC Type, (Basic) defines the structure and format of the business document that is to be exchanged. An IDOC is an instance of an IDOC Type , just like the concept of variables and variables types in programming languages. You can define IDOC types using WE30

thumb_up 3 thumb_down 1 flag 0

List is one of the simplest and most important data structures in Python. Lists are enclosed in square brackets [ ] and each item is separated by a comma. Lists are collections of items where each item in the list has an assigned index value. A list is mutable, meaning you can change its contents. Lists have many built-in methods.

Methods of List objects are:

Calls to list methods have the list they operate on appear before the method name separated by a dot, e.g. L.reverse()

  1. Creation
L = ['yellow', 'red', 'blue', 'green', 'black']
  2. Printing
print L
returns: ['yellow', 'red', 'blue', 'green', 'black']
  3. Accessing / Indexing
L[0]  = returns 'yellow'
  4. Slicing
L[1:4]  = returns ['red', 'blue', 'green']
L[2:]   = returns ['blue', 'green', 'black']
L[:2]   = returns ['yellow', 'red']
L[-1]   = returns 'black'
L[1:-1] = returns ['red', 'blue', 'green']
  5. Length - number of items in the list
len(L)  = returns 5
  6. Sorting - sorting the list
sorted(L) = returns ['black', 'blue', 'green', 'red', 'yellow']
  7. Append - append to the end of the list
L.append("pink")

thumb_up 6 thumb_down 0 flag 0

              select DEPARTMENT, avg(SALARY) as AVGSalary from Company group by DEPARTMENT                          

thumb_up 66 thumb_down 1 flag 0

A number of possible explanations for why manhole covers are round include:

  • A round manhole cover cannot fall through its circular opening, whereas a square manhole cover could fall in if it were inserted diagonally in the hole.
  • Circular covers don't need to be rotated or precisely aligned when placing them on the opening.
  • A round manhole cover is easily moved and rolled.
  • Human beings have a roughly circular cross-section.
  • Round tubes are the strongest shape against the compression of the earth around them, so the cover of the tube would naturally be round as well.
  • It's easier to dig a circular hole.
  • Round castings are much easier to manufacture using a lathe

thumb_up 0 thumb_down 0 flag 0

Everything has a meaning in the syntax.

PUBLIC - an access modifier which means this method is accessible everywhere, i.e. within the class, outside the class, within the package and outside the package.

STATIC - means this method belongs to the class and not to an instance/object. A static method can be accessed without creating an object of the class; the JVM loads static methods at class-loading time.

VOID - is the return type; we use void in the method signature when we know the method is not going to return any value.

Simply put, void means it returns nothing.

MAIN - is the name of the method.

thumb_up 0 thumb_down 0 flag 0

One approach is to split the sentence on spaces and store each word in an array of strings. Then convert the first character of each word to uppercase and concatenate all the words (the string elements of the array) again (a sketch follows below).
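
A possible Java sketch of this approach (the method name is illustrative):

    // Splits on spaces, upper-cases the first character of each word, and re-joins them.
    public class CapitalizeWords {
        static String capitalize(String sentence) {
            String[] words = sentence.split(" ");
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < words.length; i++) {
                if (!words[i].isEmpty()) {
                    sb.append(Character.toUpperCase(words[i].charAt(0)))
                      .append(words[i].substring(1));
                }
                if (i < words.length - 1) sb.append(' ');
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(capitalize("hello world of java"));   // Hello World Of Java
        }
    }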

thumb_up 1 thumb_down 0 flag 0

"Agile Development" is an umbrella term for several iterative and incremental software development methodologies. The most popular agile methodologies include Extreme Programming (XP), Scrum, Crystal, Dynamic Systems Development Method (DSDM), Lean Development, and Feature-Driven Development (FDD).

https://www.versionone.com/agile-101/

thumb_up 10 thumb_down 1 flag 0

Suppose you are in a classroom; at that time you behave like a student. When you are in the market, you behave like a customer. When you are at home, you behave like a son or daughter. Here one person exhibits different behaviors in different situations.

thumb_up 4 thumb_down 0 flag 0

Polymorphism is derived from two Greek words: poly and morphs. The word "poly" means many and "morphs" means forms, so polymorphism means many forms. Polymorphism is not tied to any single programming language; it is one of the principles of OOP.

In Java, it is the ability of a reference variable to change behavior according to the object instance it is holding (see the sketch below).
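
A small Java illustration; the Animal/Dog/Cat classes are made up for the example:

    // A reference of the parent type changes behavior according to the object it holds.
    class Animal {
        void speak() { System.out.println("..."); }
    }
    class Dog extends Animal {
        @Override void speak() { System.out.println("Woof"); }
    }
    class Cat extends Animal {
        @Override void speak() { System.out.println("Meow"); }
    }

    public class PolymorphismDemo {
        public static void main(String[] args) {
            Animal a = new Dog();
            a.speak();          // Woof - behavior of the object actually held
            a = new Cat();
            a.speak();          // Meow - same reference variable, different behavior
        }
    }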

thumb_up 1 thumb_down 0 flag 0

3 different methods of dealing with outliers:

  1. Univariate method: This method looks for data points with extreme values on one variable.
  2. Multivariate method: Here we look for unusual combinations on all the variables.
  3. Minkowski error: This method reduces the contribution of potential outliers in the training process.

The first step is finding them. In a univariate sense, this is already a little tricky as there may be issues of masking, where the presence of one extreme outlier masks others. There is also the problem of what the underlying distribution of the variable is - e.g. income is (in nearly all countries) very right skew; height is very close to normal; an outlier on height is not an outlier on income in terms of standard deviations.

When you start looking bivariately, the possibilities grow, exponentially. Yet it still could be necessary to look. In a census of a population, a 12 year old is not an outlier and a widow is not an outlier but a 12 year old widow is one.

With multivariate data things are really hard: In one sense, every data point in a space with many dimensions is an outlier.

Then you have to decide what to do. You don't specify what methods you are using (regression? Factor analysis? etc) but in general there are several options:

1) Drop the outlier from the data set - I'd recommend this only when the data point is impossible and cannot be corrected - e.g. a man who is 12 feet tall.  However, this may be a reasonable option in other cases if you want to keep the statistics simple. Be aware that dropping an outlier when it is a possible point (a man who is 7 feet tall, a person with $10,000,000 income) may limit generalizability.

2) Transform the data. This may serve other purposes as well. It probably shouldn't be used just because there is an outlier, but it may help deal with outliers.

3) Use a robust method. e.g. Quantile regression instead of "regular" regression.

thumb_up 1 thumb_down 0 flag 0

  • Use test data for evaluation or do cross validation.
  • Add regularization terms (such as L1, L2, AIC, BIC, MDL or a probabilistic prior) to the objective function.
  • Collect more data.
  • Use ensembling methods that "average" models.
  • Choose simpler models / penalize complexity.

thumb_up 0 thumb_down 0 flag 0

PEGA is a platform which provides a systematic approach to build and deploy process-oriented and rule-based solutions and applications. Business Process Management, Customer Relationship Management, decision management and case management solutions are some examples where processes and rules are an integral part of application development. What PEGA provides to a developer is Designer Studio, which acts as an Integrated Development Environment (IDE) to build applications. The best part is that Designer Studio is web based, so a developer can work and create solutions from anywhere in the world.

thumb_up 0 thumb_down 0 flag 0

Autowiring feature of spring framework enables you to inject the object dependency implicitly. It internally uses setter or constructor injection.

Autowiring can't be used to inject primitive and string values. It works with reference only.

thumb_up 1 thumb_down 0 flag 0

Spring supports 2 types of dependency injection, they are:

1) Constructor-based dependency injection: It is accomplished when the container invokes a class constructor with a number of arguments, each representing a dependency on another class.

2) Setter-based dependency injection: It is accomplished by the container calling setter methods on your beans after invoking a no-argument constructor or a no-argument static factory method to instantiate your bean. (A plain-Java illustration of both styles follows below.)
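
The following plain-Java sketch only illustrates the two injection styles; in Spring the container performs these calls based on your bean configuration. All class names here are made up:

    // Plain-Java illustration of constructor- and setter-based injection.
    interface MessageService {
        void send(String text);
    }

    class EmailService implements MessageService {
        public void send(String text) { System.out.println("Email: " + text); }
    }

    class ConstructorClient {
        private final MessageService service;
        ConstructorClient(MessageService service) {   // constructor-based injection
            this.service = service;
        }
        void notifyUser() { service.send("hello"); }
    }

    class SetterClient {
        private MessageService service;
        void setService(MessageService service) {     // setter-based injection
            this.service = service;
        }
        void notifyUser() { service.send("hello"); }
    }

    public class DiDemo {
        public static void main(String[] args) {
            ConstructorClient c = new ConstructorClient(new EmailService());
            c.notifyUser();
            SetterClient s = new SetterClient();
            s.setService(new EmailService());
            s.notifyUser();
        }
    }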

thumb_up 3 thumb_down 0 flag 0

Dependency injection (DI) is a process whereby objects define their dependencies, that is, the other objects they work with, only through constructor arguments, arguments to a factory method, or properties that are set on the object instance after it is constructed or returned from a factory method. The container then injects those dependencies when it creates the bean. This process is fundamentally the inverse, hence the name Inversion of Control (IoC), of the bean itself controlling the instantiation or location of its dependencies on its own by using direct construction of classes, or the Service Locator pattern.

thumb_up 0 thumb_down 0 flag 0

  • toUpperCase, toLowerCase: Changes the capitalization of a string

  • substring, substringBefore, substringAfter: Gets a subset of a string

  • trim: Trims white space from a string

  • replace: Replaces characters in a string

  • indexOf, startsWith, endsWith, contains, containsIgnoreCase: Checks whether a string contains another string

  • split: Splits a string into an array

  • join: Joins a collection into a string

  • escapeXml: Escapes XML characters in a string

JSTL Functions

thumb_up 0 thumb_down 0 flag 0

The JavaServer Pages Standard Tag Library (JSTL) is a collection of useful JSP tags which encapsulates the core functionality common to many JSP applications. The JSP Standard Tag Library (JSTL) represents a set of tags to simplify the JSP development.

JSTL has support for common, structural tasks such as iteration and conditionals, tags for manipulating XML documents, internationalization tags, and SQL tags. It also provides a framework for integrating the existing custom tags with the JSTL tags.

So, JSTL (JavaServer Pages Standard Tag Library) is a library of XML tags used to provide a means of executing conditional logic, iteration, and switch block logic, as well as parsing and manipulating XML, parsing date and numerical formats and strings, building URLs, and even providing database access (which is fine for prototyping, but one downside may be that you CAN use JSTL to access a database, which might encourage some developers to lazily mix data access logic into their view layer).

For creating a JSTL application, we need to load the jstl.jar file.

thumb_up 0 thumb_down 0 flag 0

A JSP life cycle is defined as the process from its creation to its destruction. It is similar to a servlet life cycle, with an additional step required to compile the JSP into a servlet.

The following are the paths followed by a JSP −

  • Compilation
  • Initialization
  • Execution
  • Cleanup

The four major phases of a JSP life cycle are very similar to the Servlet Life Cycle. The four phases have been described below −

JSP Life Cycle

thumb_up 2 thumb_down 0 flag 0

Ant and Maven both are build tools provided by Apache. The main purpose of these technologies is to ease the build process of a project.

  1. Maven is a Framework, Ant is a Toolbox .
  2. Ant doesn't have formal conventions. You have to tell Ant exactly where to find the source, where to put the outputs, etc.
  3. Ant is procedural. You have to tell Ant exactly what to do; tell it to compile, copy, then compress, etc.
  4. Ant doesn't have a life cycle.
  5. Maven uses conventions. It knows where your source code is automatically, as long as you follow these conventions. You don't need to tell Maven where it is.
  6. Maven is declarative; All you have to do is create a pom.xml file and put your source in the default directory. Maven will take care of the rest.
  7. Maven has a lifecycle. You simply call mvn install and a sequence of steps is executed.
  8. Maven has intelligence about common project tasks. To run tests, simply execute mvn test, as long as the files are in the default location. In Ant, you would first have to tell it where the JUnit JAR file is, then create a classpath that includes the JUnit JAR, then tell Ant where it should look for test source code, write a goal that compiles the test source, and finally execute the unit tests with JUnit.

thumb_up 0 thumb_down 0 flag 0

Defect life cycle, also known as bug life cycle, is the journey a defect goes through during its lifetime. It varies from organization to organization and from project to project, as it is governed by the software testing process and also depends upon the tools used.

thumb_up 0 thumb_down 0 flag 0

Partitioning allows tables, indexes, and index-organized tables to be subdivided into smaller pieces, enabling these database objects to be managed and accessed at a finer level of granularity. Oracle provides a rich variety of partitioning strategies and extensions to address every business requirement.

thumb_up 2 thumb_down 0 flag 0

  1. A function must return a value, whereas for a stored procedure it is optional.
  2. In functions, only SELECT statements are allowed, but in stored procedures SELECT, INSERT, UPDATE and DELETE are all allowed. This means that in a function you cannot change existing data, but data alteration is allowed in stored procedures.
  3. You can call a function inside a stored procedure, but not vice versa.
  4. To call a function you can use a SELECT statement, e.g. SELECT <<function_name>> <<parameters>>, but the same is not allowed for a stored procedure. To call / execute a procedure you have to use syntax such as exec <<procedure_name>> <<parameters>>.
  5. You can grant users permission to execute a stored procedure independently of underlying table permissions.
  6. Functions can be used in JOIN statements, but procedures cannot be used for JOIN purposes.

thumb_up 0 thumb_down 0 flag 0

SQL Loader is a bulk loader utility used for moving data from external files into the Oracle Database. Its syntax is similar to that of the DB2 load utility, but comes with more options. SQL Loader supports various load formats, selective loading, and multi-table loads.

thumb_up 1 thumb_down 0 flag 0

  • GET - Requests data from a specified resource
      • requests can be cached
      • requests remain in the browser history
      • requests can be bookmarked
      • requests should never be used when dealing with sensitive data
      • requests have length restrictions
      • requests should be used only to retrieve data
  • POST - Submits data to be processed to a specified resource
      • requests are never cached
      • requests do not remain in the browser history
      • requests cannot be bookmarked
      • requests have no restrictions on data length

thumb_up 0 thumb_down 0 flag 0

A DSO (DataStore Object) is known as the storage place to keep cleansed and consolidated transaction or master data at the lowest granularity level and this data can be analyzed using the BEx query.

A DataStore Object contains key figures and the characteristic fields and data from a DSO can be updated using Delta update or other DataStore objects or even from the master data. These DataStore Objects are commonly stored in two dimensional transparent database tables.

thumb_up 0 thumb_down 0 flag 0

Three types of DSO.

1. Standard

2. Direct Update

3. Write Optimized

thumb_up 0 thumb_down 0 flag 0

- Rethink the requirements: go back to the sources of the conflicting requirements and try to understand and address them differently.
- Get all the stakeholders in one place, have them discuss and analyze the trade-offs among the conflicting requirements, and come up with a prioritization in terms of value to the project, cost, time, etc.
- Try to replace two or more conflicting requirements (functions or features) with a single one that addresses the goals of the conflicting requirements.

thumb_up 2 thumb_down 0 flag 0

  • Memory management in an operating system is a technique for handling and managing main memory.
  • It is the process of controlling memory so that running applications can be provided with blocks of memory.
  • The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and to free them for reuse when no longer needed.
  • Memory management resides in hardware, in the OS, and in programs and applications.
  • It determines how much memory is allocated to processes and decides which process will get memory at what time. It tracks whenever some memory gets freed or unallocated and updates the status accordingly.

thumb_up 0 thumb_down 10 flag 0

This can be solved using dynamic programming (DP).

thumb_up 4 thumb_down 0 flag 0

Infinite lines.

All the lines passing through the center of the rectangle will divide it into two parts of equal area.

thumb_up 11 thumb_down 0 flag 0

The responsibility of a proxy server is to contact a server on your behalf, perform the requested action and pass the response back to you.

This is useful, for example, when it is not possible to establish a direct connection with the server.

thumb_up 1 thumb_down 0 flag 0

An HTTP proxy speaks the HTTP protocol. It is made especially for HTTP connections, but it can be used for other protocols as well (which is fairly standard by now).

The browser (CLIENT) sends something like "GET http://host.tld/path HTTP/1.1" to the PROXY
and the PROXY will then forward the request to the SERVER. The SERVER will only see the PROXY as connection and answer to the PROXY just like to a CLIENT.
The PROXY receives the response and forwards it back to the CLIENT.

It is a transparent process and nearly like directly communicating with a server.
There are some additional headers that can be sent; proxies sometimes change or add things.
Some proxies, for example, include your real IP address in a special HTTP header, which can be logged on the server side or intercepted by its scripts.

              CLIENT <---> PROXY <---> SERVER                          

Related to using proxies as a security/privacy feature
As you can see in the ascii above, there is no direct communication between CLIENT and SERVER. Both parties just talk to the PROXY between them.
In modern worlds the CLIENT often is a Browser and the Server often is a Webserver.

In such an environment users often trust the PROXY to be secure and not leak their identity. However there are many possible ways to ruin this security model due to complex software frameworks running on the browser.
For example Flash or Java applets are a perfect example how a proxy connection can get broken, Flash and Java both might not care much about the proxy settings of their parent application (browser).
Another example are DNS requests which can reach the destination nameserver without PROXY depending on the PROXY and the application settings.
Another example would be cookies or your browser meta footprint which might both identify you if the webserver knows you from the past already (or meets you again without proxy).

The proxy itself needs to be trusted as it can read all the data that goes through it and on top it might even be able to break your SSL security (read up on man in the middle)

thumb_up 0 thumb_down 0 flag 0

HttpHandler is where the request train is headed. HTTP handler is the process that runs in response to a request made to an ASP.NET Web application.

There are three steps involved in creating a handler:

  1. Implement the IHttpHandler interface.
  2. Register the handler in the web.config or machine.config file.
  3. Map the file extension (*.arshad) to aspnet_isapi.dll in IIS.

The IHttpHandler interface has a ProcessRequest method and an IsReusable property which need to be implemented. ProcessRequest: in this method you write the code that produces the output for the handler. IsReusable: this property tells whether this handler can be reused or not.

You can register the handler in web.config file like this

              <httpHandlers>
                  <add verb="*" path="*.arshad" type="namespace.classname, assemblyname" />
              </httpHandlers>

thumb_up 0 thumb_down 0 flag 0

ASP.NET HTTP Modules are classes which implement the System.Web.IHttpModule interface.

An HttpModule will execute for every request to your application, regardless of extension, and is generally used for things like security, statistics, logging, etc.  So, HTTP modules let you examine incoming and outgoing requests and take action based on the request.

thumb_up 0 thumb_down 0 flag 0

1. By working on sets of data. We can insert, update or delete multiple rows at one time. Here is an example inserting multiple rows:

              INSERT INTO YourTable (col1, col2, col3, col4)
              SELECT cola, colb + Colz, colc, @X
              FROM ....
                  LEFT OUTER JOIN ...
              WHERE ...

2. This can also be done using a WHILE loop.

thumb_up 0 thumb_down 0 flag 0

Because cursors take up memory and create locks.

When you open a cursor, you are basically loading rows into memory and locking them, creating potential blocks. Then, as you cycle through the cursor, you are making changes to other tables and still keeping all of the memory and locks of the cursor open.

All of which has the potential to cause performance issues for other users.

So, as a general rule, cursors are frowned upon. Especially if that's the first solution arrived at in solving a problem.

thumb_up 0 thumb_down 0 flag 0

A cursor is a temporary work area created in the system memory when a SQL statement is executed. A cursor contains information on a select statement and the rows of data accessed by it. This temporary work area is used to store the data retrieved from the database, and manipulate this data.

thumb_up 0 thumb_down 0 flag 0

Yes, that would be called an Indexed view or Materialized view.

The first index created on a view must be a unique clustered index. After the unique clustered index has been created, you can create more non-clustered indexes. Creating a unique clustered index on a view improves query performance because the view is stored in the database in the same way a table with a clustered index is stored. The query optimizer may use indexed views to speed up the query execution. The view does not have to be referenced in the query for the optimizer to consider that view for a substitution

thumb_up 1 thumb_down 0 flag 0

1. Using PL/SQL arrays.
2. Using a simple SQL query to check whether the data already exists before inserting it into the table.

thumb_up 1 thumb_down 0 flag 0

SQL is the Structured Query Language that is used to communicate commands to a database server.

thumb_up 3 thumb_down 0 flag 0

OUTER JOIN is useful when you want to include rows that have no matching rows in the related table.

Say there are two relations customer(cid, cname) and orders(oid, cid, quantity), and you want to find the quantity of products each customer ordered, including those who made no order. If you use a natural (inner) join, the customers with no orders will be missing from the result. If you use an outer join as below:

SELECT cname, quantity
FROM customer LEFT OUTER JOIN orders
ON customer.cid = orders.cid;


You will get rows for customers with no orders as (cname, NULL). When using OUTER JOIN you use the LEFT, RIGHT or FULL keyword to specify the table(s) from which to include all rows.

thumb_up 0 thumb_down 0 flag 0

A database is an organized collection of data. It is the collection of schemas, tables, queries, reports, views, and other objects. The data are typically organized to model aspects of reality in a way that supports processes requiring information.

thumb_up 1 thumb_down 0 flag 0

Apache Hive is a data warehouse software project built on top of Apache Hadoop for providing data summarization, query, and analysis. Hive gives an SQL-like interface to query data stored in various databases and file systems that integrate with Hadoop. Traditional SQL queries must be implemented in the Mapreduce Java API to execute SQL applications and queries over distributed data. Hive provides the necessary SQL abstraction to integrate SQL-like queries (HiveQL) into the underlying Java without the need to implement queries in the low-level Java API. Since most data warehousing applications work with SQL-based querying languages, Hive aids portability of SQL-based applications to Hadoop.

thumb_up 9 thumb_down 0 flag 0

String is immutable for several reasons, here is a summary:

  • Security: parameters are typically represented as String in network connections, database connection urls, usernames/passwords etc. If it were mutable, these parameters could be easily changed.
  • Synchronization and concurrency: making String immutable automatically makes them thread safe thereby solving the synchronization issues.
  • Caching: String literals are interned, so if two references have the same literal value (a = "test" and b = "test"), only one string object is needed and both will point to the same object.
  • Class loading: String is used as arguments for class loading. If mutable, it could result in wrong class being loaded (because mutable objects change their state).

That being said, immutability of String only means you cannot change it using its public API. You can in fact bypass the normal API using reflection.
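
A small example showing that "modifying" a String produces a new object while literals are shared:

    // Any "modification" of a String produces a new object; the original is unchanged.
    public class StringImmutability {
        public static void main(String[] args) {
            String a = "test";
            String b = "test";
            System.out.println(a == b);      // true: both literals point to the same pooled object

            String c = a.concat("ing");      // returns a brand new String
            System.out.println(a);           // test    - original untouched
            System.out.println(c);           // testing
            System.out.println(a == c);      // false: different objects
        }
    }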

thumb_up 2 thumb_down 0 flag 0

              >>> import copy
              >>> c = copy.deepcopy(a)
              >>> a, c
              ({1: [1, 2, 3, 4]}, {1: [1, 2, 3, 4]})
              >>> a[1].append(5)
              >>> a, c
              ({1: [1, 2, 3, 4, 5]}, {1: [1, 2, 3, 4]})

Using copy.deepcopy() function, copies of objects can be created.

thumb_up 5 thumb_down 0 flag 0

INNER JOIN is the default join; it returns only the matching records in the two tables:

Select all records from Table A and Table B, where the join condition is met.

Example:

Lets say you have a Students table, and a Lockers table.

Each student can be assigned to a locker, so there is a LockerNumber column in the Student table. More than one student could potentially share a single locker, but especially at the beginning of the academic year, you may have some incoming students without lockers and some lockers that have no students assigned.

For the sake of this example, lets say you have 100 students, 70 of which have lockers. You have a total of 50 lockers, 40 of which have at least 1 student and 10 lockers have no student.

INNER JOIN is equivalent to "show me all students with lockers".
Any students without lockers, or any lockers without students are missing.
Returns 70 rows

thumb_up 0 thumb_down 0 flag 0

MVC is the separation of model, view and controller. It's simply a paradigm.

For example, while a table grid view should obviously present data once shown, it should not have code on where to retrieve the data from, or what its native structure (the model) is like. Likewise, while it may have a function to sum up a column, the actual summing is supposed to happen in the controller.

A 'save file' dialog (view) ultimately passes the path, once picked by the user, on to the controller, which then asks the model for the data, and does the actual saving.

This separation of responsibilities allows flexibility down the road. For example, because the view doesn't care about the underlying model, supporting multiple file formats is easier: just add a model subclass for each.

thumb_up 0 thumb_down 1 flag 0

              class GFG {
                  void reverse(int arr[], int n) {
                      // swap symmetric pairs; stop at the middle so the swaps are not undone
                      for (int i = 0; i < n / 2; i++) {
                          int temp = arr[i];
                          arr[i] = arr[n - 1 - i];
                          arr[n - 1 - i] = temp;
                      }
                      for (int i = 0; i < n; i++) {
                          System.out.println(arr[i]);
                      }
                  }
              }

              class Reverse {
                  public static void main(String args[]) {
                      int arr[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
                      GFG g = new GFG();
                      g.reverse(arr, 10);
                  }
              }

thumb_up 3 thumb_down 2 flag 0

              #!/bin/bash
              src="$1"
              dst="$2"

              while read line; do
                  for word in $line; do
                      if ! grep -q "\b$word" $dst; then
                          echo -n "$word " >> $dst
                      fi
                  done
                  echo " " >> $dst
              done < $src

thumb_up 0 thumb_down 0 flag 0

We can do this using an AVL tree (a self-balancing binary search tree).

thumb_up 1 thumb_down 0 flag 0

Shallow copy:


The variables A and B refer to different areas of memory, when B is assigned to A the two variables refer to the same area of memory. Later modifications to the contents of either are instantly reflected in the contents of other, as they share contents.

Deep Copy:


The variables A and B refer to different areas of memory, when B is assigned to A the values in the memory area which A points to are copied into the memory area to which B points. Later modifications to the contents of either remain unique to A or B; the contents are not shared.

In short, it depends on what points to what. In a shallow copy, object B points to object A's location in memory. In deep copy, all things in object A's memory location get copied to object B's memory location.
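
A small Java sketch of the difference, using a made-up Box class that holds a mutable list:

    import java.util.ArrayList;
    import java.util.List;

    // Shallow vs deep copy of an object that holds a mutable list.
    public class CopyDemo {
        static class Box {
            List<Integer> items = new ArrayList<>();

            Box shallowCopy() {
                Box b = new Box();
                b.items = this.items;                    // the same list object is shared
                return b;
            }

            Box deepCopy() {
                Box b = new Box();
                b.items = new ArrayList<>(this.items);   // contents copied into a new list
                return b;
            }
        }

        public static void main(String[] args) {
            Box a = new Box();
            a.items.add(1);

            Box shallow = a.shallowCopy();
            Box deep = a.deepCopy();

            a.items.add(2);                              // mutate the original
            System.out.println(shallow.items);           // [1, 2] - change is visible (shared contents)
            System.out.println(deep.items);              // [1]    - deep copy is unaffected
        }
    }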

thumb_up 0 thumb_down 0 flag 0

An email hosting service is an Internet hosting service that operates email servers.

Hosting options available are:

  • Website Builders
  • Shared Hosting
  • Dedicated Hosting
  • Colocated Hosting

thumb_up 3 thumb_down 0 flag 0

A driver is software that allows your computer to communicate with hardware or devices. Without drivers, the hardware you connect to your computer—for example, a video card or a printer—won't work properly.

A printer driver communicates the computer commands to the printer to perform a certain task (make printouts). A software driver interprets the computer commands to a software program to do something else.

thumb_up 0 thumb_down 0 flag 0

Assumption: consider 12:00 (h = 12, m = 0) as the reference.

The minute hand moves 360 degrees in 60 minutes (i.e. 6 degrees per minute) and the hour hand moves 360 degrees in 12 hours (i.e. 0.5 degrees per minute). So in h hours and m minutes, the hour hand will have moved (h*60 + m)*0.5 degrees and the minute hand will have moved 6*m degrees.

The angle between the hands can be found using the following formula:

angle = |((h*60 + m)*0.5) - 6*m|
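
A Java sketch of the formula; it additionally reduces the result to the smaller of the two possible angles, which the formula alone does not do:

    // Computes the angle between the hour and minute hands using the formula above.
    public class ClockAngle {
        static double angle(int h, int m) {
            h = h % 12;                                    // 12 o'clock is treated as 0
            double hourHand = (h * 60 + m) * 0.5;          // 0.5 degrees per minute
            double minuteHand = 6.0 * m;                   // 6 degrees per minute
            double diff = Math.abs(hourHand - minuteHand);
            return Math.min(diff, 360 - diff);             // report the smaller angle
        }

        public static void main(String[] args) {
            System.out.println(angle(3, 0));    // 90.0
            System.out.println(angle(12, 30));  // 165.0
        }
    }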

thumb_up 7 thumb_down 0 flag 0

  1. The browser extracts the domain name from the URL.
  2. The browser queries DNS for the IP address of the URL. Generally, the browser will have cached domains previously visited, and the operating system will have cached queries from any number of applications. If neither the browser nor the OS have a cached copy of the IP address, then a request is sent off to the system's configured DNS server. The client machine knows the IP address for the DNS server, so no lookup is necessary.
  3. The request sent to the DNS server is almost always smaller than the maximum packet size, and is thus sent off as a single packet. In addition to the content of the request, the packet includes the IP address it is destined for in its header. Except in the simplest of cases (network hubs), as the packet reaches each piece of network equipment between the client and server, that equipment uses a routing table to figure out what node it is connected to that is most likely to be part of the fastest route to the destination. The process of determining which path is the best choice differs between equipment and can be very complicated.
  4. The request is either lost (in which case it fails or is retried), or it makes it to its destination, the DNS server.
  5. If that DNS server has the address for that domain, it will return it. Otherwise, it will forward the query along to DNS server it is configured to defer to. This happens recursively until the request is fulfilled or it reaches an authoritative name server and can go no further.
  6. Assuming the DNS request is successful, the client machine now has an IP address that uniquely identifies a machine on the Internet. The web browser then assembles an HTTP request, which consists of a header and optional content. The header includes things like the specific path being requested from the web server, the HTTP version, any relevant browser cookies, etc.
  7. This HTTP request is sent off to the web server host as some number of packets, each of which is routed in the same way as the earlier DNS query. (The packets have sequence numbers that allow them to be reassembled in order even if they take different paths.) Once the request arrives at the web server, it generates a response (this may be a static page, served as-is, or a more dynamic response, generated in any number of ways). The web server software sends the generated page back to the client.


See also:

  • http://en.wikipedia.org/wiki/Dom...
  • http://en.wikipedia.org/wiki/Rou...
  • http://en.wikipedia.org/wiki/Web...
  • http://en.wikipedia.org/wiki/HTML

thumb_up 0 thumb_down 1 flag 0

For an axis-aligned rectangle, points inside it can be found by checking the constraints along the x-axis and y-axis; for a general rectangle, write the equations of the lines forming its sides and check on which side of each line the point lies (a sketch for the axis-aligned case follows below).
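
A minimal Java sketch for the axis-aligned case (names are illustrative):

    // Checks whether a point lies inside an axis-aligned rectangle by testing the
    // x and y constraints; a rotated rectangle would need per-side line-equation tests.
    public class PointInRectangle {
        static boolean inside(double x, double y,
                              double xMin, double yMin, double xMax, double yMax) {
            return x >= xMin && x <= xMax && y >= yMin && y <= yMax;
        }

        public static void main(String[] args) {
            System.out.println(inside(2, 3, 0, 0, 5, 4));   // true
            System.out.println(inside(6, 3, 0, 0, 5, 4));   // false
        }
    }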

thumb_up 3 thumb_down 0 flag 0

Using thread synchronization.

Make separate threads for tasks such as a thread for deposits, a thread for withdrawals, etc., and synchronize them all (a sketch follows below).
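
A minimal Java sketch of the idea, using synchronized methods on a made-up BankAccount class:

    // A shared account protected with synchronized methods so that concurrent
    // deposits and withdrawals cannot interleave and corrupt the balance.
    public class BankAccount {
        private int balance = 0;

        public synchronized void deposit(int amount) {
            balance += amount;
        }

        public synchronized void withdraw(int amount) {
            if (balance >= amount) {
                balance -= amount;
            }
        }

        public synchronized int getBalance() {
            return balance;
        }

        public static void main(String[] args) throws InterruptedException {
            BankAccount account = new BankAccount();
            Thread depositor = new Thread(() -> {
                for (int i = 0; i < 1000; i++) account.deposit(10);
            });
            Thread withdrawer = new Thread(() -> {
                for (int i = 0; i < 1000; i++) account.withdraw(10);
            });
            depositor.start();
            withdrawer.start();
            depositor.join();
            withdrawer.join();
            // Synchronization guarantees no lost updates, so the balance stays consistent.
            System.out.println("Final balance: " + account.getBalance());
        }
    }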

thumb_up 2 thumb_down 0 flag 0

Since segment trees are a fixed structure dividing up a space, they are well suited to 3D graphics (deciding which parts of a scene need to be rendered) and computing distance/routes on maps (easily narrowing down which nearby points of interest need to be considered in a distance sort, for example).

thumb_up 2 thumb_down 0 flag 0

Segment tree is a data structure which can be used to perform range queries and range updates. The underlying principle behind this data structure is to store certain values of ranges as a balanced binary tree and hence query and update can be performed efficiently.

A detailed explanation of this structure is available in this link - Segment tree - PEGWiki

To summarize what is written I will use the following example:
You are given an array of 'n' elements. You want to perform the following operations on the array.
1) Choose a segment [l,r] and increment all the elements in that range by 1.
2) Given a segment [l,r], obtain the sum of elements in that range.

A simple approach is to maintain an array and update each segment when an update query comes, and to calculate the sum of the range when a find sum query comes. Note, both these operations take O(n) time.

Segment tree helps in reducing each of these tasks to O(log n) time. In brief it works as follows:
Maintain a balanced binary tree. Each node of the binary tree corresponds to some consecutive range. In particular, the root corresponds to the entire [1,n] range. Its left child corresponds to the [1,n/2] range and the right child corresponds to the [n/2+1,n] range, and so on. More formally, if a node corresponds to a consecutive range [l,r], its left child corresponds to the range [l, l+(r-l)/2] and its right child corresponds to [l+(r-l)/2+1, r].

Now, at every node you will store the sum of the elements of that particular range. When an operation of type (1) comes, you will update the first node from the root which completely fits in the range and push a flag to its children which says the values have been modified. Note, you will have to update at most two nodes in any single operation of type 1 (this is just the gist; please refer to the link for full details).

Similarly, when a query of operation 2 appears, you can perform a similar process and by visiting at most two leaves you can obtain the sum (check the link for details).

Since this is a balanced binary tree, for an array of size n the height is at most O(log n), and hence we can perform the operations in O(log n). The query and update process is essentially a divide and conquer strategy (a simplified sketch follows below).
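
Below is a simplified Java sketch of a segment tree supporting range-sum queries with point updates; the range-update variant described above additionally needs the lazy flags mentioned in the text, which are omitted here for brevity:

    // Simplified segment tree: range-sum queries with point updates (no lazy propagation).
    public class SegmentTree {
        private final int n;
        private final long[] tree;   // 1-based array representation of the balanced tree

        SegmentTree(int[] a) {
            n = a.length;
            tree = new long[4 * n];
            build(1, 0, n - 1, a);
        }

        private void build(int node, int l, int r, int[] a) {
            if (l == r) { tree[node] = a[l]; return; }
            int mid = l + (r - l) / 2;
            build(2 * node, l, mid, a);
            build(2 * node + 1, mid + 1, r, a);
            tree[node] = tree[2 * node] + tree[2 * node + 1];   // node stores the sum of its range
        }

        // Set a[pos] = value.
        void update(int pos, int value) { update(1, 0, n - 1, pos, value); }

        private void update(int node, int l, int r, int pos, int value) {
            if (l == r) { tree[node] = value; return; }
            int mid = l + (r - l) / 2;
            if (pos <= mid) update(2 * node, l, mid, pos, value);
            else update(2 * node + 1, mid + 1, r, pos, value);
            tree[node] = tree[2 * node] + tree[2 * node + 1];
        }

        // Sum of a[ql..qr], inclusive.
        long query(int ql, int qr) { return query(1, 0, n - 1, ql, qr); }

        private long query(int node, int l, int r, int ql, int qr) {
            if (qr < l || r < ql) return 0;                     // no overlap with the query range
            if (ql <= l && r <= qr) return tree[node];          // node's range fully inside the query
            int mid = l + (r - l) / 2;
            return query(2 * node, l, mid, ql, qr) + query(2 * node + 1, mid + 1, r, ql, qr);
        }

        public static void main(String[] args) {
            SegmentTree st = new SegmentTree(new int[]{1, 2, 3, 4, 5});
            System.out.println(st.query(1, 3));   // 9
            st.update(2, 10);                     // array becomes {1, 2, 10, 4, 5}
            System.out.println(st.query(1, 3));   // 16
        }
    }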

thumb_up 0 thumb_down 0 flag 0

Process memory is divided into four sections for efficient working:

  • The text section is made up of the compiled program code, read in from non-volatile storage when the program is launched.
  • The data section is made up the global and static variables, allocated and initialized prior to executing the main.
  • The heap is used for the dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
  • The stack is used for local variables. Space on the stack is reserved for local variables when they are declared.

thumb_up 3 thumb_down 0 flag 0

Overlaying means "the process of transferring a block of program code or other data into internal memory, replacing what is already stored". Overlaying is a technique that allows programs to be larger than the computer's main memory. An embedded system would normally use overlays because of the limited physical memory (the internal memory of a system-on-chip) and the lack of virtual memory facilities.

Overlaying requires the programmer to split the object code into multiple completely independent sections, and the overlay manager linked into the code will load the required overlay dynamically and swap the overlays when necessary.

This technique requires the programmer to specify which overlay to load in different circumstances.

thumb_up 1 thumb_down 0 flag 0

First, it has NOTHING to do with swap. It is the process created at system startup time, and it is the first process created by the system. It then creates the init (PID = 1) process. On Linux, the swapper has nothing to do after initialization and only gets run when there is no other runnable process in the system, because it is given the lowest priority; for this reason it can also be called the idle process. On an SMP architecture each CPU may have its own idle process. A good reference for the Linux kernel is Understanding the Linux Kernel.

thumb_up 37 thumb_down 0 flag 1


Because a stateless protocol does not require the server to retain session information or status about each communications partner for the duration of multiple requests.

HTTP is a stateless protocol, which means that the connection between the browser and the server is lost once the transaction ends.

thumb_up 6 thumb_down 0 flag 0

A web browser serves as a good practical illustration of the OSI model and the TCP/IP protocol suite:

  • The web browser serves as the user interface for accessing a website. The browser itself does not function at the Application layer. Instead, the web browser invokes the Hyper Text Transfer Protocol (HTTP) to interface with the remote web server, which is why http:// precedes every web address.
  • The Internet can provide data in a wide variety of formats, a function of the Presentation layer. Common formats on the Internet include HTML, XML, PHP, GIF, and JPEG. Any encryption or compression mechanisms used on a website are also considered a Presentation layer function.
  • The Session layer is responsible for establishing, maintaining, and terminating the session between devices, and determining whether the communication is half-duplex or full-duplex. However, the TCP/IP stack generally does not include session-layer protocols, and is reliant on lower-layer protocols to perform these functions.
  • HTTP utilizes the TCP Transport layer protocol to ensure the reliable delivery of data. TCP establishes and maintains a connection from the client to the web server, and packages the higher-layer data into segments. A sequence number is assigned to each segment so that data can be reassembled upon arrival.
  • The best path to route the data between the client and the web server is determined by IP, a Network layer protocol. IP is also responsible for the assigned logical addresses on the client and server, and for encapsulating segments into packets.
  • Data cannot be sent directly to a logical address. As packets travel from network to network, IP addresses are translated to hardware addresses, which are a function of the Data-Link layer. The packets are encapsulated into frames to be placed onto the physical medium.
  • The data is finally transferred onto the network medium at the Physical layer, in the form of raw bits. Signaling and encoding mechanisms are defined at this layer, as is the hardware that forms the physical connection between the client and the web server.

thumb_up 2 thumb_down 0 flag 0

OSI model has seven layers:

Let's take an example. Suppose you're carrying a large amount of goods from one place to another, travelling by sea using a number of ships.

1. Physical Layer: Physical layer deals with the actual connectivity between the source and destination. If you're transferring data to another machine through LAN, physical layer is the Ethernet connection. The water in the sea connecting your place and destination is the physical layer in our case.

2. Data Link layer: In this layer data is broken down into pieces. That is your goods will be separated in various categories, say expensive goods, fragile and non fragile. This layer deals with breaking the data being sent and transmitting it through the physical layer. So goods being categorized and sent through the sea.

3. Network layer: Here the data being sent is addressed and organized for routing (e.g. using IP). So in our case the different compartments of the ship where you'll keep these goods are the network layer. It will also decide which generators and engines to use while travelling (the protocols).

4. Transport Layer: Transport layer gives you the best route, security and safest path to the destination. The transportation of data takes place here. So think of the coast guard as transport layer. He'll look after the connection, inform you if there's any danger and tell you the best route to your destination.

5. Session layer: The best way to remember a session is to think of it as a Hangouts or Yahoo Messenger chat. When two people start communicating, a session is created; as soon as one ends the chat or disconnects, the session is broken. The session layer creates a new session for a pack of data. So the time from when our ship leaves our dock until it reaches the destination is one whole session, assuming there are no problems during that interval. If your ship gets hijacked by pirates, the session is terminated. If the destination blows up, the session is terminated.
Sessions play an important role in online transactions. In the very last step, where you need to enter an OTP or your secure password to authorize the transaction, if you sit idle for more than 2-3 minutes without any activity, the session will terminate itself and the transaction will be unsuccessful.

6. Presentation layer: There are times you send various file formats through the network, like images or videos. The presentation layer deals with all these different formats and converts the data into a readable format for the destination. Think of a captain who is in charge of people from various races, countries and languages. He has to make sure everyone reaches the destination and can perform their work over there.

7. Application layer: Many people think of browsers as soon as they hear application layer, but the application layer isn't the browser or application you're using; it is the set of common protocols those applications use (such as HTTP). In our analogy, the application layer is not the ships themselves but the common fuel and procedures the ships run on.

thumb_up 2 thumb_down 0 flag 0

If a method does not have a definition (body), it is called an abstract method, and it must be declared using the keyword 'abstract'.

If a class contains one or more abstract methods, it is called an abstract class and it must also be declared using the keyword 'abstract'.

We can create a reference variable of an abstract class type, but we cannot create an object of an abstract class; it can only be extended by a subclass (see the sketch below).
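
A small Java sketch; Shape and Circle are made-up names:

    // An abstract class cannot be instantiated; the abstract method gets its
    // definition in the subclass, and a superclass reference can hold the subclass object.
    abstract class Shape {
        abstract double area();                 // no body: an abstract method

        void describe() {                       // abstract classes may also have concrete methods
            System.out.println("Area = " + area());
        }
    }

    class Circle extends Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        @Override double area() { return Math.PI * r * r; }
    }

    public class AbstractDemo {
        public static void main(String[] args) {
            Shape s = new Circle(1.0);          // reference of abstract type, concrete object
            s.describe();                       // Area = 3.141592653589793
            // Shape t = new Shape();           // would not compile: Shape is abstract
        }
    }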

thumb_up 1 thumb_down 0 flag 0

Advantages

  • The main advantage is synchronization (a Hashtable is thread safe).
  • In many situations, hash tables turn out to be more efficient than search trees or any other table lookup structure. For this reason, they are widely used in many kinds of software, particularly for associative arrays, database indexing, caches and sets.

Disadvantages

  • Hash collisions are practically unavoidable when hashing a random subset of a large set of possible keys.
  • Hash tables become quite inefficient when there are many collisions.
  • A Hashtable does not allow null keys or values, unlike a HashMap.

thumb_up 7 thumb_down 0 flag 0

Advantages of BST are:

  • we can always keep the cost of insert(), delete(), lookup() to O(logN) where N is the number of nodes in the tree - so the benefit really is that lookups can be done in logarithmic time which matters a lot when N is large.
  • We have an ordering of keys stored in the tree. Any time we need to traverse the increasing (or decreasing) order of keys, we just need to do the in-order (and reverse in-order) traversal on the tree.
  • We can implement order statistics with binary search tree - Nth smallest, Nth largest element. This is because it is possible to look at the data structure as a sorted array.
  • We can also do range queries - find keys between N and M (N <= M).
  • BST can also be used in the design of memory allocators to speed up the search of free blocks (chunks of memory), and to implement best fit algorithms where we are interested in finding the smallest free chunk with size greater than or equal to size specified in allocation request.

Disadvantages of using BST:

  • The main disadvantage is that we should always use a balanced binary search tree (AVL tree, Red-Black tree, Splay tree); otherwise the cost of operations may not be logarithmic and can degenerate into the equivalent of a linear search on an array.

thumb_up 3 thumb_down 0 flag 0

  • == -> is a reference comparison, i.e. both objects point to the same memory location
  • .equals() -> evaluates to the comparison of values in the objects (see the example below)
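
A small example:

    public class EqualsDemo {
        public static void main(String[] args) {
            String a = new String("hello");
            String b = new String("hello");

            System.out.println(a == b);        // false: two distinct objects in memory
            System.out.println(a.equals(b));   // true:  same character content
        }
    }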

thumb_up 6 thumb_down 0 flag 0

A Media Access Control address (MAC address) is a unique identifier assigned to network interfaces for communication at the data link layer.

A MAC address is a 48 bit identifier assigned to every device connected to a network. They are normally written as 6 octets displayed as hexadecimal digits, for example 08:01:27:0E:25:B8. The first 3 octets identify the manufacturer, the last 3 are to ensure that each address for that manufacturer is unique.

MAC addresses are used for numerous network technologies and most IEEE 802 network technologies, including Ethernet and wifi. Logically, MAC addresses are used in the Media Access Control protocol sub-layer of the OSI reference model.

MAC addresses are most often assigned by the manufacturer of a network interface controller (NIC) and are stored in its hardware.

thumb_up 1 thumb_down 0 flag 0

A hash table (hash map) is a data structure which implements an associative array abstract data type, i.e. a structure that can map keys to values. A hash table uses a hash function to compute an index into an array of buckets or slots, from which the desired value can be found. Put differently, a hash table is an array (usually a sparse one) of buckets which contain key/value pairs. The maximum size of this array is typically smaller than the number of items in the set of possible values for the type of data being stored in the hash table.

  1. Hash tables are used to quickly store and retrieve data (or records).
  2. Records are stored in buckets using hash keys
  3. Hash keys are calculated by applying a hashing algorithm to a chosen value contained within the record. This chosen value must be a common value to all the records.
  4. Each bucket can have multiple records which are organized in a particular order.

thumb_up 2 thumb_down 0 flag 0

MAC addresses are organized based on a manufacturer code plus a number assigned from that space, effectively random. It contains NO information about where the system is, and how it is connected to the Internet. That makes it impossible to build a routing table other than a complete map that lists EVERY single MAC address ever seen.

This is the equivalent of a completely flat, bridged network.

From harsh experience, flat networks break down under the load of broadcast traffic somewhere under about 100,000 systems, even if you manage them carefully. That would constrain the growth of the Internet to where we were somewhere in the late '80s.

Technically, MAC addresses and IP addresses operate on different layers of the Internet protocol suite. MAC addresses are used to identify machines within the same broadcast network on layer 2, while IP addresses are used on layer 3 to identify machines across different networks.

Even if your computer has an IP address, it still needs a MAC address to find other machines on the same network (especially the router/gateway to the rest of the network/internet), since every layer is using underlying layers.

thumb_up 1 thumb_down 0 flag 0

INNER JOIN: Select all records from Table A and Table B, where the join condition is met.

LEFT (OUTER) JOIN: Select all records from Table A, along with records from Table B for which the join condition is met (if at all).

RIGHT (OUTER) JOIN: Select all records from Table B, along with records from Table A for which the join condition is met (if at all).

FULL (OUTER) JOIN: Select all records from Table A and Table B, regardless of whether the join condition is met or not.

for more information refer: http://www.dofactory.com/sql/join

thumb_up 0 thumb_down 0 flag 0

  • Binary Search Tree: Used in many search applications where data is constantly entering/leaving, such as the map and set types in many languages' libraries.
  • Heaps: Used in implementing efficient priority-queues, which in turn are used for scheduling processes in many operating systems, Quality-of-Service in routers, and A* (path-finding algorithm used in AI applications, including robotics and video games). Also used in heap-sort.
  • Hash Trees: used in p2p programs and specialized image-signatures in which a hash needs to be verified, but the whole file is not available.
  • Syntax Tree: Constructed by compilers and (implicitly) calculators to parse expressions.
  • T-tree: Though most databases use some form of B-tree to store data on the drive, databases which keep all (most) their data in memory often use T-trees to do so.

thumb_up 1 thumb_down 0 flag 0

An M-ary tree is a rooted tree in which each node has no more than M children. It is also sometimes known as a k-way tree or a k-ary tree. A binary tree is the special case where M = 2.

An M-ary tree results in better memory access patterns, because each node stores its M child links/keys next to each other, and the height of the tree is shorter (Wikipedia gives the height $h$ of a complete k-ary tree as $h=\left\lceil\log_k (k - 1) + \log_k (n) - 1\right\rceil$, which is asymptotically the same for any constant $k$); traversal might also jump around less, since leaf nodes can contain multiple in-order keys.

M-ary trees are used in quad-trees and other space-partitioning trees, where dividing space using only two nodes per level would make the logic unnecessarily complex, and in the B-trees used in many databases, where the limiting factor is not how many comparisons are done at each level but how many nodes can be loaded from the hard drive at once.

thumb_up 3 thumb_down 0 flag 0

The idea is to maintain two values in recursive calls
1) Maximum root to leaf path sum for the subtree rooted under current node.
2) The maximum path sum between leaves (desired output).

For every visited node X, we find the maximum root to leaf sum in left and right subtrees of X. We add the two values with X->data, and compare the sum with maximum path sum found so far.

For implementation see http://www.geeksforgeeks.org/find-maximum-path-sum-two-leaves-binary-tree/
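
A Java sketch of the described recursion (not the linked implementation itself); the node structure and names are illustrative, and it assumes the tree has at least two leaves:

    // Each call returns the maximum root-to-leaf sum of its subtree and updates
    // a running best for the maximum leaf-to-leaf path sum.
    public class MaxLeafToLeafPath {
        static class Node {
            int data;
            Node left, right;
            Node(int data) { this.data = data; }
        }

        static int best;   // maximum path sum between two leaves seen so far

        // Returns the maximum sum on a path from 'node' down to some leaf.
        static int maxRootToLeaf(Node node) {
            if (node == null) return 0;
            if (node.left == null && node.right == null) return node.data;

            if (node.left != null && node.right != null) {
                int ls = maxRootToLeaf(node.left);
                int rs = maxRootToLeaf(node.right);
                best = Math.max(best, ls + rs + node.data);   // leaf-to-leaf path turning at this node
                return Math.max(ls, rs) + node.data;
            }
            // Only one child: a leaf-to-leaf path cannot turn at this node.
            Node child = (node.left != null) ? node.left : node.right;
            return maxRootToLeaf(child) + node.data;
        }

        public static void main(String[] args) {
            Node root = new Node(-15);
            root.left = new Node(5);
            root.right = new Node(6);
            root.left.left = new Node(-8);
            root.left.right = new Node(1);
            root.left.left.left = new Node(2);
            root.left.left.right = new Node(6);

            best = Integer.MIN_VALUE;
            maxRootToLeaf(root);
            System.out.println(best);   // 4
        }
    }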

thumb_up 15 thumb_down 1 flag 0

  1. An executing instance of a program is called a process while a thread is a subset of the process.
  2. Both processes and threads are independent sequences of execution. The typical difference is that threads within the same process run in a shared memory space, while processes run in separate memory spaces.
  3. Threads are not independent of one another the way processes are; as a result, threads share their code section, data section and OS resources (such as open files and signals) with the other threads of their process. But, like a process, a thread has its own program counter (PC), register set, and stack space.
  4. Each process provides the resources needed to execute a program, while a thread is an entity within a process that can be scheduled for execution.
  5. A process has its own virtual address space, while all threads of a process share its virtual address space and system resources.
  6. Threads have direct access to the data segment of their process; processes have their own copy of the data segment of the parent process.
  7. Threads can directly communicate with other threads of their process; processes must use inter-process communication to communicate with sibling processes.
  8. Threads have almost no overhead; processes have considerable overhead.
  9. New threads are easily created; new processes require duplication of the parent process.
  10. Threads can exercise considerable control over threads of the same process; processes can only exercise control over child processes.
  11. Changes to the main thread (cancellation, priority change, etc.) may affect the behavior of the other threads of the process; changes to the parent process do not affect child processes.

Advantages of Thread over Process
1. Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately.

2. Faster context switch: Context-switch time between threads is lower than between processes; a process context switch imposes more overhead on the CPU.

3. Effective utilization of a multiprocessor system: If we have multiple threads in a single process, we can schedule them on multiple processors, which makes process execution faster.

4. Resource sharing: Resources like code, data and files can be shared among all threads within a process.
Note: the stack and registers can't be shared among threads; each thread has its own stack and registers.

5. Communication: Communication between multiple threads is easier because threads share a common address space, while two processes have to use a specific inter-process communication technique.

6. Enhanced throughput of the system: If a process is divided into multiple threads and each thread's function is treated as one job, the number of jobs completed per unit time increases, which increases the throughput of the system.

For detailed explanation see article
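
As a small illustration (not part of the original answer), the following Java sketch shows two threads of one process sharing a heap object, something separate processes could only do through IPC; the Counter class and the counts are made up for the example.

// Two threads of the same process sharing one heap object (illustrative sketch).
class Counter {
    private int value = 0;
    synchronized void increment() { value++; }   // synchronized: both threads touch shared state
    synchronized int get() { return value; }
}

public class SharedCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();           // lives in the process's shared address space
        Runnable work = () -> {
            for (int i = 0; i < 100000; i++) counter.increment();
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get());         // 200000: both threads updated the same object
    }
}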

thumb_up 0 thumb_down 0 flag 0

thumb_up 21 thumb_down 0 flag 0

Both arrays and linked lists can be used to store linear data of similar types, but each has some advantages and disadvantages over the other.

Following are the points in favour of Linked Lists.

(1) The size of an array is fixed: we must know the upper limit on the number of elements in advance. Also, the allocated memory is generally equal to that upper limit irrespective of usage, and in practice the upper limit is rarely reached.

(2) Inserting a new element in an array of elements is expensive, because room has to be created for the new element, and to create that room the existing elements have to be shifted.

For example, suppose we maintain a sorted list of IDs in an array id[].

id[] = [1000, 1010, 1050, 2000, 2040, …..].

And if we want to insert a new ID 1005, then to maintain the sorted order, we have to move all the elements after 1000 (excluding 1000).

Deletion is also expensive with arrays unless some special techniques are used. For example, to delete 1010 in id[], everything after 1010 has to be moved.

So a linked list provides the following two advantages over arrays:
1) Dynamic size
2) Ease of insertion/deletion
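
A small illustrative Java sketch of the insertion cost described above (the array contents and class name are made up): inserting 1005 into the sorted array forces a shift of every later element, while a linked list only relinks nodes once the position is found.

import java.util.Arrays;
import java.util.LinkedList;

// Illustrative sketch (not from the original answer): insertion into a sorted array vs a linked list.
public class InsertCostDemo {
    public static void main(String[] args) {
        int[] id = {1000, 1010, 1050, 2000, 2040, 0}; // one spare slot at the end
        int newId = 1005, pos = 1;                    // 1005 belongs at index 1

        // Shift elements to the right to make room (the O(n) cost described above).
        for (int i = id.length - 1; i > pos; i--) {
            id[i] = id[i - 1];
        }
        id[pos] = newId;
        System.out.println(Arrays.toString(id)); // [1000, 1005, 1010, 1050, 2000, 2040]

        // With a linked list, insertion only relinks nodes once the position is reached.
        LinkedList<Integer> list = new LinkedList<>(Arrays.asList(1000, 1010, 1050, 2000, 2040));
        list.add(1, 1005);
        System.out.println(list);                // [1000, 1005, 1010, 1050, 2000, 2040]
    }
}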

thumb_up 3 thumb_down 0 flag 0

  1. C is easier to read.
  2. There is no operator overloading in C, so there is very little going on under the hood.
  3. C is friendlier towards powerful command line debuggers such as gdb and dbx as compared to C++.
  4. C requires very little runtime support.
  5. C is more efficient than C++ due to no need for virtual method table lookups.
  6. For certain domains of programming, particularly kernel programming and device driver development, C is by far the most accepted language.

thumb_up 0 thumb_down 0 flag 0

First Come First Serve

thumb_up 4 thumb_down 0 flag 0

In the class-based object-oriented programming paradigm, "object" refers to a particular instance of a class where the object can be a combination of variables, functions, and data structures.

An object is the physical (runtime) representation of a class: it occupies memory and holds the instance variables declared in the class. An object is an instance of the class, and every Java object has an identity hash code (e.g. a value such as j2c4a, as shown by the default toString()).

thumb_up 16 thumb_down 1 flag 0

0 1 2 3 4

thumb_up 24 thumb_down 0 flag 0

Stack. Because of its LIFO (Last In, First Out) property, it remembers its 'caller' and so knows where to return to when the function has to return. Recursion makes use of the system stack for storing the return addresses of the function calls. Every recursive function has an equivalent iterative (non-recursive) function.

thumb_up 2 thumb_down 1 flag 0

Stack

thumb_up 0 thumb_down 0 flag 0

Complexity of the code will be O(n).

thumb_up 2 thumb_down 0 flag 0

Threads do not share stack.

thumb_up 0 thumb_down 0 flag 0

This program will show an error because a semicolon (;) is used after the statement case j.

thumb_up 0 thumb_down 0 flag 0

#include <stdio.h>

void strrev(char *p) {
  char *q = p;
  while(q && *q) ++q;
  for(--q; p < q; ++p, --q)
    *p = *p ^ *q,
    *q = *p ^ *q,
    *p = *p ^ *q;
}

int main(int argc, char **argv) {
  do {
    printf("%s ",  argv[argc-1]);
    strrev(argv[argc-1]);
    printf("%s\n", argv[argc-1]);
  } while(--argc);

  return 0;
}

thumb_up 0 thumb_down 0 flag 0

import java.util.Scanner;
import java.util.*;

class GfG
{
void reverse(String str)
{
char arr[] = str.toCharArray();
int begin = 0;
int end = arr.length-1;
char temp;
while(begin<end)
{
temp = arr[begin];
arr[begin] = arr[end];
arr[end] = temp;
end--;
begin++;
}

        str = new String(arr);
System.out.println(str);
removeDuplicate(str);
return ;
}

    void removeDuplicate(String str)
{
LinkedHashSet<Character> lhs = new LinkedHashSet<>();
int i=0;
while(i<str.length())
{
lhs.add(str.charAt(i));
i++;
}

        for(Character ch : lhs)
System.out.print(ch);
}
}

class ReverseString
{
public static void main(String args[])
{
Scanner sc = new Scanner(System.in);
String str = sc.nextLine();
GfG g = new GfG();
g.reverse(str);
}
}

thumb_up 2 thumb_down 0 flag 0

An interface in java is a blueprint of a class. It has static constants and abstract methods.

The interface in Java is a mechanism to achieve abstraction. A Java interface can contain only abstract methods, not method bodies. It is used to achieve abstraction and multiple inheritance in Java.

Java Interface also represents IS-A relationship.

It cannot be instantiated just like abstract class.

Why use Java interface?

There are mainly three reasons to use interface. They are given below.

  • It is used to achieve full abstraction.
  • Through interfaces, we can support the functionality of multiple inheritance.
  • It can be used to achieve loose coupling.

The java compiler adds public and abstract keywords before the interface method and public, static and final keywords before data members.

In other words, interface fields are public, static and final by default, and methods are public and abstract.
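
A small illustrative sketch of these rules (the Printable and Report names are made up for this example):

// Illustrative sketch of an interface (names are hypothetical).
interface Printable {
    int MIN_COPIES = 1;      // implicitly public, static and final
    void print();            // implicitly public and abstract
}

class Report implements Printable {
    @Override
    public void print() {
        System.out.println("printing report, min copies = " + MIN_COPIES);
    }
}

class InterfaceDemo {
    public static void main(String[] args) {
        Printable p = new Report();  // program to the interface (loose coupling)
        p.print();
    }
}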

thumb_up 6 thumb_down 0 flag 0

Templates are a feature of the C++ programming language that allow functions and classes to operate with generic types. This allows a function or class to work on many different data types without being rewritten for each one.

thumb_up 2 thumb_down 0 flag 0

  1. The browser extracts the domain name from the URL.
  2. The browser queries DNS for the IP address of the URL. Generally, the browser will have cached domains previously visited, and the operating system will have cached queries from any number of applications. If neither the browser nor the OS have a cached copy of the IP address, then a request is sent off to the system's configured DNS server. The client machine knows the IP address for the DNS server, so no lookup is necessary.
  3. The request sent to the DNS server is almost always smaller than the maximum packet size, and is thus sent off as a single packet. In addition to the content of the request, the packet includes the IP address it is destined for in its header. Except in the simplest of cases (network hubs), as the packet reaches each piece of network equipment between the client and server, that equipment uses a routing table to figure out what node it is connected to that is most likely to be part of the fastest route to the destination. The process of determining which path is the best choice differs between equipment and can be very complicated.
  4. The request is either lost (in which case it fails or is retried), or makes it to its destination, the DNS server.
  5. If that DNS server has the address for that domain, it will return it. Otherwise, it will forward the query along to DNS server it is configured to defer to. This happens recursively until the request is fulfilled or it reaches an authoritative name server and can go no further. (If the authoritative name server doesn't recognize the domain, the response indicates failure and the browser generally gives an error like "Can't find the server at www.lkliejafadh.com".) The response makes its way back to the client machine much like the request traveled to the DNS server.
  6. Assuming the DNS request is successful, the client machine now has an IP address that uniquely identifies a machine on the Internet. The web browser then assembles an HTTP request, which consists of a header and optional content. The header includes things like the specific path being requested from the web server, the HTTP version, any relevant browser cookies, etc. In the case in question (hitting Enter in the address bar), the content will be empty. In other cases, it may include form data like a username and password (or the content of an image file being uploaded, etc.)
  7. This HTTP request is sent off to the web server host as some number of packets, each of which is routed in the same way as the earlier DNS query. (The packets have sequence numbers that allow them to be reassembled in order even if they take different paths.) Once the request arrives at the web server, it generates a response (this may be a static page, served as-is, or a more dynamic response, generated in any number of ways). The web server software sends the generated page back to the client.
  8. Assuming the response is HTML and not an image or data file, the browser parses the HTML to render the page. Part of this parsing and rendering process may be the discovery that the web page includes images or other embedded content that is not part of the HTML document. The browser will then send off further requests (either to the original web server or different ones, as appropriate) to fetch the embedded content, which will then be rendered into the document as well. (A minimal code sketch of the lookup and request steps follows this list.)
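
The following Java sketch illustrates steps 2 and 6-7 in a very simplified form: a DNS lookup via InetAddress and a plain HTTP GET via HttpURLConnection (example.com is only a placeholder host).

import java.net.HttpURLConnection;
import java.net.InetAddress;
import java.net.URL;

// Simplified sketch of the DNS lookup and HTTP request described above.
public class FetchDemo {
    public static void main(String[] args) throws Exception {
        // Step 2: resolve the domain name to an IP address (browser/OS cache, then DNS).
        InetAddress address = InetAddress.getByName("example.com");
        System.out.println("Resolved to " + address.getHostAddress());

        // Steps 6-7: assemble and send an HTTP GET request, then read the status code.
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/").openConnection();
        conn.setRequestMethod("GET");
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}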

See also:

DNS

Routing

Web Server

Html

thumb_up 14 thumb_down 1 flag 0

"lvalue" and "rvalue" are so named because of where each of them can appear in an assignment operation.  An lvalue can appear on the left side of an assignment operator, whereas an rvalue can appear on the right side.

As an example:

  1. int a;
  2. a = 3;

In the second line, "a" is the lvalue, and "3" is the rvalue.

And in this example:

  1. int a, b;
  2. a = 4;
  3. b = a;

In the third line of that example, "b" is the lvalue, and "a" is the rvalue, whereas it was the lvalue in line 2.  This illustrates an important point: An lvalue can also be an rvalue, but an rvalue can never be an lvalue.

Another definition of lvalue is "a place where a value can be stored." This means certain pointer expressions are also valid lvalues:

  1. int *p, *q;
  2. p = 65000; /* valid lvalue assignment */
  3. p + 4 = 18; /* invalid - "p + 4" is not an lvalue */
  4. q = p + 4; /* valid - "p + 4" is an rvalue */
  5. *(p + 4) = 18; /* valid - dereferencing pointer expression gives an lvalue */

Remembering the mnemonic, that lvalues can appear on the left of an assignment operator while rvalues can appear on the right, will help you keep it straight.

Here's a fragment that's also valid:

  1. int x, y;
  2. (x < y ? y : x) = 0;

It's valid because the ternary expression preserves the "lvalue-ness" of both its possible return values.

thumb_up 4 thumb_down 0 flag 0

The client-server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs which share their resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers which await incoming requests. Examples of computer applications that use the client–server model are Email, network printing and world wide web.

Clients and servers exchange messages in a request-response messaging pattern. The client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communication protocol. All client-server protocols operate in the application layer. The application layer protocol defines the basic patterns of the dialogue.
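
As a minimal illustration of this request-response pattern (not part of the original answer), here is a Java sketch in which a server and a client run in the same program over a loopback TCP socket; the port number and message format are arbitrary choices.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal client-server request/response sketch over TCP (port and protocol are made up).
public class ClientServerDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(5050);

        // Server: waits for a request and returns a response.
        Thread serverThread = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                String request = in.readLine();
                out.println("echo: " + request);   // the response
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        serverThread.start();

        // Client: initiates the session and sends a request.
        try (Socket s = new Socket("localhost", 5050);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println("hello");                  // the request
            System.out.println(in.readLine());     // prints "echo: hello"
        }

        serverThread.join();
        server.close();
    }
}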

thumb_up 2 thumb_down 0 flag 0

The time complexity of TOH can be calculated by formulating number of moves.

We need to move the top N-1 disks from Source to Auxiliary and later from Auxiliary to Destination, i.e. the top N-1 disks are moved twice, plus one final move of the largest disk from Source to Destination. Mathematically, this can be defined recursively:

$M_N = 2\,M_{N-1} + 1$

Solving this recurrence gives $M_N = 2^N - 1$, which is exponential.

The minimal number of moves required to solve a Tower of Hanoi puzzle is $2^n - 1$, where n is the number of disks.

See Tower of Hanoi
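
For reference, a standard recursive sketch in Java (the method and peg names are the usual textbook ones, not taken from the linked article):

// Classic recursive Tower of Hanoi: moves n disks from 'from' to 'to' via 'aux'.
public class Hanoi {
    static int moves = 0;

    static void solve(int n, char from, char to, char aux) {
        if (n == 0) return;
        solve(n - 1, from, aux, to);            // move the top n-1 disks out of the way
        System.out.println("Move disk " + n + " from " + from + " to " + to);
        moves++;
        solve(n - 1, aux, to, from);            // move the n-1 disks onto the largest disk
    }

    public static void main(String[] args) {
        solve(3, 'A', 'C', 'B');
        System.out.println("Total moves: " + moves); // 2^3 - 1 = 7
    }
}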

thumb_up 0 thumb_down 0 flag 0

// Java program to print all permutations of a
// given string.
public class Permutation
{
public static void main(String[] args)
{
String str = "ABC";
int n = str.length();
Permutation permutation = new Permutation();
permutation.permute(str, 0, n-1);
}

    /**
* permutation function
* @param str string to calculate permutation for
* @param l starting index
* @param r end index
*/
private void permute(String str, int l, int r)
{
if (l == r)
System.out.println(str);
else
{
for (int i = l; i <= r; i++)
{
str = swap(str,l,i);
permute(str, l+1, r);
str = swap(str,l,i);
}
}
}

    /**
* Swap Characters at position
* @param a string value
* @param i position 1
* @param j position 2
* @return swapped string
*/
public String swap(String a, int i, int j)
{
char temp;
char[] charArray = a.toCharArray();
temp = charArray[i] ;
charArray[i] = charArray[j];
charArray[j] = temp;
return String.valueOf(charArray);
}

}

Output:

ABC ACB BAC BCA CBA CAB


Algorithm Paradigm: Backtracking
Time Complexity: O(n*n!) Note that there are n! permutations and it requires O(n) time to print a permutation.

thumb_up 2 thumb_down 0 flag 0

The behavior of stack (growing up or growing down) depends on the application binary interface (ABI) and how the call stack (aka activation record) is organized.

Throughout its lifetime a program is bound to communicate with other programs like OS. ABI determines how a program can communicate with another program.

The stack can grow either way on different architectures, but for a given architecture it is consistent; the direction of growth is decided by the ABI of that architecture.

For example, if you take the MIPS ABI, the call stack is defined as below.

Let us consider that function 'fn1' calls 'fn2'. Now the stack frame as seen by 'fn2' is as follows:

 direction of     |                                 |
   growth of      +---------------------------------+
    stack         | Parameters passed by fn1(caller)|
from higher addr. |                                 |
 to lower addr.   | Direction of growth is opposite |
       |          |   to direction of stack growth  |
       |          +---------------------------------+ <-- SP on entry to fn2
       |          | Return address from fn2(callee) |
       V          +---------------------------------+
                  | Callee saved registers being    |
                  |   used in the callee function   |
                  +---------------------------------+
                  | Local variables of fn2          |
                  | (Direction of growth of frame is|
                  |  same as direction of growth of |
                  |            stack)               |
                  +---------------------------------+
                  | Arguments to functions called   |
                  | by fn2                          |
                  +---------------------------------+ <-- Current SP after stack
                                                          frame is allocated

Now you can see that the stack grows downward. So, when variables are allocated in the local frame of the function, the variables' addresses actually decrease. The compiler can decide on the order of variables for memory allocation. (In your case it can be either 'q' or 's' that is first allocated stack memory. But, generally, the compiler does stack memory allocation in the order of declaration of the variables.)

But in the case of arrays, the allocation is referenced through a single pointer, and the allocated memory must be contiguous. So, even though the stack grows downward, the elements of an array are laid out at increasing addresses; in that sense, for arrays the addresses grow upward within the frame.

void func(int *p)
{
    int i;
    if (!p)
        func(&i);
    else if (p < &i)
        printf("Stack grows upward\n");
    else
        printf("Stack grows downward\n");
}

func(NULL);

See Stack direction

thumb_up 0 thumb_down 0 flag 0

Inter-process communication: as the name makes clear, it is communication between different processes.

In any system there are many processes running at a particular time, and they need to share information with each other to achieve their goals. These processes can be on the same system or on different systems. IPC can be achieved in many different ways, such as message passing, shared memory, pipes, or sockets.

Some examples of IPC we encounter on a daily basis:

  • X applications communicate with the X server through network protocols.
  • Pipes are a form of IPC: grep foo file | sort
  • Servers like Apache spawn child processes to handle requests.
  • many more I can't think of right now

thumb_up 3 thumb_down 0 flag 0

  • iOS is by Apple Inc.; Android is by Google.
  • iOS was released earlier, in 2007; Android was released in 2008.
  • iOS was developed with Objective-C and Swift; Android was developed with Java.
  • Android is open source: you can view the code, modify it, and create your own distribution of Android (a ROM). iOS is closed with Objective-C and open with Swift, but it is proprietary software: there are no ROMs, and we can't install our own ROMs.
  • Android is more customisable. We can change almost everything in the UI: add widgets, change the launcher, customise effects. This is not possible with iOS.
  • Security is stronger on iOS, whereas cracking tools are more common for Android.
  • For Android, we can download apps from Google Play and other third-party sources, whereas for iOS we can only download apps from the Apple App Store (iTunes).
  • File transfer is very easy on Android using USB with a file explorer; on iOS it is a bit more complicated and there is no file explorer.
  • There are more development tools for Android than for iOS.

thumb_up 0 thumb_down 0 flag 0

Algorithm :

  1. Add all the elements of the second array to a Hashset
  2. Since the range of numbers in both arrays is the same and a HashSet stores unique elements, membership in the HashSet tells us whether a value from the first array also occurs in the second.
  3. Return an array of elements which are not present in HashSet.
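
A rough Java sketch of this idea, assuming the goal is to return the elements of the first array that do not appear in the second (the method name and sample arrays are made up):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: elements of arr1 that are not present in arr2, using a HashSet (assumed interpretation).
public class NotInSecond {
    static List<Integer> missingFromSecond(int[] arr1, int[] arr2) {
        Set<Integer> seen = new HashSet<>();
        for (int x : arr2) seen.add(x);           // step 1: put the second array into the set

        List<Integer> result = new ArrayList<>();
        for (int x : arr1) {
            if (!seen.contains(x)) result.add(x); // step 3: keep what the set does not contain
        }
        return result;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5};
        int[] b = {2, 4, 6};
        System.out.println(missingFromSecond(a, b)); // [1, 3, 5]
    }
}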

thumb_up 2 thumb_down 0 flag 0

A nibble is a four-bit aggregation, or half an octet. There are two nibbles in a byte.
Convert the given number to a byte and swap the two nibbles in it. For example, 100 is represented as 01100100 in a byte (8 bits). The two nibbles are (0110) and (0100). If we swap the two nibbles, we get 01000110, which is 70 in decimal.


To swap the nibbles, we can use the bitwise '&', '<<' and '>>' operators. A byte can be represented using an unsigned char in C, as the size of char is 1 byte in a typical C compiler. Following is a C program to swap the two nibbles in a byte.

#include <stdio.h>

unsigned char swapNibbles(unsigned char x)

{

    return ( (x & 0x0F)<<4 | (x & 0xF0)>>4 );

}

int main()

{

    unsigned char x = 100;

    printf("%u", swapNibbles(x));

    return 0;

}

Output:

70

Explanation:
100 is 01100100 in binary. The operation can be split into two main parts:
1) The expression "x & 0x0F" gives us the last 4 bits of x. For x = 100, the result is 00000100. Using the bitwise '<<' operator, we shift these bits left by 4 positions, making the new last four bits 0. The result after the shift is 01000000.
2) The expression "x & 0xF0" gives us the first four bits of x. For x = 100, the result is 01100000. Using the bitwise '>>' operator, we shift these bits right by 4 positions, making the first four bits 0. The result after the shift is 00000110.

At the end we combine the two expressions above with the bitwise OR '|' operator, which places the first nibble at the end and the last nibble at the front. For x = 100, (01000000) OR (00000110) gives 01000110, which is equal to 70 in decimal.

thumb_up 16 thumb_down 0 flag 0

In addition to answer provided by Abhiros :

examples of TCP and UDP

  • Our web browser uses TCP to load web pages, but before that it uses UDP (for DNS) to translate a URL into an IP address.
  • Our "apps" that load articles from a news site will use TCP to download the content.
  • When we Skype with someone, UDP is used because retransmissions and the extra overhead are not desirable.
  • Next is WhatsApp. As we may notice, this app keeps users informed not just about delivery of the messages but also about whether those messages have been seen: a good example of TCP.

thumb_up 0 thumb_down 3 flag 0

//starting and ending with 1

class CountSubString

{

    int countSubStr(char str[], int n)

    {

        int m = 0; // Count of 1's in input string


// Traverse input string and count the 1's in it

        for (int i = 0; i < n; i++)

        {

            if (str[i] == '1')

                m++;

        }


// Return count of possible pairs among m 1's

        return m * (m - 1) / 2;

    }


// Driver program to test the above function

    public static void main(String[] args)

    {

        CountSubString count = new CountSubString();

        String string = "00100101";

        char str[] = string.toCharArray();

        int n = str.length;

        System.out.println(count.countSubStr(str, n));

    }

}

Output:

3

thumb_up 1 thumb_down 0 flag 0

Algorithm multiply(A[0..m-1], B[0..n-1])

1) Create a product array prod[] of size m+n-1.
2) Initialize all entries in prod[] as 0.
3) Traverse array A[] and do the following for every element A[i]
   (3.a) Traverse array B[] and do the following for every element B[j]
         prod[i+j] = prod[i+j] + A[i] * B[j]
4) Return prod[].

For implementation see Multiply two polynomials
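
A direct Java rendering of the pseudocode above (a sketch that simply mirrors the steps, not the linked implementation):

import java.util.Arrays;

// Multiply two polynomials given as coefficient arrays, following the pseudocode above.
public class PolyMultiply {
    static int[] multiply(int[] a, int[] b) {
        int[] prod = new int[a.length + b.length - 1];  // steps 1 & 2 (Java zero-initializes)
        for (int i = 0; i < a.length; i++) {            // step 3
            for (int j = 0; j < b.length; j++) {        // step 3.a
                prod[i + j] += a[i] * b[j];
            }
        }
        return prod;                                    // step 4
    }

    public static void main(String[] args) {
        int[] a = {5, 0, 10, 6};   // 5 + 0x + 10x^2 + 6x^3
        int[] b = {1, 2, 4};       // 1 + 2x + 4x^2
        System.out.println(Arrays.toString(multiply(a, b))); // [5, 10, 30, 26, 52, 24]
    }
}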

thumb_up 1 thumb_down 0 flag 0

Suppose we have a number n, let's say 28. so corresponding to it we need to print the column name. We need to take remainder with 26.

If the remainder with 26 comes out to be 0 (meaning n is 26, 52 and so on), then we put 'Z' in the output string and the new n becomes n/26 - 1, because here we treat 26 as 'Z' while it is actually the 25th letter after 'A'.

Similarly, if the remainder is non-zero (1, 2, 3 and so on), we insert the corresponding character into the string and set n = n/26.

Finally we reverse the string and print.

Example:
n = 700

Remainder (n%26) is 24. So we put 'X' in output string and n becomes n/26 which is 26.

Remainder (26%26) is 0. So we put 'Z' in output string and n becomes n/26 -1 which is 0.

Following is C++ implementation of above approach.

#include<bits/stdc++.h>

#define MAX 50

using namespace std;

// Function to print Excel column name for a given column number

void printString(int n)

{

    char str[MAX];  // To store result (Excel column name)

    int i = 0;  // To store current index in str which is result

    while (n>0)

    {

        // Find remainder

        int rem = n%26;

        // If remainder is 0, then a 'Z' must be there in output

        if (rem==0)

        {

            str[i++] = 'Z';

            n = (n/26)-1;

        }

        else // If remainder is non-zero

        {

            str[i++] = (rem-1) + 'A';

            n = n/26;

        }

    }

    str[i] = '\0';

    // Reverse the string and print result

    reverse(str, str + strlen(str));

    cout << str << endl;

    return;

}

// Driver program to test above function

int main()

{

    printString(26);

    printString(51);

    printString(52);

    printString(80);

    printString(676);

    printString(702);

    printString(705);

    return 0;

}

Output:

Z AY AZ CB YZ ZZ AAC

thumb_up 0 thumb_down 0 flag 0

Algorithm :

1. Convert both the numbers in binary form.

2. Compare the number of digits in both numbers

          append zeros at the beginning of the number having fewer digits

3.  Now compare each bit of both numbers

           if not equal increase count by 1.
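
A small Java sketch of that bit-by-bit comparison (the method name is made up; shifting effectively supplies the padded leading zeros of step 2):

// Count the number of bit positions at which two integers differ,
// comparing the numbers bit by bit as described above.
public class BitDifference {
    static int countDifferingBits(int a, int b) {
        int count = 0;
        for (int i = 0; i < 32; i++) {                 // leading positions act as padded zeros
            if (((a >> i) & 1) != ((b >> i) & 1)) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countDifferingBits(10, 20)); // 01010 vs 10100 -> 4
    }
}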

thumb_up 3 thumb_down 1 flag 0

Algorithm : find the longest length for every point, and record the result in a 2D array so that we do not need to calculate the longest length for some points again.

int original[m][n] = {...};
int longest[m][n] = {0};

int find() {
    int max = 0;
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) {
            int current = findfor(i, j);
            if (current > max) { max = current; }
        }
    }
    return max;
}

int findfor(int i, int j) {
    if (longest[i][j] == 0) {
        int max = 0;
        for (int k = -1; k <= 1; k++) {
            for (int l = -1; l <= 1; l++) {
                if (!(k == 0 && l == 0) &&
                    i + k >= 0 && i + k < m &&
                    j + l >= 0 && j + l < n &&
                    original[i + k][j + l] > original[i][j]) {
                    int current = findfor(i + k, j + l);
                    if (current > max) { max = current; }
                }
            }
        }
        longest[i][j] = max + 1;
    }
    return longest[i][j];
}

Recursion

1) start with a point (and this step has to be taken for all necessary points)

2) if no surrounding point is greater, then this path ends; else pick a greater surrounding point to continue the path, and go to 2).

2.1) if the (ended) path is longer than recorded longest path, substitute itself as the longest.

Hint

(less computation but more coding)

The longest path will start at a local minimum point and end at a local maximum point.

Local minimum, less than (or equal to) all (at most) 8 surrounding points.

Local maximum, greater than (or equal to) all (at most) 8 surrounding points.

Proof

If the path does not start with a local minimum, then the start point must be greater than at least a surrounding point, and thus the path can be extended. Reject! Thus, the path must start with a local minimum. Similar for the reason to end with a local maximum.

pseudo code

for all local minimum
  do a recursive_search

recursive_search (point)
  if point is local maximum
    end, and compare (and substitute if necessary) longest
  else
    for all greater surrounding points
      do a recursive_search

thumb_up 1 thumb_down 1 flag 0

Kadane's Algorithm:

Initialize:
    max_so_far = 0
    max_ending_here = 0

Loop for each element of the array
  (a) max_ending_here = max_ending_here + a[i]
  (b) if(max_ending_here < 0)
            max_ending_here = 0
  (c) if(max_so_far < max_ending_here)
            max_so_far = max_ending_here
return max_so_far

Explanation:
The simple idea of Kadane's algorithm is to look for all positive contiguous segments of the array (max_ending_here is used for this) and to keep track of the maximum-sum contiguous segment among all positive segments (max_so_far is used for this). Each time we get a positive sum, we compare it with max_so_far and update max_so_far if the new sum is greater.

Lets take the example:
{-2, -3, 4, -1, -2, 1, 5, -3}

max_so_far = max_ending_here = 0

for i=0,  a[0] = -2
    max_ending_here = max_ending_here + (-2)
    Set max_ending_here = 0 because max_ending_here < 0

for i=1,  a[1] = -3
    max_ending_here = max_ending_here + (-3)
    Set max_ending_here = 0 because max_ending_here < 0

for i=2,  a[2] = 4
    max_ending_here = max_ending_here + (4)
    max_ending_here = 4
    max_so_far is updated to 4 because max_ending_here greater
    than max_so_far which was 0 till now

for i=3,  a[3] = -1
    max_ending_here = max_ending_here + (-1)
    max_ending_here = 3

for i=4,  a[4] = -2
    max_ending_here = max_ending_here + (-2)
    max_ending_here = 1

for i=5,  a[5] = 1
    max_ending_here = max_ending_here + (1)
    max_ending_here = 2

for i=6,  a[6] = 5
    max_ending_here = max_ending_here + (5)
    max_ending_here = 7
    max_so_far is updated to 7 because max_ending_here is
    greater than max_so_far

for i=7,  a[7] = -3
    max_ending_here = max_ending_here + (-3)
    max_ending_here = 4

For implementation see Kadane's Algorithm
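
A direct Java rendering of the pseudocode above (a sketch, not the linked implementation):

// Kadane's algorithm: maximum sum of a contiguous subarray, as in the pseudocode above.
public class Kadane {
    static int maxSubArraySum(int[] a) {
        int maxSoFar = 0;          // note: this variant assumes at least one non-negative element
        int maxEndingHere = 0;
        for (int value : a) {
            maxEndingHere += value;
            if (maxEndingHere < 0) maxEndingHere = 0;
            if (maxSoFar < maxEndingHere) maxSoFar = maxEndingHere;
        }
        return maxSoFar;
    }

    public static void main(String[] args) {
        int[] a = {-2, -3, 4, -1, -2, 1, 5, -3};
        System.out.println(maxSubArraySum(a)); // 7 (subarray {4, -1, -2, 1, 5})
    }
}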

thumb_up 0 thumb_down 0 flag 0

Paging is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory. In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.

thumb_up 2 thumb_down 0 flag 0

1.  Method overloading deals with the notion of having two or more methods in the same class with the same name but different arguments.

void foo(int a)
void foo(int a, float b)

Method overriding means having two methods with the same arguments, but different implementations. One of them would exist in the parent class, while another will be in the derived, or child class. The @Override annotation, while not required, can be helpful to enforce proper overriding of a method at compile time.

class Parent {
    void foo(double d) {
        // do something
    }
}

class Child extends Parent {

    @Override
    void foo(double d){
        // this method is overridden.
    }
}

2.  Method overriding is when a child class redefines the same method as a parent class, with the same parameters. For example, the standard Java class java.util.LinkedHashSet extends java.util.HashSet. The method add() is overridden in LinkedHashSet. If you have a variable that is of type HashSet, and you call its add() method, it will call the appropriate implementation of add(), based on whether it is a HashSet or a LinkedHashSet. This is called polymorphism.

Method overloading is defining several methods in the same class, that accept different numbers and types of parameters. In this case, the actual method called is decided at compile-time, based on the number and types of arguments. For instance, the method System.out.println() is overloaded, so that you can pass ints as well as Strings, and it will call a different version of the method.

3. Method overloading is the example of compile time polymorphism while method overriding is the example of run time polymorphism.

thumb_up 0 thumb_down 0 flag 0

In a Queue data structure, we maintain two pointers, front and rear. The front points to the first item of the queue and rear points to the last item.

enQueue() : This operation adds a new node after rear and moves rear to the next node.

deQueue() : This operation removes the front node and moves front to the next node.

// A C program to demonstrate linked list based implementation of queue
#include <stdlib.h>
#include <stdio.h>

// A linked list (LL) node to store a queue entry
struct QNode
{
int key;
struct QNode *next;
};

// The queue: front stores the front node of the LL and rear stores the
// last node of the LL
struct Queue
{
struct QNode *front, *rear;
};

// A utility function to create a new linked list node.
struct QNode* newNode(int k)
{
struct QNode *temp = (struct QNode*)malloc(sizeof(struct QNode));
temp->key = k;
temp->next = NULL;
return temp;
}

// A utility function to create an empty queue
struct Queue *createQueue()
{
struct Queue *q = (struct Queue*)malloc(sizeof(struct Queue));
q->front = q->rear = NULL;
return q;
}

// The function to add a key k to q
void enQueue(struct Queue *q, int k)
{
// Create a new LL node
struct QNode *temp = newNode(k);

    // If queue is empty, then new node is front and rear both
if (q->rear == NULL)
{
q->front = q->rear = temp;
return;
}

    // Add the new node at the end of queue and change rear
q->rear->next = temp;
q->rear = temp;
}

// Function to remove a key from given queue q
struct QNode *deQueue(struct Queue *q)
{
// If queue is empty, return NULL.
if (q->front == NULL)
return NULL;

    // Store previous front and move front one node ahead
struct QNode *temp = q->front;
q->front = q->front->next;

    // If front becomes NULL, then change rear also as NULL
if (q->front == NULL)
q->rear = NULL;
return temp;
}

// Driver program to test above functions
int main()
{
struct Queue *q = createQueue();
enQueue(q, 10);
enQueue(q, 20);
deQueue(q);
deQueue(q);
enQueue(q, 30);
enQueue(q, 40);
enQueue(q, 50);
struct QNode *n = deQueue(q);
if (n != NULL)
printf("Dequeued item is %d", n->key);
return 0;
}

Output:

Dequeued item is 30

thumb_up 3 thumb_down 0 flag 0

A header file is a file with extension .h which contains C function declarations and macro definitions to be shared between several source files.

Some of C Header files :

  • stddef.h -  Defines several useful types and macros.
  • stdint.h  -  Defines exact width integer types.
  • stdio.h   -  Defines core input and output functions
  • stdlib.h  -  Defines numeric conversion functions, a pseudo-random number generator, memory allocation
  • string.h  -  Defines string handling functions
  • math.h  -  Defines common mathematical functions

thumb_up 3 thumb_down 0 flag 0

conio.h is a C header file used mostly by MS-DOS compilers to provide console input/output. conio stands for "console input and output".

thumb_up 11 thumb_down 2 flag 0

Three.

There'll be a matching pair of socks after three tries because only two socks, of either color, are needed. Let's say the first sock drawn from the wardrobe is black. There are only two things that can now happen: If the next sock is black, there's now a matching pair of socks and we're done. If the next sock is white, there's now a white and a black sock. Now, no matter what, the next sock drawn will make a pair. Whether a black sock or a white sock is drawn from the wardrobe, there'll be two socks of one color and one of the other color. Therefore, there'll always be a pair by the third sock drawn.

thumb_up 0 thumb_down 0 flag 0

Iterative Method
Iterate through the linked list. In the loop, change next to prev, prev to current and current to next.

Implementation of Iterative Method

// Java program for reversing the linked list

class LinkedList {

    static Node head;

    static class Node {

        int data;

        Node next;

        Node(int d) {

            data = d;

            next = null;

        }

    }

    /* Function to reverse the linked list */

    Node reverse(Node node) {

        Node prev = null;

        Node current = node;

        Node next = null;

        while (current != null) {

            next = current.next;

            current.next = prev;

            prev = current;

            current = next;

        }

        node = prev;

        return node;

    }

    // prints content of double linked list

    void printList(Node node) {

        while (node != null) {

            System.out.print(node.data + " ");

            node = node.next;

        }

    }

    public static void main(String[] args) {

        LinkedList list = new LinkedList();

        list.head = new Node(85);

        list.head.next = new Node(15);

        list.head.next.next = new Node(4);

        list.head.next.next.next = new Node(20);

        System.out.println("Given Linked list");

        list.printList(head);

        head = list.reverse(head);

        System.out.println("");

        System.out.println("Reversed linked list ");

        list.printList(head);

    }

}

Given linked list 85 15 4 20  Reversed Linked list  20 4 15 85            

Time Complexity: O(n)
Space Complexity: O(1)

Recursive Method:

1) Divide the list in two parts - first node and rest of the linked list.
2) Call reverse for the rest of the linked list.
3) Link rest to first.
4) Fix head pointer

void recursiveReverse(struct node** head_ref)

{

    struct node* first;

    struct node* rest;

    /* empty list */

    if (*head_ref == NULL)

       return;


/* suppose first = {1, 2, 3}, rest = {2, 3} */

    first = *head_ref;

    rest  = first->next;


/* List has only one node */

    if (rest == NULL)

       return;


/* reverse the rest list and put the first element at the end */

    recursiveReverse(&rest);

    first->next->next  = first;

    /* tricky step -- see the diagram */

    first->next  = NULL;


/* fix the head pointer */

    *head_ref = rest;

}

Time Complexity: O(n)
Space Complexity: O(1)

A Simpler and Tail Recursive Method
Below is a Java implementation of this method.

// Java program for reversing the Linked list


class LinkedList {


static Node head;


static class Node {


int data;

        Node next;


Node(int d) {

            data = d;

            next = null;

        }

    }


// A simple and tail recursive function to reverse

    // a linked list.  prev is passed as NULL initially.

    Node reverseUtil(Node curr, Node prev) {


/* If last node mark it head*/

        if (curr.next == null) {

            head = curr;


/* Update next to prev node */

            curr.next = prev;

            return null;

        }


/* Save curr->next node for recursive call */

        Node next1 = curr.next;


/* and update next ..*/

        curr.next = prev;


reverseUtil(next1, curr);

        return head;

    }


// prints content of double linked list

    void printList(Node node) {

        while (node != null) {

            System.out.print(node.data + " ");

            node = node.next;

        }

    }


public static void main(String[] args) {

        LinkedList list = new LinkedList();

        list.head = new Node(1);

        list.head.next = new Node(2);

        list.head.next.next = new Node(3);

        list.head.next.next.next = new Node(4);

        list.head.next.next.next.next = new Node(5);

        list.head.next.next.next.next.next = new Node(6);

        list.head.next.next.next.next.next.next = new Node(7);

        list.head.next.next.next.next.next.next.next = new Node(8);


System.out.println("Original Linked list ");

        list.printList(head);

        Node res = list.reverseUtil(head, null);

        System.out.println("");

        System.out.println("");

        System.out.println("Reversed linked list ");

        list.printList(res);

    }

}

thumb_up 0 thumb_down 0 flag 0

Compile-time polymorphism is nothing but method overloading in Java. In simple terms, a class can have more than one method with the same name but with a different number of arguments, different types of arguments, or both. To know more about it, refer to method overloading in Java.

Let's see the example below to understand it better:

class X
{
   void methodA(int num)
   {
       System.out.println ("methodA:" + num);
   }
   void methodA(int num1, int num2)
   {
       System.out.println ("methodA:" + num1 + "," + num2);
   }
   double methodA(double num) {
       System.out.println("methodA:" + num);
       return num;
   }
}

class Y
{
   public static void main (String args [])
   {
       X Obj = new X();
       double result;
       Obj.methodA(20);
       Obj.methodA(20, 30);
       result = Obj.methodA(5.5);
       System.out.println("Answer is:" + result);
   }
}

Output:

methodA:20 methodA:20,30 methodA:5.5 Answer is:5.5

In the above example, the class has three variants of methodA; we can say methodA is polymorphic in nature since it has three different forms. In such a scenario, the compiler is able to figure out the method call at compile time, which is why this is known as compile-time polymorphism.

thumb_up 7 thumb_down 0 flag 0

Runtime polymorphism or Dynamic Method Dispatch is a process in which a call to an overridden method is resolved at runtime rather than compile-time.

In this process, an overridden method is called through the reference variable of a superclass. The determination of the method to be called is based on the object being referred to by the reference variable.

Method overriding is a perfect example of runtime polymorphism. In this kind of polymorphism, a reference of class X can hold an object of class X or an object of any subclass of class X. For example, if class Y extends class X, then both of the following statements are valid:

Y obj = new Y();
X obj = new Y(); // Parent class reference can be assigned to child object

Since in method overriding both classes (the base class and the child class) have the same method, the compiler cannot figure out which method to call at compile time. In this case the JVM (Java Virtual Machine) decides which method to call at runtime, which is why it is known as runtime or dynamic polymorphism.

Example :

class Animal
{
void eat()
{
System.out.println("eating");
}
}

class Dog extends Animal
{
void eat()
{
System.out.println("Dog is eating bread");
}
}

class Cat extends Animal
{
void eat()
{
System.out.println("Cat is eating rat");
}
}

class Lion extends Animal
{
void eat()
{
System.out.println("Lion is eating meat");
}
}

class TestPolymorphism3
{
public static void main(String[] args)
{
Animal a;
a=new Dog();
a.eat();
a=new Cat();
a.eat();
a=new Lion();
a.eat();
}
}

Output:

Dog is eating bread Cat is eating rat Lion is eating meat            

thumb_up 2 thumb_down 0 flag 0

We use two data structures to implement an LRU Cache.

  1. Queue, which is implemented using a doubly linked list. The maximum size of the queue will be equal to the total number of frames available (cache size). The most recently used pages will be near the front end and the least recently used pages will be near the rear end.
  2. A Hash with page number as key and address of the corresponding queue node as value.

When a page is referenced, the required page may be in the memory. If it is in the memory, we need to detach the node of the list and bring it to the front of the queue.
If the required page is not in the memory, we bring that in memory. In simple words, we add a new node to the front of the queue and update the corresponding node address in the hash. If the queue is full, i.e. all the frames are full, we remove a node from the rear of queue, and add the new node to the front of queue.

Note: Initially no page is in the memory.


C implementation:

// A C program to show implementation of LRU cache

#include <stdio.h>

#include <stdlib.h>

// A Queue Node (Queue is implemented using Doubly Linked List)

typedef struct QNode

{

    struct QNode *prev, *next;

    unsigned pageNumber;  // the page number stored in this QNode

} QNode;

// A Queue (A FIFO collection of Queue Nodes)

typedef struct Queue

{

    unsigned count;  // Number of filled frames

    unsigned numberOfFrames; // total number of frames

    QNode *front, *rear;

} Queue;

// A hash (Collection of pointers to Queue Nodes)

typedef struct Hash

{

    int capacity; // how many pages can be there

    QNode* *array; // an array of queue nodes

} Hash;

// A utility function to create a new Queue Node. The queue Node

// will store the given 'pageNumber'

QNode* newQNode( unsigned pageNumber )

{

    // Allocate memory and assign 'pageNumber'

    QNode* temp = (QNode *)malloc( sizeof( QNode ) );

    temp->pageNumber = pageNumber;

    // Initialize prev and next as NULL

    temp->prev = temp->next = NULL;

    return temp;

}

// A utility function to create an empty Queue.

// The queue can have at most 'numberOfFrames' nodes

Queue* createQueue( int numberOfFrames )

{

    Queue* queue = (Queue *)malloc( sizeof( Queue ) );

    // The queue is empty

    queue->count = 0;

    queue->front = queue->rear = NULL;

    // Number of frames that can be stored in memory

    queue->numberOfFrames = numberOfFrames;

    return queue;

}

// A utility function to create an empty Hash of given capacity

Hash* createHash( int capacity )

{

    // Allocate memory for hash

    Hash* hash = (Hash *) malloc( sizeof( Hash ) );

    hash->capacity = capacity;

    // Create an array of pointers for referring to queue nodes

    hash->array = (QNode **) malloc( hash->capacity * sizeof( QNode* ) );

    // Initialize all hash entries as empty

    int i;

    for( i = 0; i < hash->capacity; ++i )

        hash->array[i] = NULL;

    return hash;

}

// A function to check if there is slot available in memory

int AreAllFramesFull( Queue* queue )

{

    return queue->count == queue->numberOfFrames;

}

// A utility function to check if queue is empty

int isQueueEmpty( Queue* queue )

{

    return queue->rear == NULL;

}

// A utility function to delete a frame from queue

void deQueue( Queue* queue )

{

    if( isQueueEmpty( queue ) )

        return;

    // If this is the only node in list, then change front

    if (queue->front == queue->rear)

        queue->front = NULL;

    // Change rear and remove the previous rear

    QNode* temp = queue->rear;

    queue->rear = queue->rear->prev;

    if (queue->rear)

        queue->rear->next = NULL;

    free( temp );

    // decrement the number of full frames by 1

    queue->count--;

}

// A function to add a page with given 'pageNumber' to both queue

// and hash

void Enqueue( Queue* queue, Hash* hash, unsigned pageNumber )

{

    // If all frames are full, remove the page at the rear

    if ( AreAllFramesFull ( queue ) )

    {

        // remove page from hash

        hash->array[ queue->rear->pageNumber ] = NULL;

        deQueue( queue );

    }

    // Create a new node with given page number,

    // And add the new node to the front of queue

    QNode* temp = newQNode( pageNumber );

    temp->next = queue->front;

    // If queue is empty, change both front and rear pointers

    if ( isQueueEmpty( queue ) )

        queue->rear = queue->front = temp;

    else  // Else change the front

    {

        queue->front->prev = temp;

        queue->front = temp;

    }

    // Add page entry to hash also

    hash->array[ pageNumber ] = temp;

    // increment number of full frames

    queue->count++;

}

// This function is called when a page with given 'pageNumber' is referenced

// from cache (or memory). There are two cases:

// 1. Frame is not there in memory, we bring it in memory and add to the front

//    of queue

// 2. Frame is there in memory, we move the frame to front of queue

void ReferencePage( Queue* queue, Hash* hash, unsigned pageNumber )

{

    QNode* reqPage = hash->array[ pageNumber ];

    // the page is not in cache, bring it

    if ( reqPage == NULL )

        Enqueue( queue, hash, pageNumber );

    // page is there and not at front, change pointer

    else if (reqPage != queue->front)

    {

        // Unlink requested page from its current location

        // in queue.

        reqPage->prev->next = reqPage->next;

        if (reqPage->next)

           reqPage->next->prev = reqPage->prev;

        // If the requested page is rear, then change rear

        // as this node will be moved to front

        if (reqPage == queue->rear)

        {

           queue->rear = reqPage->prev;

           queue->rear->next = NULL;

        }

        // Put the requested page before current front

        reqPage->next = queue->front;

        reqPage->prev = NULL;

        // Change prev of current front

        reqPage->next->prev = reqPage;

        // Change front to the requested page

        queue->front = reqPage;

    }

}

// Driver program to test above functions

int main()

{

    // Let cache can hold 4 pages

    Queue* q = createQueue( 4 );

    // Let 10 different pages can be requested (pages to be

    // referenced are numbered from 0 to 9

    Hash* hash = createHash( 10 );

    // Let us refer pages 1, 2, 3, 1, 4, 5

    ReferencePage( q, hash, 1);

    ReferencePage( q, hash, 2);

    ReferencePage( q, hash, 3);

    ReferencePage( q, hash, 1);

    ReferencePage( q, hash, 4);

    ReferencePage( q, hash, 5);

    // Let us print cache frames after the above referenced pages

    printf ("%d ", q->front->pageNumber);

    printf ("%d ", q->front->next->pageNumber);

    printf ("%d ", q->front->next->next->pageNumber);

    printf ("%d ", q->front->next->next->next->pageNumber);

    return 0;

}

Output:

5 4 1 3

thumb_up 0 thumb_down 0 flag 0

// Function code in java

void makeDiagonal1(int matrix[][], int row, int col)
{
int i = 0;
int j = col-1;
while(j>=0 && i<= row-1)
{
matrix[i][j] = 1;
j--;
i++;
}

}

void makeMainDiagonal1(int matrix[][], int row, int col)
{
int i = 0;
int j = 0;
while(i < row && j < col)
{
matrix[i][j] = 1;
i++;
j++;
}
}

void checkDiagonalElement(int matrix[][], int row, int col, int x, int y)
{
int i = 0;
if(x == y)
{
makeMainDiagonal1(matrix, row, col);
}

    int j = col-1;
while(j >= 0 && i < row-1)
{
if(i == x && j == y)
{
makeDiagonal1(matrix, row, col);
break;
}
else
{
i++;
j--;
}
}
}

thumb_up 0 thumb_down 0 flag 0

Function code in Java:

// row = number of rows in matrix and col = number of columns in matrix

int[] findIndex(int matrix[][], int i , int j, String direction)
{
if(matrix[i][j] == 0)
{
while(matrix[i][j]  == 0  &&  j<(col-1) && j<(row-1))
{
if(direction == right )
{
j++;
if(j == (col-1))
{
a[0] = i;
a[1] = j;
return a;
}
}

            if(direction == down)
{
i++;
if(i == (row-1))
{
a[0] = i;
a[1] = j;
return a;
}
}

            if(direction == left)
{
j--;
if(j == 0)
{
a[0] = i;
a[1] = j;
return a;
}
}

            if(direction == up)
{
i--;
if(i == 0)
{
a[0] = i;
a[1] = j;
return a;
}
}
}
}

    if( matrix[i][j] == 1)
{
if(direction == right && i == (row-1))
{
matrix[i][j] = 0;
a[0] = i;
a[1] = j;
return a;
}
else
{
direction = down;
matrix[i][j] = 0;
i = i+1;
return findIndex(matrix, i, j, direction);
}

        if(direction == down  && j == 0)
{
matrix[i][j] = 0;
a[0] = i;
a[1] = j;
return a;
}
else
{
direction = left;  // right of current position but in left direction in general)
matrix[i][j] = 0;
j = j-1;
return findIndex(matrix, i, j, direction);
}

        if(direction == up  && j == (col-1))
{
matrix[i][j] = 0;
a[0] = i;
a[1] = j;
return a;
}
else
{
direction = right;
matrix[i][j] = 0;
j = j+1;
return findIndex(matrix, i, j, direction);
}

        if(direction == left  && i == 0)
{
matrix[i][j] = 0;
a[0] = i;
a[1] = j;
return a;
}
else
{
direction = up;
matrix[i][j] = 0;
i = i-1;
return findIndex(matrix, i , j, direction);
}
}
}

thumb_up 6 thumb_down 0 flag 0

A min-heap typically only supports a delete-min operation, not an arbitrary delete(x) operation. We can implement delete(x) as a composition of decrease-key(x, −∞) and delete-min. Recall that to implement decrease-key, we bubble the element up to maintain the heap property (in this case all the way to the root). In a binary heap, to implement the delete-min operation, we replace the root by the last element on the last level and then percolate that element down.

To summarize, to delete(x), bubble-up the element all the way to the root, then delete the element and put the last element in the heap at the root, then percolate down to restore the heap property.

Removal algorithm

  1. Copy the last value in the array to the root;
  2. Decrease heap's size by 1;
  3. Sift down the root's value. Sifting is done as follows:
    • if current node has no children, sifting is over;
    • if current node has one child: check, if heap property is broken, then swap current node's value and child value; sift down the child;
    • if current node has two children: find the smallest of them. If heap property is broken, then swap current node's value and selected child value; sift down the child.

Example

Remove the minimum from the following heap:

Copy the last value in the array to the root and decrease heap's size by 1:

Now heap property is broken at root:

Root has two children. Swap root's value with the smallest:

Heap property is broken in node 1:

Recover heap property:

Node 3 has no children. Sifting is complete.

Complexity of the removal operation is O(h) = O(log n), where h is heap's height, n is number of elements in a heap.
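
A compact Java sketch of delete(x) on an array-based min-heap, following the idea above: bubble the element at the given index up to the root (as if its key had been decreased to −∞) and then perform an ordinary delete-min. The class and method names are made up for the example.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of delete(x) on an array-based min-heap: bubble the element at index i
// up to the root (as if its key were decreased to -infinity), then do a delete-min.
public class MinHeap {
    private final List<Integer> heap = new ArrayList<>();

    void insert(int key) {
        heap.add(key);
        int i = heap.size() - 1;
        while (i > 0 && heap.get((i - 1) / 2) > heap.get(i)) {   // bubble up
            Collections.swap(heap, i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
    }

    void deleteAt(int i) {
        while (i > 0) {                                          // decrease-key(x, -inf): swap up to the root
            Collections.swap(heap, i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
        deleteMin();
    }

    int deleteMin() {
        int min = heap.get(0);
        int last = heap.remove(heap.size() - 1);                 // take the last element
        if (!heap.isEmpty()) {
            heap.set(0, last);                                   // copy it to the root
            siftDown(0);                                         // restore the heap property
        }
        return min;
    }

    private void siftDown(int i) {
        int n = heap.size();
        while (true) {
            int left = 2 * i + 1, right = 2 * i + 2, smallest = i;
            if (left < n && heap.get(left) < heap.get(smallest)) smallest = left;
            if (right < n && heap.get(right) < heap.get(smallest)) smallest = right;
            if (smallest == i) return;
            Collections.swap(heap, i, smallest);
            i = smallest;
        }
    }

    public static void main(String[] args) {
        MinHeap h = new MinHeap();
        for (int k : new int[]{5, 3, 8, 1, 9, 7}) h.insert(k);
        h.deleteAt(2);                     // delete whatever element currently sits at index 2
        System.out.println(h.deleteMin()); // smallest remaining key
    }
}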

thumb_up 1 thumb_down 4 flag 0

We use runtime polymorphism when we build generic frameworks that take a whole bunch of different objects with the same interface. When we create a new type of object, we don't need to change the framework to accommodate the new object type, as long as it follows the "rules" of the object.

Example :

public class X
{
    public void methodA() //Base class method
    {
        System.out.println ("hello, I'm methodA of class X");
    }
}

public class Y extends X
{
    public void methodA() //Derived Class method
    {
        System.out.println ("hello, I'm methodA of class Y");
    }
}

public class Z
{
   public static void main (String args [])
   {
       X obj1 = new X(); // X reference and X object
       X obj2 = new Y(); // X reference but Y object
       obj1.methodA();
       obj2.methodA();
   }
}

In static polymorphism, the compiler itself determines which method to call. Method overloading is an example of static polymorphism.

In runtime polymorphism, the compiler cannot determine the method at compile time. Method overriding is an example of runtime polymorphism: the signature of methodA() is the same in both class X (the base class) and class Y (the child class), so the compiler cannot determine at compile time which method should execute. Only after object creation (which is a runtime process) does the runtime environment know the exact method to call.

That is why, in this case, obj1.methodA() calls methodA() in class X, since obj1 refers to an object created for class X,

and obj2.methodA() calls methodA() in class Y, since obj2 refers to an object created for class Y.

thumb_up 0 thumb_down 0 flag 0

Use recursion, as we can recursively identify the left and right subtrees. For example, after the root, everything inside the first bracket is in the left subtree and the rest is in the right subtree, so we can easily tell which level we are at, at any time. As we go into any subtree (either left or right), we are stepping into the next level, so maintain the level while traversing.

// Following is the C++ code

#include<cstring>
#include<iostream>
#include<stdio.h>
#include<string.h>
using namespace std;

void sum_at_level(char *s, int k, int level, int *ans, int start, int end){
    if(end-start+1<=2)
        return;
    else if(level==k)
        *ans = *ans + s[start+1]-48;
    else if(start+2<end){
        int t = 0, i = 0;
        for(i = start+2; i<=end; i++){
            if(s[i]=='(')
                ++t;
            if(s[i]==')')
                --t;
            if(t==0)
                break;
        }
        // Left subtree
        sum_at_level(s, k, level+1, ans, start+2, i);
        // Right subtree
        if(i+1<end)
            sum_at_level(s, k, level+1, ans, i+1, end-1);
    }
}

int main(){
    cout<<"Input String : "<<endl;
    char s[100000];
    gets(s);
    int k;
    cout<<"Input level(Note: root is at 0th level) :"<<endl;
    cin>>k;
    int ans = 0;   // initialize the accumulator before summing
    sum_at_level(s, k, 0, &ans, 0, strlen(s)-1);
    cout<<"ANSWER  "<<ans<<endl;
    return 0;
}

thumb_up 0 thumb_down 0 flag 0

Collections in Java is a framework that provides an architecture to store and manipulate a group of objects.

All the operations that we perform on a data such as searching, sorting, insertion, manipulation, deletion etc. can be performed by Java Collections.

Java Collection simply means a single unit of objects. Java Collection framework provides many interfaces (Set, List, Queue, Deque etc.) and classes (ArrayList, Vector, LinkedList, PriorityQueue, HashSet, LinkedHashSet, TreeSet etc).

Collection framework represents a unified architecture for storing and manipulating group of objects. It has:

  1. Interfaces and their implementations, i.e. classes
  2. Algorithms

According to the docs, the Collection interface is defined as:

The root interface in the collection hierarchy. A collection represents a group of objects, known as its elements. Some collections allow duplicate elements and others do not. Some are ordered and others unordered. The JDK does not provide any direct implementations of this interface: it provides implementations of more specific subinterfaces like Set and List. This interface is typically used to pass collections around and manipulate them where maximum generality is desired.

thumb_up 0 thumb_down 0 flag 0

/* my_strcat(dest, src) appends the string src at the end of dest. To do so, it first reaches the end of dest using the recursive calls my_strcat(++dest, src). Once the end of dest is reached, the characters are copied using

(*dest++ = *src++)?  my_strcat(dest, src). */

#include <stdio.h>

void my_strcat(char *dest, char *src)
{
  // Note: after the terminator of dest is overwritten, the walk relies on the
  // remaining bytes of dest[] being zero (as they are in the driver below).
  (*dest)? my_strcat(++dest, src): (*dest++ = *src++)? my_strcat(dest, src): 0 ;
}

/* driver function to test above function */
int main()
{
  char dest[100] = "geeksfor";
  char *src = "geeks";
  my_strcat(dest, src);
  printf(" %s ", dest);
  getchar();
  return 0;
}

see strcat

thumb_up 0 thumb_down 0 flag 0

// C program for linked list implementation of stack
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

// A structure to represent a stack
struct StackNode
{
int data;
struct StackNode* next;
};

struct StackNode* newNode(int data)
{
struct StackNode* stackNode =
(struct StackNode*) malloc(sizeof(struct StackNode));
stackNode->data = data;
stackNode->next = NULL;
return stackNode;
}

int isEmpty(struct StackNode *root)
{
return !root;
}

void push(struct StackNode** root, int data)
{
struct StackNode* stackNode = newNode(data);
stackNode->next = *root;
*root = stackNode;
printf("%d pushed to stack\n", data);
}

int pop(struct StackNode** root)
{
if (isEmpty(*root))
return INT_MIN;
struct StackNode* temp = *root;
*root = (*root)->next;
int popped = temp->data;
free(temp);

    return popped;
}

int peek(struct StackNode* root)
{
if (isEmpty(root))
return INT_MIN;
return root->data;
}

int main()
{
struct StackNode* root = NULL;

    push(&root, 10);
push(&root, 20);
push(&root, 30);

    printf("%d popped from stack\n", pop(&root));

    printf("Top element is %d\n", peek(root));

    return 0;
}

Output:

10 pushed to stack
20 pushed to stack
30 pushed to stack
30 popped from stack
Top element is 20

See stack using linkedlist

thumb_up 0 thumb_down 0 flag 0

import java.util.Scanner;

class Swap2Numbers
{
public static void main(String args[])
{
Scanner sc = new Scanner(System.in);
int n1 = sc.nextInt();  // let say n1 = 2
int n2 = sc.nextInt();  // let say n2 = 4
System.out.println("before swapping -> n1 = " + n1 + " and n2 = " + n2);

        int temp = n1;
n1 = n2;
n2 = temp;

        System.out.println("after swapping -> n1 = " + n1 + " and  n2 = " + n2);
}
}

Output :

before swapping -> n1 = 2 and n2 = 4
after swapping -> n1 = 4 and  n2 = 2

thumb_up 2 thumb_down 0 flag 0

Multitasking is the concept of performing multiple tasks (also known as processes) over a certain period of time by executing them concurrently. New tasks start and interrupt already started ones before they have reached completion, instead of executing the tasks sequentially, where each started task would need to reach its end before a new one is started. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as CPUs and main memory.

Multitasking does not necessarily mean that multiple tasks are executing at exactly the same time (simultaneously). In other words, multitasking does not imply parallel execution, but it does mean that more than one task can be part-way through execution at the same time, and that more than one task is advancing over a given period of time. Even on multiprocessor or multicore computers, which have multiple CPUs/cores so that more than one task can be executed at once (physically, one per CPU or core), multitasking allows many more tasks to be run than there are CPUs.

thumb_up 6 thumb_down 1 flag 0

Multiprogramming is a technique in which multiple programs can run on a computer seemingly simultaneously.

  • Multiprogramming is the allocation of more than one concurrent program on a computer system and its resources.
  • Multiprogramming allows using the CPU effectively by allowing various users to use the CPU and I/O devices effectively.
  • Multiprogramming makes sure that the CPU always has something to execute, thus increases the CPU utilization.

Example: using a browser, playing a video, downloading apps and transferring data at the same time. In reality, all processes work one at a time on the processor; switching between processes/programs is so fast that we never notice it. CPU speed is measured in GHz, and RAM is also required for the switching. That is why people want a device with a large RAM and a high CPU clock speed.

thumb_up 1 thumb_down 0 flag 0

A multivalued dependency is a full constraint between two sets of attributes in a relation.

In contrast to the functional dependencies, the multivalued dependency requires that certain tuples be present in a relation. Therefore, a multivalued dependency is a special case of tuple-generating dependency. The multivalued dependency plays a role in the 4NF database normalization.

See Multivalued Dependency

thumb_up 5 thumb_down 1 flag 0

Main differences between Data Definition Language (DDL) and Data Manipulation Language (DML) commands are:

I. DDL vs. DML: DDL statements are used for creating and defining the Database structure. DML statements are used for managing data within Database.

II. Sample Statements: DDL statements are CREATE, ALTER, DROP, TRUNCATE, RENAME etc. DML statements are SELECT, INSERT, DELETE, UPDATE, MERGE, CALL etc.

III. Number of Rows: DDL statements work on the whole table. CREATE will create a new table. DROP will remove the whole table. TRUNCATE will delete all records in a table. DML statements can work on one or more rows. INSERT can insert one or more rows. DELETE can remove one or more rows.

IV. WHERE clause: DDL statements do not have a WHERE clause to filter the data. Most of DML statements support filtering the data by WHERE clause.

V. Commit: Changes done by a DDL statement cannot be rolled back, so there is no need to issue a COMMIT or ROLLBACK command after a DDL statement. We need to run COMMIT or ROLLBACK to confirm our changes after running a DML statement.

VI. Transaction: Since each DDL statement is permanent, we cannot run multiple DDL statements in a group like a transaction. DML statements can be run in a transaction, and we can then COMMIT or ROLLBACK this group as a transaction. E.g. we can insert data into two tables and commit it together in a transaction.

VII. Triggers: After DDL statements no triggers are fired. But after DML statements relevant triggers can be fired.

thumb_up 44 thumb_down 0 flag 1

Stack is used for the browser Back button:

Every web browser has a Back button. As we navigate from web page to web page, those pages are placed on a stack (actually it is the URLs that are going on the stack). The current page that we are viewing is on the top and the first page we looked at is at the base. If we click on the Back button, we begin to move in reverse order through the pages.

Other uses of stacks :

  • An "undo" mechanism in text editors; this operation is accomplished by keeping all text changes in a stack.
  • Undo/Redo stacks in Excel or Word.

Queue is used for Browsing history

  • New pages are added to history.
  • Old pages are removed after some time, e.g. after 30 days.

thumb_up 0 thumb_down 0 flag 0

An enum is essentially a data type whose variables can only take values from a set of predefined constants.

For example, a music player might only have the states PLAYING, PAUSED, STOPPED and ENDED. So it could be defined as

  1. enum state{
  2. PLAYING, PAUSED, STOPPED, ENDED
  3. };

Now it can be accessed as state.PLAYING, state.PAUSED, etc. The advantage is that the code stays readable, unlike associating plain numbers with each state, and the variable can only take one of a fixed set of values.
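As a rough sketch of the same idea in C++ syntax (where the constants are accessed with :: rather than the dot used above); the names State and describe are only for illustration:

#include <iostream>

// A fixed set of named constants for the player state.
enum class State { PLAYING, PAUSED, STOPPED, ENDED };

const char* describe(State s)
{
    switch (s)
    {
        case State::PLAYING: return "playing";
        case State::PAUSED:  return "paused";
        case State::STOPPED: return "stopped";
        case State::ENDED:   return "ended";
    }
    return "unknown";
}

int main()
{
    State s = State::PAUSED;          // can only hold one of the four values
    std::cout << describe(s) << '\n'; // prints "paused"
    return 0;
}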

thumb_up 3 thumb_down 1 flag 0

Integrity constraints are mainly enforced to maintain data consistency in a database, as they restrict the data that can go into a table. These integrity constraints are categorized into two categories: column-level and table-level constraints. The mainly used constraints are:

not null constraint: Ensures that a column cannot have NULL value.

Default: Provides a default value for a column when none is specified.

Unique: Ensures that all values in a column are different.

Primary Key: Uniquely identifies each row/record in a database table.

Foreign Key: Uniquely identifies a row/record in another database table (it takes its values from a column of another table that is the primary key in that table).

Check constraint: The CHECK constraint ensures that all values in a column satisfy certain conditions that are mentioned in the check condition.

Index: Use to create and retrieve data from the database very quickly.

These constraints can be defined while creating the table using the CREATE TABLE statement. However, if constraints are not mentioned at the time of table creation, you can later create or drop them using the ALTER TABLE command.

thumb_up 1 thumb_down 0 flag 0

A foreign key is a column or set of columns in one table whose values must have matching values in the primary key of another (or the same) table. A foreign key is said to reference its primary key. Foreign keys are a mechanism for maintaining data integrity.

thumb_up 1 thumb_down 0 flag 0

  • First Normal Form (1NF) is a simple form of Normalization.
  • It simplifies each attribute in a relation.
  • In 1NF, there should not be any repeating group of data.
  • Each set of column must have a unique value.
  • Every attribute contains only atomic values, because a table cell cannot hold multiple values.

Employee Table using 1NF

thumb_up 0 thumb_down 0 flag 0

A functional dependency is defined as a constraint between the two sets of attributes from the database.

Functional dependency is denoted by X->Y. We also say that there is a functional dependency from X to Y or that Y is functionally dependent on X. Technically we can say "X functionally determines Y". Here, X is determinant and Y is dependent.

For example, consider a relation R with attributes F1,F2,F3,F4 and F5 as shown below:

  1. R(F1,F2,F3,F4,F5) { F1->F3   F2->F4   F1 F2->F5 }
  2. These functional dependencies specify that:
  3. (a) F1 uniquely determines F3,
  4. (b) The value of F2 uniquely determines F4 and
  5. (c) A combination of F1 and F2 uniquely determines the values of F5.

thumb_up 2 thumb_down 0 flag 0

Dynamic arrays are arrays that are allocated memory at runtime, and the memory is allocated from the heap.

Arrays created with operator new[] have dynamic storage duration and are stored on the heap (technically the "free store"). They can have any size, but we need to allocate and free them ourselves since they are not part of the stack frame:

int* foo = new int[10];
delete[] foo;

thumb_up 2 thumb_down 0 flag 0

A preprocessor is a program that takes as input a text file written using some programming language syntax and outputs another text file following the syntax of another programming language.

The purpose is usually to extend the syntax of some existing language by adding new syntactic constructs (new "instructions"). The programmer writes a program using the extended syntax, and the preprocessor then translates it into the more restrictive original dialect.

The most common use of the word preprocessor nowadays is the C preprocessor. It is the layer that understands the commands beginning with # in the C language: #define, #include, etc. These are actually not part of the C language but of another language: the C preprocessor language, whose output is the actual C source code.
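As a small illustration (an assumed example, not from the original answer), the C preprocessor rewrites the source text before the compiler ever sees it:

#include <stdio.h>              // textually pastes the contents of stdio.h here

#define PI 3.14159              // simple text substitution
#define SQUARE(x) ((x) * (x))   // macro with a parameter (still pure text)

int main(void)
{
    // After preprocessing, the next line is literally
    // printf("%f\n", 3.14159 * ((2.0) * (2.0)));
    printf("%f\n", PI * SQUARE(2.0));
    return 0;
}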

thumb_up 1 thumb_down 0 flag 0

Data Alignment:

Every data type in C/C++ has an alignment requirement (in fact it is mandated by the processor architecture, not by the language). A processor's processing word length is typically the same as its data bus size. On a 32 bit machine, the processing word size is 4 bytes.

Historically memory is byte addressable and arranged sequentially. If the memory is arranged as single bank of one byte width, the processor needs to issue 4 memory read cycles to fetch an integer. It is more economical to read all 4 bytes of integer in one memory cycle. To take such advantage, the memory will be arranged as group of 4 banks as shown in the above figure.

The memory addressing is still sequential. If bank 0 occupies an address X, then bank 1, bank 2 and bank 3 will be at addresses (X + 1), (X + 2) and (X + 3). If an integer of 4 bytes is allocated at address X (where X is a multiple of 4), the processor needs only one memory cycle to read the entire integer.

Whereas, if the integer is allocated at an address other than a multiple of 4, it spans across two rows of the banks as shown in the below figure. Such an integer requires two memory read cycles to fetch the data.

A variable's data alignment deals with the way the data stored in these banks. For example, the natural alignment of int on 32-bit machine is 4 bytes. When a data type is naturally aligned, the CPU fetches it in minimum read cycles.

Similarly, the natural alignment of short int is 2 bytes. It means, a short int can be stored in bank 0 – bank 1 pair or bank 2 – bank 3 pair. A double requires 8 bytes, and occupies two rows in the memory banks. Any misalignment of double will force more than two read cycles to fetch double data.

Note that a double variable will be allocated on an 8 byte boundary on a 32 bit machine and requires two memory read cycles. On a 64 bit machine, based on the number of banks, a double variable will be allocated on an 8 byte boundary and requires only one memory read cycle.

Structure Padding:

In C/C++, structures are used to pack data together. They don't provide any data encapsulation or data hiding features (the C++ case is an exception due to its semantic similarity with classes).

Because of the alignment requirements of various data types, every member of a structure should be naturally aligned. The members of a structure are allocated in sequentially increasing order. Let us analyze how this plays out with a small struct, sketched below.
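The following is an illustrative sketch only; the sizes assume a typical platform where char is 1 byte, int is 4 bytes, and the alignment of int is 4.

#include <cstdio>

struct A
{
    char c;   // 1 byte
              // 3 bytes of padding so that i starts on a 4-byte boundary
    int  i;   // 4 bytes
};            // sizeof(A) is typically 8, not 5

struct B
{
    int  i;   // 4 bytes
    char c;   // 1 byte
              // 3 bytes of tail padding so arrays of B stay aligned
};            // sizeof(B) is also typically 8

int main()
{
    std::printf("sizeof(A) = %zu, sizeof(B) = %zu\n", sizeof(A), sizeof(B));
    return 0;
}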

thumb_up 0 thumb_down 0 flag 0

Both are user defined data types to store data of different types as a single unit.
Now the differences are as follows.
1. The keyword struct is used to define a structure while the keyword union is used to define a union.
2. When a variable is associated with a structure, the compiler allocates memory for each member. The size of the structure is greater than or equal to the sum of the sizes of its members; the smaller members may be followed by unused slack bytes. When a variable is associated with a union, the compiler allocates memory by considering only the size of the largest member, so the size of a union is equal to the size of its largest member (see the sketch after this list).
3. Each member within a structure is assigned a unique storage location, while in a union the allocated memory is shared by the individual members of the union.
4. In a structure, the address of each member will be in ascending order, which indicates that memory for each member starts at a different offset. In a union, the address is the same for all members, which indicates that every member begins at the same offset value.
5. In a structure, altering the value of a member will not affect the other members of the structure, while in a union altering the value of any member will alter the other members' values.
6. In a structure, all members can be accessed at a time, while in a union only one member can be accessed (holds a valid value) at a time.
7. Several members of a structure can be initialized at once, while in a union only the first member can be initialized.
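A minimal sketch of points 2 and 4 above, assuming a typical platform (int 4 bytes, double 8 bytes); the names S and U are only illustrative:

#include <cstdio>

struct S { int i; double d; };   // members get separate storage (plus padding)
union  U { int i; double d; };   // members share the same storage

int main()
{
    std::printf("sizeof(struct) = %zu\n", sizeof(S));  // typically 16
    std::printf("sizeof(union)  = %zu\n", sizeof(U));  // typically 8 (largest member)

    U u;
    u.d = 3.14;     // writing d ...
    u.i = 42;       // ... and then i overwrites (part of) the same bytes
    return 0;
}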

thumb_up 6 thumb_down 0 flag 0

A typical memory representation of C program consists of following sections.

1. Text segment
2. Initialized data segment
3. Uninitialized data segment
4. Stack
5. Heap


A typical memory layout of a running process

1. Text Segment:
A text segment , also known as a code segment or simply as text, is one of the sections of a program in an object file or in memory, which contains executable instructions.

As a memory region, a text segment may be placed below the heap or stack in order to prevent heap and stack overflows from overwriting it.

Usually, the text segment is sharable so that only a single copy needs to be in memory for frequently executed programs, such as text editors, the C compiler, the shells, and so on. Also, the text segment is often read-only, to prevent a program from accidentally modifying its instructions.

2. Initialized Data Segment:
Initialized data segment, usually called simply the Data Segment. A data segment is a portion of virtual address space of a program, which contains the global variables and static variables that are initialized by the programmer.

Note that, data segment is not read-only, since the values of the variables can be altered at run time.

This segment can be further classified into initialized read-only area and initialized read-write area.

For instance the global string defined by char s[] = "hello world" in C and a C statement like int debug=1 outside the main (i.e. global) would be stored in initialized read-write area. And a global C statement like const char* string = "hello world" makes the string literal "hello world" to be stored in initialized read-only area and the character pointer variable string in initialized read-write area.

Ex: static int i = 10 will be stored in data segment and global int i = 10 will also be stored in data segment

3. Uninitialized Data Segment:
Uninitialized data segment, often called the "bss" segment, named after an ancient assembler operator that stood for "block started by symbol." Data in this segment is initialized by the kernel to arithmetic 0 before the program starts executing

uninitialized data starts at the end of the data segment and contains all global variables and static variables that are initialized to zero or do not have explicit initialization in source code.

For instance a variable declared static int i; would be contained in the BSS segment.
For instance a global variable declared int j; would be contained in the BSS segment.

4. Stack:
The stack area traditionally adjoined the heap area and grew the opposite direction; when the stack pointer met the heap pointer, free memory was exhausted. (With modern large address spaces and virtual memory techniques they may be placed almost anywhere, but they still typically grow opposite directions.)

The stack area contains the program stack, a LIFO structure, typically located in the higher parts of memory. On the standard PC x86 computer architecture it grows toward address zero; on some other architectures it grows the opposite direction. A "stack pointer" register tracks the top of the stack; it is adjusted each time a value is "pushed" onto the stack. The set of values pushed for one function call is termed a "stack frame"; A stack frame consists at minimum of a return address.

Stack, where automatic variables are stored, along with information that is saved each time a function is called. Each time a function is called, the address of where to return to and certain information about the caller's environment, such as some of the machine registers, are saved on the stack. The newly called function then allocates room on the stack for its automatic and temporary variables. This is how recursive functions in C can work. Each time a recursive function calls itself, a new stack frame is used, so one set of variables doesn't interfere with the variables from another instance of the function.

5. Heap:
Heap is the segment where dynamic memory allocation usually takes place.

The heap area begins at the end of the BSS segment and grows to larger addresses from there.The Heap area is managed by malloc, realloc, and free, which may use the brk and sbrk system calls to adjust its size (note that the use of brk/sbrk and a single "heap area" is not required to fulfill the contract of malloc/realloc/free; they may also be implemented using mmap to reserve potentially non-contiguous regions of virtual memory into the process' virtual address space). The Heap area is shared by all shared libraries and dynamically loaded modules in a process.
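Putting the segments together, here is a hedged sketch of where typical C/C++ declarations usually end up; the segment names follow the description above, and the exact placement is implementation-defined:

#include <cstdlib>

int g_init = 10;            // initialized data segment
int g_uninit;               // BSS (uninitialized data) segment
static int s_counter;       // BSS as well (static, zero-initialized)
const char* msg = "hello";  // the literal "hello" goes in read-only data,
                            // the pointer msg in the read-write data segment

int main()                  // the machine code itself lives in the text segment
{
    int local = 5;                              // stack (automatic variable)
    int* dyn = (int*)std::malloc(sizeof(int));  // heap (dynamic allocation)
    *dyn = local + g_init + g_uninit + s_counter;
    std::free(dyn);
    return 0;
}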

thumb_up 0 thumb_down 0 flag 0

Let us understand this concept using two tables as shown below:

As we can clearly notice that the data in Table-1 is not functionally dependent. DBMS is the favorite subject of Abhishek in this case but this does not mean that each and every student with name Abhishek will have DBMS as his favorite subject, i.e. they are not functionally dependent on each other. Hence we can say that none of the attributes (Roll number, Name, Favorite Subject) in Table-1 is functionally dependent on each other.

Now coming to table-2, attributes 'Duration' and 'Fee' are functionally dependent on the attribute 'Course Name'. For example, if the course selected will be Oracle then the attributes 'Duration' and 'Fee' will always be 6 months and 5000 respectively. So here we can say that the attributes 'Duration' and 'Fee' are functionally dependent on the attribute 'Course Name'.

Functional dependency is denoted by X->Y. We also say that there is a functional dependency from X to Y or that Y is functionally dependent on X. Technically we can say "X functionally determines Y". Here, X is determinant and Y is dependent.

For example, consider a relation R with attributes F1,F2,F3,F4 and F5 as shown below:

R(F1, F2, F3, F4, F5) { F1->F3, F2->F4, F1F2->F5 }

These functional dependencies specify that:
(a) F1 uniquely determines F3,
(b) the value of F2 uniquely determines F4, and
(c) a combination of F1 and F2 uniquely determines the value of F5.

Now you can easily define functional dependency in your own words, as you are well aware of the concept. Still, here is the technical definition of functional dependency.

Definition: A functional dependency is defined as a constraint between the two sets of attributes from the database.

Note: A functional dependency cannot be inferred automatically; it must be defined explicitly by someone who knows the meanings of the attributes of a relation R. Database designers use their understanding of the meanings of the attributes of the relation R.

1.1 Inference Rules For Functional Dependencies

1) If Y is subset of X then X->Y (Reflexive Rule)
2) If X->Y and Y->Z then X->Z (Transitive Rule)
3) If X->YZ then X->Y and X->Z (Decomposition Rule)
4) If X->Y and X->Z then X->YZ (Additive Rule)
5) If X->Y then XZ->YZ (Augmentation Rule)

Note: If XY->Z, this does not mean X->Z and Y->Z (differentiate this from the decomposition rule). The meaning of XY->Z is that a combination of X and Y uniquely determines Z; neither X nor Y alone can determine Z.

thumb_up 0 thumb_down 0 flag 0

The concept of a Lossless-Join Decomposition is central in removing redundancy safely from databases while preserving the original data.

Lossless decomposition can also be called non-additive. If we decompose a relation R into relations R1 and R2, the decomposition is a lossless join if R1 ⋈ R2 = R, i.e. the natural join of R1 and R2 gives back exactly R.

If R is split into R1 and R2, for the decomposition to be lossless then at least one of the two should hold true.

Projecting on R1 and R2, and joining back, results in the relation you started with. Let R be a relation schema.

Let F be a set of functional dependencies on R.

Let R1 and R2 form a decomposition of R.

The decomposition is a lossless-join decomposition of R if at least one of the following functional dependencies is in F+ (where F+ stands for the closure of F): R1 ∩ R2 → R1, or R1 ∩ R2 → R2.

see Lossless decomposition

thumb_up 0 thumb_down 0 flag 0

A relationship type represents the association between entity types. For example,'Enrolled in' is a relationship type that exists between entity type Student and Course. In ER diagram, relationship type is represented by a diamond and connecting the entities with lines.

A set of relationships of same type is known as relationship set. The following relationship set depicts S1 is enrolled in C2, S2 is enrolled in C1 and S3 is enrolled in C3.

Degree of a relationship set

The number of different entity sets participating in a relationship set is called as degree of a relationship set.

Unary Relationship

When there is only ONE entity set participating in a relation, the relationship is called as unary relationship. For example, one person is married to only one person.


Binary Relationship

When there are TWO entity sets participating in a relation, the relationship is called a binary relationship. For example, Student is enrolled in Course.


n-ary Relationship

When there are n entity sets participating in a relation, the relationship is called an n-ary relationship.

Cardinality

The number of times an entity of an entity set participates in a relationship set is known as cardinality. Cardinality can be of different types:

one to one: When each entity in each entity set can take part only once in the relationship, the cardinality is one to one. Let us assume that a male can marry to one female and a female can marry to one male. So the relationship will be one to one.


Using Sets, it can be represented as:


 many to one: When entities in one entity set can take part only once in the relationship set and entities in the other entity set can take part more than once in the relationship set, the cardinality is many to one. Let us assume that a student can take only one course, but one course can be taken by many students. So the cardinality will be n to 1. It means that for one course there can be n students, but for one student there will be only one course. Using sets, it can be represented as:


In this case, each student is taking only 1 course but 1 course has been taken by many students.

many to many: When entities in all entity sets can take part more than once in the relationship cardinality is many to many. Let us assume that a student can take more than one course and one course can be taken by many students. So the relationship will be many to many.


Using sets, it can be represented as:


In this example, student S1 is enrolled in C1 and C3 and Course C3 is enrolled by S1, S3 and S4. So it is many to many relationships.

Participation Constraint

Participation Constraint is applied on the entity participating in the relationship set.

See ER Model

thumb_up 1 thumb_down 0 flag 0

The term Generalization refers to a bottom-up approach in which two lower level entities combine to form a higher level entity. In generalization, the higher level entity can also combine with other lower level entities to make a further higher level entity.

Eg: A savings account and a current account are lower level entities that can be generalized into the higher level entity Account; moving from the lower level entities to the higher level one is the bottom-up approach.

thumb_up 14 thumb_down 0 flag 0

A batsman can score a maximum of 1653 runs in a ODI innings of 50 overs, assuming not a single extra run is given.  Let us assume the batsman is Sachin Tendulkar.  He takes strike, hits the first five balls for sixes, takes three off the sixth ball and retains the strike for the second over.  He does exactly the same on 49 occasions.  So that is 33 times 49 which is equal to 1617 runs.  Off the 50th over he hits six sixes and completes the innings.  Sachin will remain undefeated on 1653.

thumb_up 0 thumb_down 2 flag 0

Weak Entity Sets

  1. They don't have any primary key of their own so they depend on a primary key from a strong entity set to which they are related.
  2. They must be in a 1:M relation.
  3. A member of a weak entity is called a subordinate entity.

There are two types of weak entities: associative entities and sub-type entities.

1. Associative entities :

An associative entity is a term used in relational and entity-relationship theory. A relational database requires the implementation of a base relation (or base table) to resolve many-to-many relationships. This kind of base relation is called an associative table.

An associative entity (using Chen notation)

As mentioned above, associative entities are implemented in a database structure using associative tables, which are tables that can contain references to columns from the same or different database tables within the same database.

2. Sub-type entities  :

  • Complete subtype relationship, when all categories are known.
  • Incomplete subtype relationship, when all categories may not be known.

A classic example of a weak entity without a sub-type relationship would be the "header/detail" records in many real world situations such as claims, orders and invoices, where the header captures information common across all forms and the detail captures information specific to individual items.

See Weak entity

thumb_up 3 thumb_down 0 flag 0

1. Malloc : malloc() allocates memory block of given size (in bytes) and returns a pointer to the beginning of the block.

              void * malloc( size_t size );

malloc() doesn't initialize the allocated memory.

2. Calloc : calloc() allocates the memory and also initializes the allocated memory to zero.

              void * calloc( size_t num, size_t size );

3. Realloc :

Size of dynamically allocated memory can be changed by using realloc().

As per the C99 standard:

              void *realloc(void *ptr, size_t size);            

realloc deallocates the old object pointed to by ptr and returns a pointer to a new object that has the size specified by size. The contents of the new object is identical to that of the old object prior to deallocation, up to the lesser of the new and old sizes. Any bytes in the new object beyond the size of the old object have indeterminate values.

The point to note is that realloc() should only be used for dynamically allocated memory . If the memory is not dynamically allocated, then behavior is undefined.

4. free :  free() function is used to free memory allocated using malloc() or calloc() or realloc().

              void free(void *ptr);            
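A short usage sketch tying the four calls together (written as C++ with <cstdlib>; error handling kept minimal, names are illustrative):

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main()
{
    int* a = (int*)std::malloc(5 * sizeof(int));     // uninitialized block of 5 ints
    int* b = (int*)std::calloc(5, sizeof(int));      // block of 5 ints, all zero
    if (a == NULL || b == NULL) return 1;

    std::memset(a, 0, 5 * sizeof(int));              // malloc'd memory must be set by hand

    int* bigger = (int*)std::realloc(a, 10 * sizeof(int)); // grow a to 10 ints
    if (bigger != NULL) a = bigger;                  // old contents (first 5 ints) preserved

    std::free(a);                                    // release everything we allocated
    std::free(b);
    return 0;
}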

See memory management

thumb_up 0 thumb_down 6 flag 0

1. 7 (length of the string, not counting the terminating null character)

2. 8 (length including the terminating null character)

thumb_up 2 thumb_down 0 flag 0

strlen() is used to get the length of an array of chars / string.

sizeof() is used to get the actual size of any type of data in bytes.

Besides, sizeof() is a compile-time expression giving us the size of a type or a variable's type. It doesn't care about the value of the variable.

strlen() is a function that takes a pointer to a character, and walks the memory from this character on, looking for a NULL character. It counts the number of characters before it finds the NULL character. In other words, it gives us the length of a C-style NULL-terminated string.
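A small sketch of the difference (sizes shown in the comments assume a typical 64-bit platform):

#include <cstdio>
#include <cstring>

int main()
{
    char s[20] = "hello";

    std::printf("strlen(s) = %zu\n", std::strlen(s));  // 5  : characters before the '\0'
    std::printf("sizeof(s) = %zu\n", sizeof(s));       // 20 : size of the whole array

    const char* p = s;
    std::printf("sizeof(p) = %zu\n", sizeof(p));       // size of a pointer (e.g. 8), not of the string
    return 0;
}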

thumb_up 3 thumb_down 0 flag 0

3G: digital, supported data, but still circuit switched
UMTS / WCDMA, EvDO
Includes data but still on a circuit switched architecture

3.5G: faster data, added true always-on / packet data (HSPA)

Data rates of 2Mbps-tens of Mbps

NB The USA and few other places had a rival technology ("CDMA"). IS95, EvDO.
That was marketed as 3G when it was launched, but initially it was probably closer to 2.5G and was then upgraded to 3G with the EvDO upgrade.

At least 200Kbps up to 3Mbps speed.

4G: wireless broadband
OFDMA, flat architecture, true packet switched
Pure data: voice as VoIP (VoLTE)
Most people say this is LTE & WiMAX (though some people are waiting for an upgrade to LTE-A, based on a rather silly data rate definition).

4.5G: term not widely used, but some people say it is LTE-A

Data rates of Tens of Mbps - Hundreds of Mbps

4G delivers up to 100Mbps for mobile access, and up to 1Gbps for wireless access. Most wireless carriers offering HSPA (High Speed Packet Access) at up to 6Mbps claim that they offer a 4G network.

3G as compared to 4G

  • Lower data speed
  • Low quality
  • No LTE model
  • Falls back to 2G while calling

4G compared to 3G

  • High speed data
  • High quality
  • Available LTE and VoLTE model
  • Crystal clarity call while support VoLTE
  • And many more

thumb_up 1 thumb_down 0 flag 0

A participation constraint defines the number of times an object in an object class can participate in a connected relationship set. Every connection of a relationship set must have a participation constraint. However, participation constraints do not apply to relationships.

The basic form for a participation constraint is min:max, where min is a non negative integer, and max is either a non negative integer or a star (*). The star designates an arbitrary non negative number greater than min. The most common participation constraints are 0:1, 1:1, 0:*, and 1:*.

thumb_up 0 thumb_down 0 flag 0

#include<iostream>

#include<climits>

using namespace std;


// A BST node

struct Node

{

    int key;

    Node *left, *right;

};


// A function to find

int KSmallestUsingMorris(Node *root, int k)

{

    // Count to iterate over elements till we

    // get the kth smallest number

    int count = 0;


int ksmall = INT_MIN; // store the Kth smallest

    Node *curr = root; // to store the current node


while (curr != NULL)

    {

        // Like Morris traversal if current does

        // not have left child rather than printing

        // as we did in inorder, we will just

        // increment the count as the number will

        // be in an increasing order

        if (curr->left == NULL)

        {

            count++;


// if count is equal to K then we found the

            // kth smallest, so store it in ksmall

            if (count==k)

                ksmall = curr->key;


// go to current's right child

            curr = curr->right;

        }

        else

        {

            // we create links to Inorder Successor and

            // count using these links

            Node *pre = curr->left;

            while (pre->right != NULL && pre->right != curr)

                pre = pre->right;


// building links

            if (pre->right==NULL)

            {

                //link made to Inorder Successor

                pre->right = curr;

                curr = curr->left;

            }


// While breaking the links in so made temporary

            // threaded tree we will check for the K smallest

            // condition

            else

            {

                // Revert the changes made in if part (break link

                // from the Inorder Successor)

                pre->right = NULL;


count++;


// If count is equal to K then we found

                // the kth smallest and so store it in ksmall

                if (count==k)

                    ksmall = curr->key;


curr = curr->right;

            }

        }

    }

    return ksmall; //return the found value

}


// A utility function to create a new BST node

Node *newNode(int item)

{

    Node *temp = new Node;

    temp->key = item;

    temp->left = temp->right = NULL;

    return temp;

}


/* A utility function to insert a new node with given key in BST */

Node* insert(Node* node, int key)

{

    /* If the tree is empty, return a new node */

    if (node == NULL) return newNode(key);


/* Otherwise, recur down the tree */

    if (key < node->key)

        node->left  = insert(node->left, key);

    else if (key > node->key)

        node->right = insert(node->right, key);


/* return the (unchanged) node pointer */

    return node;

}


// Driver Program to test above functions

int main()

{

    /* Let us create following BST

              50

           /     \

          30      70

         /  \    /  \

       20   40  60   80 */

    Node *root = NULL;

    root = insert(root, 50);

    insert(root, 30);

    insert(root, 20);

    insert(root, 40);

    insert(root, 70);

    insert(root, 60);

    insert(root, 80);


for (int k=1; k<=7; k++)

       cout << KSmallestUsingMorris(root, k) << " ";


return 0;

}

thumb_up 0 thumb_down 0 flag 0

class FindTriplet
{

    // returns true if there is triplet with sum equal
// to 'sum' present in A[]. Also, prints the triplet
boolean find3Numbers(int A[], int arr_size, int sum)
{
int l, r;

        /* Sort the elements */
quickSort(A, 0, arr_size - 1);

        /* Now fix the first element one by one and find the
other two elements */
for (int i = 0; i < arr_size - 2; i++)
{
// To find the other two elements, start two index variables
// from two corners of the array and move them toward each
// other
l = i + 1; // index of the first element in the remaining elements
r = arr_size - 1; // index of the last element
while (l < r)
{
if (A[i] + A[l] + A[r] == sum)
{
System.out.print("Triplet is " + A[i] + " ," + A[l]
+ " ," + A[r]);
return true;
}
else if (A[i] + A[l] + A[r] < sum)
l++;

                else // A[i] + A[l] + A[r] > sum
r--;
}
}

        // If we reach here, then no triplet was found
return false;
}

    int partition(int A[], int si, int ei)
{
int x = A[ei];
int i = (si - 1);
int j;

        for (j = si; j <= ei - 1; j++)
{
if (A[j] <= x)
{
i++;
int temp = A[i];
A[i] = A[j];
A[j] = temp;
}
}
int temp = A[i + 1];
A[i + 1] = A[ei];
A[ei] = temp;
return (i + 1);
}

    /* Implementation of Quick Sort
A[] --> Array to be sorted
si  --> Starting index
ei  --> Ending index
*/
void quickSort(int A[], int si, int ei)
{
int pi;

        /* Partitioning index */
if (si < ei)
{
pi = partition(A, si, ei);
quickSort(A, si, pi - 1);
quickSort(A, pi + 1, ei);
}
}

    // Driver program to test above functions
public static void main(String[] args)
{
FindTriplet triplet = new FindTriplet();
int A[] = {1, 4, 45, 6, 10, 8};
int sum = 22;
int arr_size = A.length;

        triplet.find3Numbers(A, arr_size, sum);
}
}

thumb_up 0 thumb_down 1 flag 0

#include <stdio.h>
int main()
{
int range,test;
int sum = 2;
int n = 3;
// printf("Enter the range.");
scanf("%i",&range);
while (range > 1)
{
int i =2;
while(i<n)
{
test = n%i;
if (test==0)
{
goto end;
}
i++;
}
if (test != 0)
{
sum = sum + n;
range--;
}
end:
n++;
}
printf("The sum is %i",sum);
return 0;
}

thumb_up 0 thumb_down 0 flag 0

Hibernate is an ORM (Object Relational Mapping) Framework.

It just makes DB connections/interactions much easier and reduces the dependency of the application on a particular DBMS, say MySQL/Oracle/DB2.

In general, when we want to interact with a database we need to know SQL, and if our database changes we might need to change the SQL a little bit. Hibernate lets us get rid of this problem.
Assume that we have a Java class Student. Student will have a few attributes like name, age and birthday, and we have created a web app for registering student details.

Without hibernate
We have to create a student table and define columns like name, age and birthday. The next step will be writing insert queries. We have to take care of transaction management manually, like rollback, commit and so on. If we want to switch to Oracle instead of Microsoft SQL Server, we might need to change the SQL because it is somewhat different.

With hibernate
We will create the object-to-table mapping in the Hibernate configuration file. This way Hibernate knows which class is mapped to which table and which attributes are mapped to which columns. The configuration takes more info than just the class and table mapping: it also takes database connection info, the dialect and more.

Now when we save the Student class, Hibernate generates the SQL to insert or update the details of the student. This is what an ORM tool does, and Hibernate is one of them.

thumb_up 4 thumb_down 1 flag 0

Spring is the most popular framework when building Java Enterprise Edition (EE) applications. Its core feature is dependency injection which results in inversion of control.

Best thing about Spring is we can have numerous tools and APIs in our Java project and there would be one thing keeping them together, Spring.

Dependency Injection / Inversion of Control

Suppose we create a Vehicle interface and there are Car, Bus and Bike classes implementing it.

To use functionalities of a class Car in our implementation class we would declare it as -

Vehicle vehicle = new Car();

The problem here is that vehicle is hard coded as a Car and if someone says that they want implementation as a Bike, we would have to change our code.

But with Spring, we would just declare a vehicle object as-

Vehicle vehicle;

In this case, Spring would provide the flexibility of instantiating the vehicle as a Car, Bus or Bike with a change in some configuration file, without touching the code in implementation file.

This is the core feature of Spring but it's just one of the myriad features it provides.

  • Need to access the database from Java application? Use Spring's JDBC or ORM modules.
  • Need to insert the logging feature in our Java application? Use Spring's AOP module.
  • Need to create a web based Java application? Use Spring's Web Module.
  • Need to perform unit and integration testing? Use Spring's Test module.
  • Need to create a messaging-based application? Use Spring's Messaging module.
  • Need to implement security features in your Java application? Use Spring's Security module.

Spring is the most comprehensive framework we could use to build our Enterprise level Java application.

thumb_up 1 thumb_down 0 flag 0

#include <stdio.h>

#define my_sizeof(type) ((char *)(&type + 1) - (char *)(&type))

int main()
{
    double x;
    printf("%d", (int)my_sizeof(x));
    getchar();
    return 0;
}

We can also implement using function instead of macro, but function implementation cannot be done in C as C doesn't support function overloading and sizeof() is supposed to receive parameters of all data types.

See implementation of sizeof() operator

thumb_up 1 thumb_down 0 flag 0

When we drop an egg from a floor x, there can be two cases (1) The egg breaks (2) The egg doesn't break.

1) If the egg breaks after dropping from xth floor, then we only need to check for floors lower than x with remaining eggs; so the problem reduces to x-1 floors and n-1 eggs
2) If the egg doesn't break after dropping from the xth floor, then we only need to check for floors higher than x; so the problem reduces to k-x floors and n eggs.

Since we need to minimize the number of trials in worst case, we take the maximum of two cases. We consider the max of above two cases for every floor and choose the floor which yields minimum number of trials.

k ==> Number of floors
n ==> Number of eggs
eggDrop(n, k) ==> Minimum number of trials needed to find the critical
                  floor in the worst case.

eggDrop(n, k) = 1 + min{ max(eggDrop(n-1, x-1), eggDrop(n, k-x)) : x in {1, 2, ..., k} }
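A direct translation of this recurrence into a bottom-up table might look like the following sketch; the function name eggDrop and the dp table are illustrative, not from the original answer:

#include <algorithm>
#include <climits>
#include <cstdio>
#include <vector>

// Minimum number of trials needed in the worst case with n eggs and k floors,
// computed bottom-up from the recurrence above.
int eggDrop(int n, int k)
{
    // dp[i][j] = minimum trials with i eggs and j floors
    std::vector<std::vector<int>> dp(n + 1, std::vector<int>(k + 1, 0));

    for (int j = 1; j <= k; ++j) dp[1][j] = j;   // one egg: try floors one by one
    for (int i = 1; i <= n; ++i) dp[i][1] = 1;   // one floor: one trial

    for (int i = 2; i <= n; ++i)
    {
        for (int j = 2; j <= k; ++j)
        {
            dp[i][j] = INT_MAX;
            for (int x = 1; x <= j; ++x)
            {
                // egg breaks    -> i-1 eggs, x-1 floors below
                // egg survives  -> i eggs,   j-x floors above
                int worst = 1 + std::max(dp[i - 1][x - 1], dp[i][j - x]);
                dp[i][j] = std::min(dp[i][j], worst);
            }
        }
    }
    return dp[n][k];
}

int main()
{
    std::printf("%d\n", eggDrop(2, 100));   // prints 14, matching the classic 2-egg answer
    return 0;
}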

See Extended Egg Dropping Puzzle

thumb_up 0 thumb_down 0 flag 0

#include<stdio.h>

#define bool int


/* Function to check if x is power of 2*/

bool isPowerOfTwo (int x)

{

  /* First x in the below expression is for the case when x is 0 */

  return x && (!(x&(x-1)));

}


/*Driver program to test above function*/

int main()

{

  isPowerOfTwo(31)? printf("Yes\n"): printf("No\n");

  return 0;

}

Output

No

thumb_up 1 thumb_down 0 flag 0

Let us make our first attempt on the x'th floor.

If it breaks, we try remaining (x-1) floors one by one.
So in worst case, we make x trials.

If it doesn't break, we jump (x-1) floors (Because we have
already made one attempt and we don't want to go beyond
x attempts.  Therefore (x-1) attempts are available),
Next floor we try is floor x + (x-1)

Similarly, if this drop does not break, next need to jump
up to floor x + (x-1) + (x-2), then x + (x-1) + (x-2) + (x-3)
and so on.

Since the last floor to be tried is the 100'th floor, the sum of the
series should be 100 for the optimal value of x.

 x + (x-1) + (x-2) + (x-3) + .... + 1  = 100

 x(x+1)/2  = 100
x = 13.651

Therefore, we start trying from 14'th floor. If Egg breaks
we one by one try remaining 13 floors.  If egg doesn't break
we go to 27th floor.
If the egg breaks on the 27'th floor, we try floors from 15 to 26.
If egg doesn't break on 27'th floor, we go to 39'th floor.

And so on...

The optimal number of trials is 14 in worst case.

thumb_up 1 thumb_down 1 flag 0

The singleton pattern is for classes which should only ever have one instance.  I.e. there's still an object instance that gets created and used - it's simply the only one of its kind - a singleton - and there should never be another instance of that same class, by design.

A Singleton is considered better than a pure static class in object-oriented design because the single instance can be treated like an ordinary class instance; it can be passed as a parameter to methods or other class constructors, it can be set as a reference in a property, etc etc. A pure static forces the code to deal with the object statically instead of as an instance, which runs counter to several key concepts of object-oriented design.

Example :

    a <- [System new]
-> System
b <- [System new]
-> System
a==b
-> True

Here's a simple pattern for a Singleton in .NET:

    public sealed class Singleton
{
private static readonly Singleton instance = new Singleton();

        static Singleton()
{
}

        private Singleton()
{
}

        public static Singleton Instance
{
get { return instance; }
}
}

thumb_up 3 thumb_down 0 flag 0

Design patterns represent the best practices used by experienced  object-oriented software developers. Design patterns are solutions to  general problems that software developers faced during software development.

In software engineering, a design pattern is a general repeatable solution to a commonly occurring problem in software design. A design pattern isn't a finished design that can be transformed directly into code. It is a description or template for how to solve a problem that can be used in many different situations.

Types of design patterns:

  • Creational Patterns
  • Structural Patterns
  • Behavioral Patterns
  • J2EE Patterns

thumb_up 3 thumb_down 0 flag 0

In Java, we have 3 different kinds of Design Patterns.

  1. Creational - How objects are created
  2. Behavioral - How objects interact (behave) with each other
  3. Structural - How objects are structured or laid out (relation with each other)

Factory Design Pattern is one of the creational design patterns. It is to help us with the creation of an object at runtime without the client needing to know the internal implementation details (not even the name of the class). For this, the factory needs to have one additional but mandatory input - which indicates what type of object we want to create.

For example, we have 2 different Databases in our application. Say MySQL and Oracle and we want to work with both but we would decide at runtime which database we want to work with. In this scenario, we go with a Factory Pattern by providing the name of the database we want to work with, as an input parameter.
For example :

    public DAOInstance getDAOInstance(String input)
{
if(input.equalsIgnoreCase("Oracle"))
{
return new OracleDAO();
}
else if (input.equalsIgnoreCase("MySQL"))
{
return new MySQLDAO();
}
else
return null;
}

thumb_up 2 thumb_down 1 flag 0

Inheritance in OOP is a mechanism in which one object acquires all the properties and behaviors of a parent object.

Inheritance represents the IS-A relationship, also known as parent-child relationship.

Why use inheritance in java

  • For Method Overriding (so runtime polymorphism can be achieved).
  • For Code Reusability.

thumb_up 7 thumb_down 0 flag 0

If the graph has cycles with negative scores, then you can get an arbitrarily large negative number as total score (by following the cycle again and again), so there is no single optimum path anymore.

If there are negative weights but no cycles, then there is still a problem with Dijkstra's algorithm: it assumes that the first path you find to any node is always the shortest path. But if there are negative weights, you might later find improvements of the route to a node (via negative edges); this improvement is not handled correctly in Dijkstra's algorithm.

Bellman-Ford can handle negative edges so long as there are no negative cycles; it can report if such cycles exist (in that case, you don't really have a problem since you can find an arbitrarily good route by repeatedly following the cycle).
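For reference, a compact Bellman-Ford sketch with negative-cycle detection (edge-list representation; the struct and function names are illustrative):

#include <climits>
#include <cstdio>
#include <vector>

struct Edge { int from, to, weight; };

// Returns false if a negative cycle reachable from src exists;
// otherwise fills dist with shortest path costs from src.
bool bellmanFord(int n, const std::vector<Edge>& edges, int src, std::vector<long long>& dist)
{
    const long long INF = LLONG_MAX / 4;
    dist.assign(n, INF);
    dist[src] = 0;

    // Relax every edge n-1 times: enough for any shortest path without cycles.
    for (int pass = 0; pass < n - 1; ++pass)
        for (const Edge& e : edges)
            if (dist[e.from] < INF && dist[e.from] + e.weight < dist[e.to])
                dist[e.to] = dist[e.from] + e.weight;

    // One more pass: any further improvement means a negative cycle.
    for (const Edge& e : edges)
        if (dist[e.from] < INF && dist[e.from] + e.weight < dist[e.to])
            return false;
    return true;
}

int main()
{
    std::vector<Edge> edges = { {0, 1, 4}, {0, 2, 5}, {1, 2, -3}, {2, 3, 2} };
    std::vector<long long> dist;
    if (bellmanFord(4, edges, 0, dist))
        std::printf("dist to 3 = %lld\n", dist[3]);   // 4 + (-3) + 2 = 3
    return 0;
}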

thumb_up 0 thumb_down 0 flag 0

A pure virtual function (or abstract function) in C++ is a virtual function for which we don't have implementation, we only declare it. A pure virtual function is declared by assigning 0 in declaration. See the following example.

// An abstract class

class Test

{

// Data members of class

public :

// Pure Virtual Function

virtual void show() = 0;

/* Other members */

};
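A derived class must provide the implementation before any object can be created; a brief illustrative continuation of the example above (the class name Derived is only for illustration):

#include <iostream>

class Test
{
public:
    virtual void show() = 0;     // pure virtual: Test is abstract
    virtual ~Test() {}
};

class Derived : public Test
{
public:
    void show() override { std::cout << "In Derived::show()\n"; }
};

int main()
{
    // Test t;               // error: cannot instantiate an abstract class
    Derived d;
    Test* p = &d;
    p->show();               // dispatched at runtime to Derived::show()
    return 0;
}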

See Pure Virtual Functions

thumb_up 1 thumb_down 1 flag 0

Prim's algorithm is also a Greedy algorithm. It starts with an empty spanning tree. The idea is to maintain two sets of vertices. The first set contains the vertices already included in the MST, the other set contains the vertices not yet included. At every step, it considers all the edges that connect the two sets, and picks the minimum weight edge from these edges. After picking the edge, it moves the other endpoint of the edge to the set containing MST.
A group of edges that connects two sets of vertices in a graph is called a cut in graph theory. So, at every step of Prim's algorithm, we find a cut (of two sets, one containing the vertices already included in the MST and the other containing the rest of the vertices), pick the minimum weight edge from the cut and include the corresponding vertex in the MST set (the set that contains the already included vertices).

How does Prim's Algorithm work? The idea behind Prim's algorithm is simple: a spanning tree means all vertices must be connected. So the two disjoint subsets of vertices (discussed above) must be connected to make a Spanning Tree, and they must be connected with the minimum weight edge to make it a Minimum Spanning Tree.

Algorithm
1) Create a set mstSet that keeps track of vertices already included in MST.
2) Assign a key value to all vertices in the input graph. Initialize all key values as INFINITE. Assign key value as 0 for the first vertex so that it is picked first.
3) While mstSet doesn't include all vertices
….a) Pick a vertex u which is not there in mstSet and has minimum key value.
….b) Include u to mstSet.
….c) Update key value of all adjacent vertices of u. To update the key values, iterate through all adjacent vertices. For every adjacent vertex v, if weight of edge u-v is less than the previous key value of v, update the key value as weight of u-v

The idea of using key values is to pick the minimum weight edge from cut. The key values are used only for vertices which are not yet included in MST, the key value for these vertices indicate the minimum weight edges connecting them to the set of vertices included in MST.

Let us understand with the following example:

The set mstSet is initially empty and keys assigned to vertices are {0, INF, INF, INF, INF, INF, INF, INF} where INF indicates infinite. Now pick the vertex with minimum key value. The vertex 0 is picked, include it in mstSet. So mstSet becomes {0}. After including to mstSet, update key values of adjacent vertices. Adjacent vertices of 0 are 1 and 7. The key values of 1 and 7 are updated as 4 and 8. Following subgraph shows vertices and their key values, only the vertices with finite key values are shown. The vertices included in MST are shown in green color.

Pick the vertex with minimum key value and not already included in MST (not in mstSET). The vertex 1 is picked and added to mstSet. So mstSet now becomes {0, 1}. Update the key values of adjacent vertices of 1. The key value of vertex 2 becomes 8.

Pick the vertex with minimum key value and not already included in MST (not in mstSET). We can either pick vertex 7 or vertex 2, let vertex 7 is picked. So mstSet now becomes {0, 1, 7}. Update the key values of adjacent vertices of 7. The key value of vertex 6 and 8 becomes finite (7 and 1 respectively).

Pick the vertex with minimum key value and not already included in MST (not in mstSET). Vertex 6 is picked. So mstSet now becomes {0, 1, 7, 6}. Update the key values of adjacent vertices of 6. The key value of vertex 5 and 8 are updated.

We repeat the above steps until mstSet includes all vertices of given graph. Finally, we get the following graph.
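A compact sketch of the algorithm above for a graph given as an adjacency matrix (0 meaning "no edge"); the function names and the sample graph are illustrative:

#include <cstdio>
#include <vector>

const int V = 5;

// Find the vertex with the minimum key value among those not yet in the MST.
int minKey(const std::vector<int>& key, const std::vector<bool>& inMST)
{
    int best = -1;
    for (int v = 0; v < V; ++v)
        if (!inMST[v] && (best == -1 || key[v] < key[best]))
            best = v;
    return best;
}

void primMST(int graph[V][V])
{
    std::vector<int>  key(V, 1 << 30);   // "infinite" key values
    std::vector<int>  parent(V, -1);     // stores the constructed MST
    std::vector<bool> inMST(V, false);

    key[0] = 0;                          // start from vertex 0

    for (int count = 0; count < V; ++count)
    {
        int u = minKey(key, inMST);      // pick the minimum-key vertex not in the MST
        inMST[u] = true;

        // Update keys of adjacent vertices that are not yet in the MST.
        for (int v = 0; v < V; ++v)
            if (graph[u][v] && !inMST[v] && graph[u][v] < key[v])
            {
                key[v] = graph[u][v];
                parent[v] = u;
            }
    }

    for (int v = 1; v < V; ++v)
        std::printf("%d - %d  (weight %d)\n", parent[v], v, graph[v][parent[v]]);
}

int main()
{
    int graph[V][V] = { {0, 2, 0, 6, 0},
                        {2, 0, 3, 8, 5},
                        {0, 3, 0, 0, 7},
                        {6, 8, 0, 0, 9},
                        {0, 5, 7, 9, 0} };
    primMST(graph);
    return 0;
}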

thumb_up 2 thumb_down 0 flag 0

Given a connected and undirected graph, a spanning tree of that graph is a subgraph that is a tree and connects all the vertices together. A single graph can have many different spanning trees. A minimum spanning tree (MST) or minimum weight spanning tree for a weighted, connected and undirected graph is a spanning tree with weight less than or equal to the weight of every other spanning tree. The weight of a spanning tree is the sum of weights given to each edge of the spanning tree.

thumb_up 2 thumb_down 1 flag 0

  1. In case of a waterfall model, the stages are in a sequence. Once one stage is over, the process moves to the next phase. Hence, accommodating changes in the middle is difficult. On the other hand, accommodating changes in requirement is easy in a spiral model. This is because we need not start from the beginning because prototypes are created in every phase.
  2. Secondly, because of the sequential nature of the waterfall model, if a mistake is made in the middle of a phase, we may need to begin from scratch. In the spiral model, on the other hand, because the prototypes are tested at the end of each phase, mistakes can be handled.
  3. Waterfall model is best suited for large projects. Spiral model is best suited in case of complex projects.

thumb_up 0 thumb_down 1 flag 0

Yes, an array with 10,000 elements filled with 1, 2, 3 can be sorted in O(n) using Dutch National Flag Algorithm, or 3-way Partitioning

// Java program to sort an array of 1, 2 and 3

import java.io.*;

class countzot {

    // Sort the input array, the array is assumed to

    // have values in {1, 2, 3}

    static void sort123(int a[], int arr_size)

    {

        int lo = 0;

        int hi = arr_size - 1;

        int mid = 0,temp=0;

        while (mid <= hi)

        {

            switch (a[mid])

            {

            case 1:

            {

                temp   =  a[lo];

                a[lo]  = a[mid];

                a[mid] = temp;

                lo++;

                mid++;

                break;

            }

            case 2:

                mid++;

                break;

            case 3:

            {

                temp = a[mid];

                a[mid] = a[hi];

                a[hi] = temp;

                hi--;

                break;

            }

            }

        }

    }

    /* Utility function to print array arr[] */

    static void printArray(int arr[], int arr_size)

    {

        int i;

        for (i = 0; i < arr_size; i++)

            System.out.print(arr[i]+" ");

        System.out.println("");

    }


    /*Driver function to check for above functions*/

    public static void main (String[] args)

    {

        int arr[] = {3, 1, 1, 3, 1, 2, 1, 2, 3, 3, 3, 1};

        int arr_size = arr.length;

        sort123(arr, arr_size);

        System.out.println("Array after seggregation ");

        printArray(arr, arr_size);

    }

}

thumb_up 0 thumb_down 0 flag 0

Time complexity of traversing an n*n array is O(n*n).

thumb_up 0 thumb_down 0 flag 0

Pointers store address of variables or a memory location. So, a pointer is a variable which contains the address in memory of another variable. We can have a pointer to any variable type.

// General syntax
datatype *var_name;

// An example pointer "ptr" that holds
// the address of an integer variable, i.e. holds
// the address of memory whose value(s) can
// be accessed as integer values through "ptr"
int *ptr;
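A tiny usage sketch (the variable names are illustrative):

#include <cstdio>

int main()
{
    int value = 42;
    int* ptr = &value;          // ptr holds the address of value

    std::printf("%d\n", *ptr);  // dereferencing: prints 42
    *ptr = 7;                   // writes through the pointer
    std::printf("%d\n", value); // prints 7
    return 0;
}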

thumb_up 1 thumb_down 0 flag 0

In Asymptotic Analysis, we evaluate the performance of an algorithm in terms of input size (we don't measure the actual running time). We calculate how the time (or space) taken by an algorithm increases with the input size.
For example, let us consider the search problem (searching a given item) in a sorted array. One way to search is Linear Search (order of growth is linear) and the other way is Binary Search (order of growth is logarithmic). To understand how Asymptotic Analysis solves the above-mentioned problems in analyzing algorithms, let us say we run Linear Search on a fast computer and Binary Search on a slow computer. For small values of the input array size n, the fast computer may take less time. But, after a certain value of the input array size, Binary Search will definitely start taking less time compared to Linear Search even though Binary Search is being run on a slow machine. The reason is that the order of growth of Binary Search with respect to input size is logarithmic while the order of growth of Linear Search is linear. So the machine-dependent constants can always be ignored after certain values of input size.

The main idea of asymptotic analysis is to have a measure of efficiency of algorithms that doesn't depend on machine specific constants, and doesn't require algorithms to be implemented and time taken by programs to be compared. Asymptotic notations are mathematical tools to represent time complexity of algorithms for asymptotic analysis. The following 3 asymptotic notations are mostly used to represent time complexity of algorithms.

1) Θ Notation: The theta notation bounds a function from above and below, so it defines exact asymptotic behavior. A simple way to get the Theta notation of an expression is to drop low order terms and ignore leading constants. For example, consider the following expression.
3n^3 + 6n^2 + 6000 = Θ(n^3)

For a given function g(n), Θ(g(n)) denotes the following set of functions.

Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that
           0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}

The above definition means, if f(n) is theta of g(n), then the value f(n) is always between c1*g(n) and c2*g(n) for large values of n (n >= n0). The definition of theta also requires that f(n) must be non-negative for values of n greater than n0.

2) Big O Notation: The Big O notation defines an upper bound of an algorithm, it bounds a function only from above. For example, consider the case of Insertion Sort. It takes linear time in best case and quadratic time in worst case. We can safely say that the time complexity of Insertion sort is O(n^2). Note that O(n^2) also covers linear time.
If we use Θ notation to represent time complexity of Insertion sort, we have to use two statements for best and worst cases:
1. The worst case time complexity of Insertion Sort is Θ(n^2).
2. The best case time complexity of Insertion Sort is Θ(n).

The Big O notation is useful when we only have upper bound on time complexity of an algorithm. Many times we easily find an upper bound by simply looking at the algorithm.

O(g(n)) = {f(n): there exist positive constants c and n0 such that
           0 <= f(n) <= c*g(n) for all n >= n0}

3) Ω Notation: Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound.

Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. Since the best case performance of an algorithm is generally not useful, the Omega notation is the least used notation among all three.

For a given function g(n), we denote by Ω(g(n)) the set of functions.

Ω(g(n)) = {f(n): there exist positive constants c and n0 such that
           0 <= c*g(n) <= f(n) for all n >= n0}

Let us consider the same Insertion sort example here. The time complexity of Insertion Sort can be written as Ω(n), but it is not a very useful information about insertion sort, as we are generally interested in worst case and sometimes in average case.

thumb_up 0 thumb_down 0 flag 0

A simple solution is to traverse the linked list until we find the node we have to delete. But this solution requires pointer to the head node which contradicts the problem statement.

A faster solution is to copy the data from the next node into the node to be deleted and then delete the next node, something like the following.

// Find next node using next pointer
struct node *temp = node_ptr->next;

// Copy data of next node to this node
node_ptr->data = temp->data;

// Unlink next node
node_ptr->next = temp->next;

// Delete next node
free(temp);

See delete a node in a singly linked list

thumb_up 2 thumb_down 0 flag 1

Dijkstra's algorithm is an algorithm for finding the shortest path between nodes in a graph, which may represent, for example, road networks.

In Dijkstra's algorithm, we generate a SPT (shortest path tree) with given source as root. We maintain two sets, one set contains vertices included in shortest path tree, other set includes vertices not yet included in shortest path tree. At every step of the algorithm, we find a vertex which is in the other set (set of not yet included) and has minimum distance from source.

Below are the detailed steps used in Dijkstra's algorithm to find the shortest path from a single source vertex to all other vertices in the given graph.
Algorithm
1) Create a set sptSet (shortest path tree set) that keeps track of vertices included in shortest path tree, i.e., whose minimum distance from source is calculated and finalized. Initially, this set is empty.
2) Assign a distance value to all vertices in the input graph. Initialize all distance values as INFINITE. Assign distance value as 0 for the source vertex so that it is picked first.
3) While sptSet doesn't include all vertices
….a) Pick a vertex u which is not there in sptSet and has minimum distance value.
….b) Include u to sptSet.
….c) Update distance value of all adjacent vertices of u. To update the distance values, iterate through all adjacent vertices. For every adjacent vertex v, if sum of distance value of u (from source) and weight of edge u-v, is less than the distance value of v, then update the distance value of v.

For example:

The set sptSet is initially empty and distances assigned to vertices are {0, INF, INF, INF, INF, INF, INF, INF} where INF indicates infinite. Now pick the vertex with minimum distance value. The vertex 0 is picked, include it in sptSet. So sptSet becomes {0}. After including 0 to sptSet, update distance values of its adjacent vertices. Adjacent vertices of 0 are 1 and 7. The distance values of 1 and 7 are updated as 4 and 8. The following subgraph shows vertices and their distance values; only the vertices with finite distance values are shown. The vertices included in SPT are shown in green color.

Pick the vertex with minimum distance value and not already included in SPT (not in sptSET). The vertex 1 is picked and added to sptSet. So sptSet now becomes {0, 1}. Update the distance values of adjacent vertices of 1. The distance value of vertex 2 becomes 12.

Pick the vertex with minimum distance value and not already included in SPT (not in sptSET). Vertex 7 is picked. So sptSet now becomes {0, 1, 7}. Update the distance values of adjacent vertices of 7. The distance value of vertex 6 and 8 becomes finite (15 and 9 respectively).

Pick the vertex with minimum distance value and not already included in SPT (not in sptSET). Vertex 6 is picked. So sptSet now becomes {0, 1, 7, 6}. Update the distance values of adjacent vertices of 6. The distance value of vertex 5 and 8 are updated.

We repeat the above steps until sptSet includes all vertices of the given graph. Finally, we get the following Shortest Path Tree (SPT).
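
A minimal sketch of these steps using an adjacency list and a min-priority queue; the small sample graph is illustrative:

#include <bits/stdc++.h>
using namespace std;

// Minimal Dijkstra sketch: returns shortest distances from src
// to every vertex. adj[u] holds pairs (v, weight).
vector<int> dijkstra(const vector<vector<pair<int,int>>>& adj, int src) {
    int V = adj.size();
    vector<int> dist(V, INT_MAX);
    // min-heap of (distance, vertex); plays the role of picking the
    // minimum-distance vertex not yet finalized
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<>> pq;
    dist[src] = 0;
    pq.push({0, src});

    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;          // stale entry, u already finalized
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {    // relax edge u-v
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}

int main() {
    // Small illustrative graph: 0 --4-- 1, 0 --8-- 2, 1 --2-- 2
    vector<vector<pair<int,int>>> adj(3);
    adj[0] = {{1, 4}, {2, 8}};
    adj[1] = {{0, 4}, {2, 2}};
    adj[2] = {{0, 8}, {1, 2}};
    for (int d : dijkstra(adj, 0)) cout << d << " ";  // prints 0 4 6
    return 0;
}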

See Dijkstra's Algorithm

thumb_up 2 thumb_down 0 flag 0

Operator overloading is a specific case of polymorphism (part of the OO nature of the language) in which some or all operators like +, = or == are treated as polymorphic functions and as such have different behaviors depending on the types of its arguments. It can easily be emulated using function calls.

C++ allows us to specify more than one definition for a function name or an operator in the same scope, which is called function overloading and operator overloading in C++.

An overloaded declaration is a declaration with the same name as a previously declared declaration in the same scope, except that both declarations have different arguments and, obviously, different definitions (implementations).

When we call an overloaded function or operator, the compiler determines the most appropriate definition to use by comparing the argument types we used to call the function or operator with the parameter types specified in the definitions. The process of selecting the most appropriate overloaded function or operator is called overload resolution.

The name of an operator function is always operator keyword followed by symbol of operator and operator functions are called when the corresponding operator is used.

Operator overloading helps us to do these two things:

  1. Use operators to manipulate objects of user-defined types
  2. Extend the use (meaning) of existing operators

Example:

class box {
    int height;
    int width;
public:
    box operator +(box &b)
    {
        box temp;
        temp.height = height + b.height;
        temp.width  = width + b.width;
        return temp;
    }
};

int main()
{
    box b1, b2, b3;
    b3 = b1 + b2;
}

Example : redefining the + operator to add two strings, like

"abc" + "def", to work like concatenation, resulting in "abcdef".

See Operator overloading

thumb_up 1 thumb_down 0 flag 0

Both quick sort and merge sort take O(n log n) time on average, so the asymptotic time taken to sort the elements is the same.

However, quick sort is often considered superior to merge sort because :

  1. The biggest advantage is locality of reference. Its cache performance is better than that of merge sort because of its in-place characteristic. See Locality of reference
  2. In-place sorting algorithm : Quick sort is an in-place sorting algorithm whereas merge sort is not. In-place sorting means it does not use additional storage space to perform sorting. In merge sort, merging the sorted arrays requires a temporary array, and hence it is not in-place (see the sketch after this list).
  3. As mentioned in CLRS, it has small hidden constants in its O(n lg n) running time.
  4. Merge sort requires O(N) extra memory, where N is the number of elements, which can be expensive. The extra allocation and de-allocation add overhead, and the merge step needs its sublists to be already sorted before they can be combined.
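
A minimal sketch of the in-place (Lomuto) partitioning that gives quick sort its locality and in-place advantages; the sample array is illustrative:

#include <bits/stdc++.h>
using namespace std;

// Lomuto partition: rearranges a[lo..hi] around the pivot a[hi]
// in place and returns the pivot's final index.
int partition(vector<int>& a, int lo, int hi) {
    int pivot = a[hi];
    int i = lo - 1;
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot)
            swap(a[++i], a[j]);
    swap(a[i + 1], a[hi]);
    return i + 1;
}

void quickSort(vector<int>& a, int lo, int hi) {
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quickSort(a, lo, p - 1);   // sort left part in place
        quickSort(a, p + 1, hi);   // sort right part in place
    }
}

int main() {
    vector<int> a = {9, 3, 7, 1, 8, 2};
    quickSort(a, 0, (int)a.size() - 1);
    for (int x : a) cout << x << " ";   // prints 1 2 3 7 8 9
    return 0;
}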

thumb_up 5 thumb_down 0 flag 0

If the data set is small, both sorting algorithms are fast and the differences are not noticeable. However, when the data set is very large compared to the available main memory (and, likewise, main memory is large compared to the cache), locality matters.

Mergesort : Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list

Heapsort : Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap. Once the data list has been made into a heap, the root node is guaranteed to be the largest (or smallest) element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root.

1. Merge sort is stable while heap sort is not, because operations on the heap can change the relative order of equal items.

2. Heap sort is more memory efficient and also in-place. It doesn't copy the array and store it elsewhere (like merge sort does), hence it needs less space. It can be used in situations like finding the top 10 numbers from a list of 1 million numbers.

3. Heapsort may make more comparisons than optimal. Each siftUp operation makes two comparisons per level, so the comparison bound is approximately 2n log_2 n. In practice heapsort does only slightly worse than quicksort.

4. Space Complexity

    Merge sort : O(n)

    Heap sort : O(1) (if done iteratively)  But we would have to heapify the array first which will take O(n) time.

5. Merge sort works best on linked lists.

thumb_up 1 thumb_down 7 flag 0

Let's assume that there are two processes, namely P1 and P2. P1 and P2 each get their own stack.

We have two stacks, one for P1 and one for P2. They are completely separated. All activation frames for P1 go on P1's stack, and all activation frames for P2 go on P2's stack. They operate completely independently, and on a multi-cpu processor, can even operate concurrently.

A single processor (whether part of a multi-CPU machine or not) can alternate between P1 and P2. When a processor stops running P1 and starts running P2, this is one form of context switch. The state of P1 is saved, and the state of P2 is loaded. This save/restore operation effectively switches stacks by updating the CPU registers that refer to the stack.

Also, a process usually reflects an address space. An address space can be thought of as memory viewed as a big array of bytes starting at index 0. It gets more complicated than this because of memory that is shared between processes and with the kernel, and because of various protection mechanisms. Each process is given its own address space, which allows each process to have its own stack and heap independent of the other processes, without worry of conflicting indexes (i.e. of conflicting addresses).

Now, a single process can also run multiple threads. Processes P1 and P2 could also be replicated using T1 and T2, two threads in one process. With multiple threads in one process, the threads are all sharing the same address space, though T1 and T2 still each have their own stack. T1, the process's first thread, will start at main, but T2 will start wherever directed to do so (by T1 firing up a new thread, T2, so T2 is unlikely to start at main). Otherwise the scenarios are very similar between P1, P2 and T1, T2.

Note that a context switch between P1 and P2 also involves changing the address space (the CPU registers that refer to the address space must be switched as well), whereas between T1 and T2 only the stack state needs to be switched and not the address space. T1 and T2 will also share a common heap (though each may be directed toward different areas of that heap for its allocations) whereas P1 and P2 will have independent heaps (though they may still establish some shared memory).

Address spaces are supported by a CPU hardware feature called virtual memory. This enables separate address spaces and protects one process from potentially erratic behavior of another, in that one process is prevented from touching the memory of another (modulo debuggers, etc.).

see here

Fundamentally, the cpu has registers that refer to the context of its current execution state. These registers in some way define, among other things:

  • the address space (out of which the stack(s) and heap are allocated)
  • the stack (and thus the existing activation frames, and where new ones go)
  • the instruction pointer, which identifies the current instruction stream

A context switch saves and restores these things. By saving the current context, that thread (of a process) context is suspended for later resumption. By restoring some other context, that other thread (of a process) is resumed.

Note that not all of the context needs to be saved every time; for example, the address space is probably already saved for the process, so it just needs to be reloaded on resumption instead of being both saved and reloaded. However, the stack and instruction pointer move during execution, of course, so these do need to be saved so they can be restored later.

When multiple CPUs are present and concurrently executing, they each have their own full notion of context, of referring to address space, activation stack and instruction stream using their own registers.

It is then up to the operating system and managing run-times to properly allocate new stack within a process when a new thread is requested, or to allocate a new address space when a new process is requested.

  1. In a typical process layout, the stack is stored near the "top" (high addresses) of the address space and grows downward, while the heap sits at lower addresses above the program's static data and grows upward toward the stack.

  2. The O/S stores a "context" per running process. The operation of saving and restoring process state is called a "context switch."

thumb_up 1 thumb_down 0 flag 0

First start, from two bottles :

We can test 2 bottles with only 1 rat. Feed 1 bottle to the rat. This leads to 2 possible cases:

The rat dies, or
The rat doesn't die

This will tell us which of the 2 bottles is poisoned.

Let's try to show this using the below table:

0 in a cell depicts that bottle was not fed to that rat.

1 in a cell depicts that bottle was fed to that rat.

Note: This table only displays the feeding pattern. The table does not depict the life/ death outcome of the rat(s) involved.

Depending on the rat's life/ death outcome, it would be easy to figure out which bottle (row) was poisoned. Say the rat dies. This would mean the row with the '1' will have the poisoned bottle. Say the rat survives, this would mean the row with the '0' would be the poisoned row.

Key thing to note: each row is unique!

Similarly, we can test up to 4 bottles with 2 rats. Say the bottles are labeled a, b, c and d.
Feed Rat 1: c and d
Feed Rat 2: b and d

Table built on the same logic as the one above. Notice again each row is unique. Therefore, a unique combination of rats will die telling us which bottle is poisoned.

This will lead to one of four possibilities for the rats:

  • Neither dies → bottle 1 poisoned
  • Only rat 2 dies → bottle 2 was poisoned
  • Only rat 1 dies → bottle 3 was poisoned
  • Both die → bottle 4 was poisoned

Carrying on, for 8 bottles :

Table built on the same logic as the ones above. Depending on which rats die and live, we can figure out which bottle was poisoned.

In this table again, notice how each row is unique. Therefore, a unique combination of rats will die telling us which bottle is poisoned.

To explain further, here is the table with 4 rats and 16 bottles:

This table also follows the same logic, and of course, in this table again, each row is unique. Therefore, a unique combination of rats will die, telling us which bottle is poisoned.

In this way we will get a unique row and can easily identify which bottle is poisoned.
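
The pattern generalizes: label each bottle with a binary number and feed bottle i to rat r exactly when bit r of i is set; the set of rats that die then spells out the poisoned bottle's number. A minimal sketch of this encoding (the bottle count and the example of which rats die are illustrative):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int bottles = 16;                       // 16 bottles -> 4 rats are enough
    int rats = ceil(log2(bottles));

    // Print the feeding table: row = bottle, column = rat
    for (int b = 0; b < bottles; b++) {
        cout << "Bottle " << b << ": ";
        for (int r = 0; r < rats; r++)
            cout << ((b >> r) & 1) << " ";  // 1 = feed this bottle to rat r
        cout << "\n";
    }

    // Decoding: suppose rats 0 and 2 die -> poisoned bottle is 1 + 4 = 5
    int deadMask = (1 << 0) | (1 << 2);
    cout << "Poisoned bottle: " << deadMask << "\n";
    return 0;
}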

Similar problem  puzzle

thumb_up 3 thumb_down 0 flag 0

Output :

Mahesh, because the argument passed to the printf function is a string.

thumb_up 1 thumb_down 0 flag 0

Macros in C are string replacements commonly used to define constants.

For example:-

#define MAX 1000

It sets the value of identifier MAX to 1000 and whenever we want to use 1000 in our program we can use MAX instead of writing 1000.

Similarly ,

#define PI 3.1415926

Whenever we have to use value of pi in our code we can simply write PI instead of writing lengthy value of pi again and again.

Basically, macros are one of the preprocessor directives available in C. Macro substitution is a process where an identifier in a program is replaced by a predefined string.

The macro definition takes the following form:

#define identifier string

If this statement is included at the beginning of the program, then the preprocessor replaces every occurrence of the identifier in the source code by the string.

The keyword #define is written just as shown, followed by the identifier and a string with at least one blank space between them. The definition is not terminated by semicolon. The string may be any constant/expression while the identifier must be a valid C name.

There are different type of macro substitution.The most common forms are:-

  1. Simple macro substitution (simple string replacements to define constants):-

#define M 5

#define COUNT 10

etc.

2. Argumented macro substitution (this permits us to define more complex and more useful forms of replacements):-

It takes the form:-

#define identifier(f1,f2,..,fn) string

for example, #define CUBE(x) (x*x*x)

If the following statement appears in the program:

volume=CUBE(side);

then the preprocessor will expand this statement to:

volume=(side*side*side);

3. Nested macro substitution (we can also use one macro in the definition of another macro):-

for example:-

#define SQUARE(x) (x*x)

#define CUBE(x) (SQUARE(x)*(x))

Also,

#define M 5

#define N (M+1)

#define MAX(M,N) ((M>N)?M:N)

Undefining a Macro:-

A defined macro can be undefined, using the statement-

#undef identifier

This is useful when we want to restrict the definition only to a particular part of the program.

See Macros in C

thumb_up 8 thumb_down 0 flag 0

A stack is a container of objects that are inserted and removed according to the last-in first-out (LIFO) principle. In the push down stacks only two operations are allowed: push the item into the stack, and pop the item out of the stack.

MS Word could use the concept of stack in Redo and Undo options (and also in Clipboard), while the browsers might use it in the Back and Forward buttons.

  • Let's use two stacks: a Back stack and a Forward stack.
  • When a stack is empty, disable the corresponding button.
  • When we navigate to a new URL, push the URL on the Back stack and clear the Forward stack (clearing it is the step that becomes complicated with a linked list or some other implementations).
  • When you hit the back button, pop the top URL from the Back stack and push it onto the Forward stack.
  • When you hit the forward button, pop the top URL from the Forward stack and push it onto the Back stack.

This logic can also be used for Undo / Redo functionality. For undo/redo functionality again let us take 2 stacks: one stack for "undoing," i.e. going backward to the page we were just at, and one for "redoing," i.e. going forward.

pseudo code:

function moveBackward(undo, redo)
    if undo is not empty then
        push current page onto the redo stack
        pop the undo stack and go to that page
    end if
end function

function moveForward(undo, redo)
    if redo is not empty then
        push current page onto the undo stack
        pop the redo stack and go to that page
    end if
end function
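
A minimal C++ sketch of the same two-stack idea; the class and method names are illustrative:

#include <iostream>
#include <stack>
#include <string>
using namespace std;

// Illustrative browser history using a Back stack and a Forward stack
class BrowserHistory {
    stack<string> back, forward;
    string current;
public:
    BrowserHistory(const string& home) : current(home) {}

    void visit(const string& url) {
        back.push(current);
        current = url;
        forward = stack<string>();   // clear the forward stack
    }
    void goBack() {
        if (back.empty()) return;    // button would be disabled
        forward.push(current);
        current = back.top(); back.pop();
    }
    void goForward() {
        if (forward.empty()) return;
        back.push(current);
        current = forward.top(); forward.pop();
    }
    string page() const { return current; }
};

int main() {
    BrowserHistory h("home");
    h.visit("a"); h.visit("b");
    h.goBack();
    cout << h.page() << endl;   // prints "a"
    h.goForward();
    cout << h.page() << endl;   // prints "b"
    return 0;
}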

Another example of a stack is a photo-editing website: when the user adds a filter to a photo, the new filter is applied on top of the most recent layer, so if we wish to remove the edit, the latest edit is popped and the older one is on top again. Another example is an online editor where the latest edit can be undone.

thumb_up 3 thumb_down 1 flag 0

Method 1 :

The following deletes all rows whose ROWID is bigger than the smallest ROWID value for a given key, so one row per key is kept.

DELETE FROM table_name A
WHERE ROWID > (SELECT min(rowid) FROM table_name B
WHERE A.key_values = B.key_values);

Method 2 :

create table table_name2 as select distinct * from table_name1;
drop table table_name1;
rename table_name2 to table_name1;

thumb_up 7 thumb_down 0 flag 0

Object Oriented Programming (OOP) is a programming paradigm where the complete software operates as a bunch of objects talking to each other. An object is a collection of data and methods that operate on its data.

So, if a problem is solved in terms of classes and objects it is known as Object Oriented Programming. Object oriented programming is a technique to create programs based on real world.

The main advantage of OOP is better manageable code that covers following.

1) The overall understanding of the software is increased, as the distance between the language spoken by developers and that spoken by users is reduced.

2) Object orientation eases maintenance by the use of encapsulation.   One can easily change the underlying representation by keeping the methods same.

OOP paradigm is mainly useful for relatively big software. See this for a complete example that shows advantages of OOP over procedural programming.

Following are the OOP concepts (a small sketch illustrating them follows the list) :

  1. Encapsulation
  2. Abstraction
  3. Inheritance
  4. Polymorphism
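
A minimal sketch touching all four concepts; the Shape/Rectangle/Circle classes are illustrative:

#include <iostream>
using namespace std;

// Abstraction + encapsulation: Shape exposes an interface and the
// derived classes hide their data behind it.
class Shape {
public:
    virtual double area() const = 0;   // abstraction: what, not how
    virtual ~Shape() {}
};

class Rectangle : public Shape {       // inheritance
    double w, h;                       // encapsulation: private data
public:
    Rectangle(double w, double h) : w(w), h(h) {}
    double area() const override { return w * h; }
};

class Circle : public Shape {
    double r;
public:
    Circle(double r) : r(r) {}
    double area() const override { return 3.14159 * r * r; }
};

int main() {
    Shape* shapes[] = { new Rectangle(2, 3), new Circle(1) };
    for (Shape* s : shapes) {
        cout << s->area() << endl;     // polymorphism: call resolved at runtime
        delete s;
    }
    return 0;
}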

thumb_up 4 thumb_down 0 flag 0

DDL is Data Definition Language. It is used to define the database structure or schema.

Some examples :

 1. CREATE - to create objects in the database
 2. ALTER - alters the structure of the database
 3. DROP - delete objects from the database
 4. TRUNCATE - remove all records from a table, including all space allocated for the records
 5. COMMENT - add comments to the data dictionary
 6. RENAME - rename an object

DML is Data Manipulation Language. DML statements are used for managing data within schema objects.

Some examples :

 1. SELECT - retrieve data from a database
 2. INSERT - insert data into a table
 3. UPDATE - update existing data within a table
 4. DELETE - delete records from a table (the space for the records remains)
 5. MERGE - UPSERT operation (insert or update)
 6. CALL - call a PL/SQL or Java subprogram
 7. EXPLAIN PLAN - explain the access path to data
 8. LOCK TABLE - control concurrency

thumb_up 0 thumb_down 0 flag 0

Java code for reversing the sequence of vowels in each word of a given string :

import java.util.Scanner;

class GfG
{
    void rearrange(String str)
    {
        // split the original string into words separated by " "
        String sarr[] = str.split(" ");

        String fstring = ""; // result string

        for (int k = 0; k < sarr.length; k++)
        {
            char ch[] = sarr[k].toCharArray(); // characters of word sarr[k]
            int i = 0;
            int j = ch.length - 1;
            while (i < j)
            {
                if (ch[i] != 'a' && ch[i] != 'e' && ch[i] != 'i' && ch[i] != 'o' && ch[i] != 'u')
                {
                    i++;
                }

                if (ch[j] != 'a' && ch[j] != 'e' && ch[j] != 'i' && ch[j] != 'o' && ch[j] != 'u')
                {
                    j--;
                }

                if (ch[i] == 'a' || ch[i] == 'e' || ch[i] == 'i' || ch[i] == 'o' || ch[i] == 'u')
                {
                    if (ch[j] == 'a' || ch[j] == 'e' || ch[j] == 'i' || ch[j] == 'o' || ch[j] == 'u')
                    {
                        char c = ch[i];
                        ch[i] = ch[j];
                        ch[j] = c;
                        i++;
                        j--;
                    }
                }
            }
            // convert the array back into a string after swapping and
            // concatenate it to the final string
            fstring = fstring + new String(ch) + " ";
        }
        System.out.println(fstring);
    }
}

class ReverseOrder
{
    public static void main(String args[])
    {
        Scanner sc = new Scanner(System.in);
        String str = sc.nextLine();

        GfG g = new GfG();
        g.rearrange(str);
    }
}

input : geeksforgeeks is a Computer Science Portal

output : geeksforgeeks is a Cemputor Sceenci Partol

thumb_up 0 thumb_down 0 flag 0

  • A class is a template. A class is like the blueprint for a house. Using this blueprint, we can build as many houses as we like.

        Examples of a class :

  1. Sketch of a building
  2. Rubber stamp
  3. Negative of our photograph
  • An object is an instance of a class. Each house we build (or instantiate, in OO lingo) is an object, also known as an instance.

       Examples of an object :

  1. Building itself
  2. Impression of that stamp
  3. Photograph itself
  • Each house also has an address. If we want to tell someone where the house is, we give them a card with the address written on it. That card is the object's reference.
  • If you want to visit the house, we look at the address written on the card. This is called dereferencing.

Examples :

GfG g1 = new GfG();

GfG g2 = g1;

Here, GfG is a class, new GfG() is the instance made on the heap, and g1 and g2 are referring to that instance.

thumb_up 11 thumb_down 0 flag 1

The software life cycle is the period of time that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes a retirement phase.

Software Development Life Cycle (SDLC), (or sometimes Systems Development Life Cycle) is a structural approach used for developing particular software products. We may say that SDLC is a subset of PDLC.

The primary objective of SDLC is to ensure that software built is of good quality.

The life-cycle of SDLC is typically divided into five phases and each phase has its own specific process as well as deliverables.

1. Requirements gathering or analysing user requirements

2. Designing the program

3. Coding the program

4. Documentation and testing of the system

5. Operation and maintenance of the system

A "software process model" is an abstract representation of a process.SDLC provides various Software Development Models:
1. Build and Fix Model
2. Prescriptive Model - Waterfall Model
3. Prototyping MOdel
4. Iterative Enhancement MOdel
5. Spiral Model

thumb_up 25 thumb_down 0 flag 0

Data types are declarations for memory location or variables that determine the characteristics of the data that may be stored and the methods (operations) of processing that are permitted involving them.

The C language provides the four basic data types char, int, float and double, and the modifiers signed, unsigned, short and long.

Data type    Format specifier            Typical size
char         %c                          1 byte
int          %d                          2 or 4 bytes (commonly 4)
float        %f                          4 bytes
double       %f (printf) / %lf (scanf)   8 bytes
short        %hi or %hd                  2 bytes
long         %li or %ld                  4 or 8 bytes

(Sizes are implementation-defined; the values above are the common ones, and sizeof can be used to check them on a given compiler.)

See article

A storage class defines the scope (visibility) and life-time of variables and/or functions within a C Program.

Storage Classes in C

1. Automatic

Automatic variables are also called local variables. If a variable is not specified with any storage class, it is automatic by default. Its scope is limited to the function in which it is declared.

2. External

External or global variables are declared outside the function. They are visible to the entire program.

When a variable is declared as extern, its definition is not present in that particular module of the program, and the compiler/linker looks for it in some other module where the variable is actually defined.

3. Static

When a variable is declared as static, it exists till the end of the program. Static variables are initialized to zero by default (a short example contrasting automatic and static variables follows below).

4. Register

The register keyword informs the compiler that the variable should be stored in a CPU register rather than in memory. As register access is fast, frequently used variables can be declared as register.
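
A minimal sketch contrasting an automatic and a static local variable (the function name is illustrative):

#include <iostream>
using namespace std;

void counter() {
    int autoCount = 0;      // automatic: re-created on every call
    static int statCount = 0; // static: initialized once, lives till program end
    autoCount++;
    statCount++;
    cout << "auto = " << autoCount << ", static = " << statCount << endl;
}

int main() {
    counter();   // auto = 1, static = 1
    counter();   // auto = 1, static = 2
    counter();   // auto = 1, static = 3
    return 0;
}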

thumb_up 21 thumb_down 0 flag 0

There can be multiple answers possible. Some of them are :

1. Trick question: place 50 coins in each group; in theory they all have heads, just not necessarily facing up or down.

2. Split into two groups, one with 80 coins and the other with 20. Flip over every coin in the group with 20 coins.

Explanation : Let there be 2 groups named A and B. Let group A have 80 coins (62 tails-up and 18 heads-up) and group B have 20 coins (18 tails-up and 2 heads-up). Now flip all the coins of group B. Group B now has 2 tails-up coins and 18 heads-up coins. So both groups have the same number of coins facing heads-up.

thumb_up 0 thumb_down 0 flag 0

The memcpy function is used to copy a block of data from a source address to a destination address. Below is its prototype.

void * memcpy(void * destination, const void * source, size_t num);

The idea is to simply typecast the given addresses to char * (char takes 1 byte). Then copy the data one byte at a time from source to destination. Below is an implementation of this idea.
// A C implementation of memcpy()
#include <stdio.h>
#include <string.h>

void myMemCpy(void *dest, void *src, size_t n)
{
    // Typecast src and dest addresses to (char *)
    char *csrc = (char *)src;
    char *cdest = (char *)dest;

    // Copy contents of src[] to dest[]
    for (int i = 0; i < n; i++)
        cdest[i] = csrc[i];
}

// Driver program
int main()
{
    char csrc[] = "GeeksforGeeks";
    char cdest[100];
    myMemCpy(cdest, csrc, strlen(csrc) + 1);
    printf("Copied string is %s", cdest);

    int isrc[] = {10, 20, 30, 40, 50};
    int n = sizeof(isrc) / sizeof(isrc[0]);
    int idest[n], i;
    myMemCpy(idest, isrc, sizeof(isrc));
    printf("\nCopied array is ");
    for (i = 0; i < n; i++)
        printf("%d ", idest[i]);
    return 0;
}

Output:

Copied string is GeeksforGeeks
Copied array is 10 20 30 40 50

thumb_up 4 thumb_down 0 flag 0

Queue works on the principle of "first come, first served".

A simple idea is to keep 2 data structures - a doubly linked-list and a hash map. The doubly linked list contains History objects (which contain a url string and a timestamp) in order sorted by timestamp. Each node would have the timestamp, URL, and a pointer to the node in the queue representing the next older access of that same URL. The queue would be capped to both a certain size and certain time range and then we could delete entries from the back of the queue as needed. And the hash map is a Map<String, History>, with urls as the key.

Java based implementation :

class History
{
    History prev;
    History next;
    String url;
    Long timestamp;

    History(String url, Long timestamp)
    {
        this.url = url;
        this.timestamp = timestamp;
    }

    // Unlink this node from the doubly linked list
    void remove()
    {
        prev.next = next;
        next.prev = prev;
        next = null;
        prev = null;
    }
}

When we add a url to the history, check to see if it's in the hash map; if it is then update its timestamp, remove it from the linked list, and add it to the end of the linked list. If it's not in the hash map then add it to the hash map and also add it to the end of the linked list. Adding a url (whether or not it's already in the hash map) is a constant time operation.

import java.util.HashMap;

class Main
{
    History first; // first element of the linked list
    History last;  // last element of the linked list
    HashMap<String, History> map;

    void add(String url)
    {
        History hist = map.get(url);
        if (hist != null)
        {
            hist.remove();
            hist.timestamp = System.currentTimeMillis();
        }
        else
        {
            hist = new History(url, System.currentTimeMillis());
            map.put(url, hist);
        }
        // append to the end of the linked list
        last.next = hist;
        hist.prev = last;
        last = hist;
    }
}

To get the history from e.g. the last week, traverse the linked list backwards until we hit the correct timestamp.

If thread-safety is a concern, then we can use a thread-safe queue for urls to be added to the history, and use a single thread to process this queue; this way our map and linked list don't need to be thread-safe i.e. we don't need to worry about locks etc.

For persistence we can serialize / deserialize the linked list; when we deserialize the linked list, we reconstruct the hash map by traversing it and adding its elements to the map. Then, to clear the history, we null out the list and map in memory and delete the file we serialized the data to.

thumb_up 2 thumb_down 0 flag 0

Let's consider the following implementation of Linear Search.
#include <stdio.h>

// Linearly search x in arr[].  If x is present then return the index,
// otherwise return -1
int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++)
    {
        if (arr[i] == x)
            return i;
    }
    return -1;
}

/* Driver program to test above functions */
int main()
{
    int arr[] = {1, 10, 30, 15};
    int x = 30;
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("%d is present at index %d", x, search(arr, n, x));

    getchar();
    return 0;
}

Worst Case Analysis
In the worst case analysis, we calculate upper bound on running time of an algorithm. We must know the case that causes maximum number of operations to be executed. For Linear Search, the worst case happens when the element to be searched (x in the above code) is not present in the array. When x is not present, the search() functions compares it with all the elements of arr[] one by one. Therefore, the worst case time complexity of linear search would be Θ(n).

Average Case Analysis (Sometimes done)
In average case analysis, we take all possible inputs and calculate the computing time for each of the inputs. We sum all the calculated values and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases. For the linear search problem, let us assume that all cases are uniformly distributed (including the case of x not being present in the array). So we sum the cost of all the cases and divide the sum by (n+1); this average works out to Θ(n).

Best Case Analysis (Bogus)
In the best case analysis, we calculate lower bound on running time of an algorithm. We must know the case that causes minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n). So time complexity in the best case would be Θ(1).

See article

thumb_up 10 thumb_down 0 flag 0

  1. C is a procedural language whereas C++ is an object oriented language.
  2. Security : In C, data is not secured, while data can be secured (hidden) in C++ using access specifiers.
  3. Approach : C uses the top-down approach while C++ uses the bottom-up approach.
  4. Driver : C is function-driven while C++ is object-driven.
  5. Function overloading : C++ supports function overloading while C does not.
  6. Exception Handling : C++ supports Exception Handling while C does not.
  7. Structures : Structures in C cannot have functions.
  8. Functions : Inline functions are not available in C.
  9. Namespace : C does not have namespace feature while C++ uses namespace which avoid name collisions.

thumb_up 27 thumb_down 3 flag 0

0! + 0! = 2

2 + 0! = 3

3 + 0! = 4

4 + 0! = 5

5! = 120

thumb_up 1 thumb_down 0 flag 0

The problem is similar to tail command in linux which displays the last few lines of a file. It is mostly used for viewing log file updates as these updates are appended to the log files.

Below is its C++ implementation
#include <bits/stdc++.h>
using namespace std;

#define SIZE 100

// Utility function to sleep for n seconds
void sleep(unsigned int n)
{
    clock_t goal = n * CLOCKS_PER_SEC + clock();
    while (goal > clock());
}

// function to read last n lines from the file
// at any point without reading the entire file
void tail(FILE* in, int n)
{
    int count = 0;  // To count '\n' characters

    // unsigned long long pos (stores upto 2^64 - 1
    // chars) assuming that long long int takes 8 bytes
    unsigned long long pos;
    char str[2 * SIZE];

    // Go to End of file
    if (fseek(in, 0, SEEK_END))
        perror("fseek() failed");
    else
    {
        // pos will contain no. of chars in input file.
        pos = ftell(in);

        // search for '\n' characters
        while (pos)
        {
            // Move 'pos' away from end of file.
            if (!fseek(in, --pos, SEEK_SET))
            {
                if (fgetc(in) == '\n')
                    // stop reading when n newlines are found
                    if (count++ == n)
                        break;
            }
            else
                perror("fseek() failed");
        }

        // print last n lines
        printf("Printing last %d lines -\n", n);
        while (fgets(str, sizeof(str), in))
            printf("%s", str);
    }
    printf("\n\n");
}

// Creates a file and prints and calls tail() for
// 10 different values of n (from 1 to 10)
int main()
{
    FILE* fp;
    char buffer[SIZE];

    // Open file in binary mode
    // wb+ mode for reading and writing simultaneously
    fp = fopen("input.txt", "wb+");
    if (fp == NULL)
    {
        printf("Error while opening file");
        exit(EXIT_FAILURE);
    }
    srand(time(NULL));

    // Dynamically add lines to input file
    // and call tail() each time
    for (int index = 1; index <= 10; index++)
    {
        /* generate random logs to print in input file */
        for (int i = 0; i < SIZE - 1; i++)
            buffer[i] = rand() % 26 + 65; // A-Z
        buffer[SIZE - 1] = '\0';

        /* code to print timestamp in logs */
        // get current calendar time
        time_t ltime = time(NULL);

        // asctime() returns a pointer to a string
        // which represents the day and time
        char* date = asctime(localtime(&ltime));

        // replace the '\n' character in the date string
        // with '\0' to print on the same line.
        date[strlen(date) - 1] = '\0';

        /* Note in text mode '\n' appends two characters,
           so we have opened file in binary mode */
        fprintf(fp, "\nLine #%d [%s] - %s", index, date, buffer);

        // flush the input stream before calling tail
        fflush(fp);

        // read last index lines from the file
        tail(fp, index);

        // sleep for 3 seconds
        // note difference in timestamps in logs
        sleep(3);
    }

    /* close the file before ending program */
    fclose(fp);

    return 0;
}

See article

thumb_up 3 thumb_down 0 flag 0

The problem describes two processes, the producer and the consumer, which share a common, fixed-size buffer used as a queue.

  • The producer's job is to generate data, put it into the buffer, and start again.
  • At the same time, the consumer is consuming the data (i.e. removing it from the buffer), one piece at a time.

Problem
To make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer.

Solution
The producer is to either go to sleep or discard data if the buffer is full. The next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill the buffer again. In the same way, the consumer can go to sleep if it finds the buffer to be empty. The next time the producer puts data into the buffer, it wakes up the sleeping consumer.
An inadequate solution could result in a deadlock where both processes are waiting to be awakened.

Implementation of Producer Consumer Class

  • A LinkedList list – to store list of jobs in queue.
  • A Variable Capacity – to check for if the list is full or not
  • A mechanism to control the insertion and extraction from this list so that we do not insert into list if it is full or remove from it if it is empty.

Implementation : Java program to implement solution of producer consumer problem.
import java.util.LinkedList;

 public class Threadexample
{
    public static void main(String[] args)
                        throws InterruptedException
    {
        // Object of a class that has both produce()
        // and consume() methods
        final PC pc = new PC();

         // Create producer thread
        Thread t1 = new Thread(new Runnable()
        {
            @Override
            public void run()
            {
                try
                {
                    pc.produce();
                }
                catch(InterruptedException e)
                {
                    e.printStackTrace();
                }
            }
        });

         // Create consumer thread
        Thread t2 = new Thread(new Runnable()
        {
            @Override
            public void run()
            {
                try
                {
                    pc.consume();
                }
                catch(InterruptedException e)
                {
                    e.printStackTrace();
                }
            }
        });

         // Start both threads
        t1.start();
        t2.start();

         // t1 finishes before t2
        t1.join();
        t2.join();
    }

     // This class has a list, producer (adds items to list
    // and consumer (removes items).
    public static class PC
    {
        // Create a list shared by producer and consumer
        // Size of list is 2.
        LinkedList<Integer> list = new LinkedList<>();
        int capacity = 2;

         // Function called by producer thread
        public void produce() throws InterruptedException
        {
            int value = 0;
            while (true)
            {
                synchronized (this)
                {
                    // producer thread waits while list
                    // is full
                    while (list.size()==capacity)
                        wait();

                     System.out.println("Producer produced-"
                                                  + value);

                     // to insert the jobs in the list
                    list.add(value++);

                     // notifies the consumer thread that
                    // now it can start consuming
                    notify();

                     // makes the working of program easier
                    // to  understand
                    Thread.sleep(1000);
                }
            }
        }

         // Function called by consumer thread
        public void consume() throws InterruptedException
        {
            while (true)
            {
                synchronized (this)
                {
                    // consumer thread waits while list
                    // is empty
                    while (list.size()==0)
                        wait();

                     // to retrieve the first job in the list
                    int val = list.removeFirst();

                     System.out.println("Consumer consumed-"
                                                    + val);

                     // Wake up producer thread
                    notify();

                     // and sleep
                    Thread.sleep(1000);
                }
            }
        }
    }
}

Output :

Producer produced-0
Producer produced-1
Consumer consumed-0
Consumer consumed-1
Producer produced-2

thumb_up 8 thumb_down 0 flag 0

Dynamic Programming is an algorithmic paradigm that solves a given complex problem by breaking it into sub-problems and stores the results of the sub-problems to avoid computing the same results again.

Following are the two main properties of a problem that suggest that the given problem can be solved using Dynamic programming :

  1. Overlapping Subproblems
  2. Optimal Substructure

1. Overlapping Subproblems : Dynamic Programming is mainly used when solutions of the same subproblems are needed again and again. In dynamic programming, computed solutions to subproblems are stored in a table so that they don't have to be recomputed. So Dynamic Programming is not useful when there are no common (overlapping) subproblems, because there is no point storing the solutions if they are not needed again. For example, Binary Search doesn't have common subproblems.
There are following two different ways to store the values so that these values can be reused:
a) Memoization (Top Down)
b) Tabulation (Bottom Up)

a) Memoization (Top Down): The memoized program for a problem is similar to the recursive version with a small modification that it looks into a lookup table before computing solutions. We initialize a lookup array with all initial values as NIL. Whenever we need solution to a subproblem, we first look into the lookup table. If the precomputed value is there then we return that value, otherwise we calculate the value and put the result in lookup table so that it can be reused later.

Example :
#include <stdio.h>
#define NIL -1
#define MAX 100

int lookup[MAX];

/* Function to initialize NIL values in lookup table */
void _initialize()
{
    int i;
    for (i = 0; i < MAX; i++)
        lookup[i] = NIL;
}

/* function for nth Fibonacci number */
int fib(int n)
{
    if (lookup[n] == NIL)
    {
        if (n <= 1)
            lookup[n] = n;
        else
            lookup[n] = fib(n - 1) + fib(n - 2);
    }

    return lookup[n];
}

int main()
{
    int n = 40;
    _initialize();
    printf("Fibonacci number is %d ", fib(n));
    return 0;
}

b) Tabulation (Bottom Up): The tabulated program for a given problem builds a table in bottom up fashion and returns the last entry from table. For example, for the same Fibonacci number, we first calculate fib(0) then fib(1) then fib(2) then fib(3) and so on. So literally, we are building the solutions of subproblems bottom-up.

#include <stdio.h>
int fib(int n)
{
    int f[n + 1];
    int i;
    f[0] = 0;
    f[1] = 1;
    for (i = 2; i <= n; i++)
        f[i] = f[i - 1] + f[i - 2];

    return f[n];
}

int main()
{
    int n = 9;
    printf("Fibonacci number is %d ", fib(n));
    return 0;
}

2) Optimal Substructure: A given problem has the Optimal Substructure Property if an optimal solution of the given problem can be obtained by using optimal solutions of its subproblems.

For example, the Shortest Path problem has following optimal substructure property:
If a node x lies in the shortest path from a source node u to destination node v then the shortest path from u to v is combination of shortest path from u to x and shortest path from x to v. The standard All Pair Shortest Path algorithms like Floyd–Warshall and Bellman–Ford are typical examples of Dynamic Programming

thumb_up 4 thumb_down 0 flag 0

  1. Our computer stores parts of our programs memory in a cache that has a much smaller latency than main memory (even when compensating for cache hit time).

  2. C arrays are stored contiguously in row-major order. This means if we ask for element x, then element x+1 is stored in main memory at a location directly following where x is stored.

  3. It's typical for our computer cache to "pre-emptively" fill cache with memory addresses that haven't been used yet, but that are locally close to memory that our program has used already.

Therefore, when we enumerate our array in row-major order, we're enumerating it in the same order in which it is stored contiguously in memory, and our machine has already taken the liberty of pre-loading those addresses into cache for us because it guessed that we wanted them. Therefore we achieve a higher rate of cache hits. When we're enumerating an array in some other non-contiguous manner, our machine likely won't predict the memory access pattern we're applying, so it won't be able to pre-emptively pull memory addresses into cache for us, and we won't get as many cache hits, so main memory will have to be accessed more frequently, which is slower than our cache.
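
A minimal sketch contrasting the two traversal orders; the matrix size is illustrative and the timings will vary by machine, but row-major traversal is typically noticeably faster:

#include <bits/stdc++.h>
using namespace std;

int main() {
    const int N = 4000;
    static vector<vector<int>> a(N, vector<int>(N, 1));
    long long sum = 0;

    // Row-major traversal: consecutive accesses touch adjacent memory,
    // so the cache lines that were prefetched are actually used.
    auto t0 = chrono::steady_clock::now();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    auto t1 = chrono::steady_clock::now();

    // Column-major traversal: each access jumps a whole row ahead,
    // defeating the prefetcher and causing many more cache misses.
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    auto t2 = chrono::steady_clock::now();

    cout << "row-major:    " << chrono::duration_cast<chrono::milliseconds>(t1 - t0).count() << " ms\n";
    cout << "column-major: " << chrono::duration_cast<chrono::milliseconds>(t2 - t1).count() << " ms\n";
    cout << sum << "\n";   // prevent the loops from being optimized away
    return 0;
}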

thumb_up 6 thumb_down 2 flag 0

Yes, we can do this and it will run without any exception or error.

Example :

class GfG
{
    public static void main(String[] args) throws java.lang.Exception
    {
        int a[] = new int[5];
        for (int i = 0; i < 4; i++)
        {
            a[i] = i++;
            System.out.println(a[i]);
        }
    }
}

thumb_up 7 thumb_down 0 flag 0

  1. The terms 32-bit and 64-bit refer to the way a computer's processor (also called a CPU), handles information. The 64-bit operating system handles large amounts of random access memory (RAM) more effectively than a 32-bit system.
  2. A 64-bit processor is capable of storing more computational values, including memory addresses, which means it's able to address over four billion times as much physical memory as a 32-bit processor.
  3. Through hardware emulation, it's possible to run 32-bit software and operating systems on a machine with a 64-bit processor. The opposite isn't true however, in that 32-bit processors cannot run software designed with 64-bit architecture in mind.

thumb_up 2 thumb_down 0 flag 0

We can count number of set bits in O(1) time using lookup table.

In GCC, we can directly count set bits using __builtin_popcount(). So we can avoid a separate function for counting set bits.

// C++ program to demonstrate __builtin_popcount()

#include <iostream>
using namespace std;

int main()
{
    cout << __builtin_popcount(4) << endl;
    cout << __builtin_popcount(15);
    return 0;
}


Output :

1

4

thumb_up 8 thumb_down 0 flag 0

A linked list is a linear data structure in which the elements contain references to the next (and optionally the previous) element. Linked lists offer O(1) insert after and removal of any element with known memory location, O(1) list concatenation, and O(1) access at the front (and optionally back) positions as well as O(1) next element access. Random access and random index insertion/removal have O(n) complexity.

Here head is pointing to the first node A, and the last node, with data part 'D', contains NULL as it has no more nodes to point to.

Advantages over arrays
1) Dynamic size
2) Ease of insertion/deletion

Drawbacks:
1) Random access is not allowed. We have to access elements sequentially starting from the first node. So we cannot do binary search with linked lists.
2) Extra memory space for a pointer is required with each element of the list.

Types of linked list (see the sketch below) :-

1) Singly linked list :- each node contains data and the address of the next node.

2) Doubly linked list :- each node contains data and the addresses of both the previous and the next node.

3) Circular linked list :- each node contains data and the address of the next node, and the next pointer of the last node points back to the first node.
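
A minimal sketch of a singly linked list node and its sequential traversal (names are illustrative):

#include <iostream>
using namespace std;

// A singly linked list node: data plus the address of the next node
struct Node {
    int data;
    Node* next;
    Node(int d) : data(d), next(nullptr) {}
};

int main() {
    // Build 1 -> 2 -> 3; the last node's next is NULL
    Node* head = new Node(1);
    head->next = new Node(2);
    head->next->next = new Node(3);

    // Sequential access: start at head and follow next pointers
    for (Node* cur = head; cur != nullptr; cur = cur->next)
        cout << cur->data << " ";
    return 0;
}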

thumb_up 0 thumb_down 0 flag 0

We can use an adjacency list to maintain the cricket score.

thumb_up 7 thumb_down 0 flag 1

Deleting a derived class object using a pointer to a base class that has a non-virtual destructor results in undefined behavior. To correct this situation, the base class should be defined with a virtual destructor.

So, Virtual destructors are useful when we have to delete an instance of a derived class through a pointer to base class:

// A program with virtual destructor

#include <iostream>
#include <cstdio>

using namespace std;

class base {
public:
    base()
    {
        cout << "Constructing base \n";
    }
    virtual ~base()
    {
        cout << "Destructing base \n";
    }
};

class derived : public base {
public:
    derived()
    {
        cout << "Constructing derived \n";
    }
    ~derived()
    {
        cout << "Destructing derived \n";
    }
};

int main(void)
{
    derived *d = new derived();
    base *b = d;
    delete b;
    getchar();
    return 0;
}

Output :

Constructing base
Constructing derived
Destructing derived
Destructing base

See article

thumb_up 0 thumb_down 0 flag 0

Students whose marks are in the given range can be found using Binary Search Tree.

Let the given range be (k1, k2) where k1<k2. Now, we have to check whether the marks of students lie in the given range.

Algorithm:
1) If value of root's key is greater than k1, then recursively call in left subtree.
2) If value of root's key is in range, then print the name of the student.
3) If value of root's key is smaller than k2, then recursively call in right subtree.

Example :

For example, if k1 = 10 and k2 = 22, then our function must print the name of the students whose marks lie in the given range of (10,22).
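
A minimal sketch of this range search; the Node layout (marks plus student name) is an illustrative assumption:

#include <iostream>
#include <string>
using namespace std;

// BST node keyed by marks; each node also carries the student's name
struct Node {
    int marks;
    string name;
    Node *left = nullptr, *right = nullptr;
    Node(int m, string n) : marks(m), name(n) {}
};

// Print every student whose marks lie in [k1, k2]
void printRange(Node* root, int k1, int k2) {
    if (root == nullptr) return;
    if (k1 < root->marks)                     // range may continue on the left
        printRange(root->left, k1, k2);
    if (k1 <= root->marks && root->marks <= k2)
        cout << root->name << " (" << root->marks << ")\n";
    if (k2 > root->marks)                     // range may continue on the right
        printRange(root->right, k1, k2);
}

int main() {
    Node* root = new Node(20, "B");
    root->left  = new Node(8,  "A");
    root->right = new Node(22, "C");
    printRange(root, 10, 22);   // prints B (20) and C (22)
    return 0;
}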

thumb_up 3 thumb_down 0 flag 0

Copy Constructor is a type of constructor which is used to create a copy of an already existing object of a class type. It is usually of the form X (X&), where X is the class name. The compiler provides a default Copy Constructor to all the classes.

If a copy constructor is not defined in a class, the compiler itself defines one. If the class has pointer variables and does some dynamic memory allocation, then it is a must to have a user-defined copy constructor.

Copy constructor is used to :

  1. Initialize one object from another of the same type.
  2. Copy an object to pass it as an argument to a function.
  3. Copy an object to return it from a function.

most common form of copy constructor is :

classname (const classname &obj)
{
    // body of constructor
}

Here, obj is a reference to an object that is being used to initialize another object.

Example :

#include <iostream>
#include <stdio.h>

using namespace std;

class Test
{
public:
    Test() {}
    Test(const Test &t)
    {
        cout << "Copy constructor called " << endl;
    }
};

int main()
{
    Test t1, t2;
    t2 = t1;        // assignment, not the copy constructor
    Test t3 = t1;   // copy constructor called
    getchar();
    return 0;
}

Output : Copy constructor called

Default constructor does only shallow copy.

Deep copy is possible only with user defined copy constructor. In user defined copy constructor, we make sure that pointers (or references) of copied object point to new memory locations.

thumb_up 3 thumb_down 0 flag 1

The name of an operator function is always the operator keyword followed by the symbol of the operator, and operator functions are called when the corresponding operator is used.

Operator overloading helps us to do these two things:

  1. Use operators to manipulate objects of user-defined types
  2. Expand the use of existing operators

Example:

class box
{
    int height;
    int width;
public:
    box operator +(box &b)
    {
        box temp;
        temp.height = height + b.height;
        temp.width  = width + b.width;
        return temp;
    }
};

int main()
{
    box b1, b2, b3;
    b3 = b1 + b2;   // calls b1.operator+(b2)
}

See article

thumb_up 2 thumb_down 0 flag 0

There are two ways of creating threads :

  1. by extending Thread class.
  2. by implementing Runnable Interface

1. Creating thread by extending Thread class

The easiest way to create a thread is to write a class that extends the Thread class, then create an instance of our thread class and call its start() method. These three steps can be stated as :

  1. A subclass must extend Thread class
  2. Define run() method in the subclass.
  3. Create an object of the subclass and call start() method.

Example :

class MultiThread extends Thread
{
    public void run()
    {
        System.out.println("thread is running...");
    }

    public static void main(String args[])
    {
        MultiThread mt = new MultiThread();
        mt.start();
    }
}

2. Creating thread by implementing Runnable interface

The Runnable interface describes the method needed to create and interact with a thread. In order to use the Runnable interface in our class, we must define the method it declares: the run() method.

The content of the run() method is the portion of our program that will become the new thread. Statements outside the run() method are part of the main thread. The new thread terminates when the run() method terminates, and control then returns to the statement that started it. So,

  1. Write a class that implements Runnable interface
  2. Define run() method in the class
  3. Pass an instance of the class to the Thread class constructor
  4. Call start method on the Thread class

Example :

class A implements Runnable
{
    @Override
    public void run()
    {
        // implement run method here
        System.out.println("run method implemented");
    }

    public static void main(String[] args)
    {
        A obj = new A();
        Thread t1 = new Thread(obj);   // pass the Runnable instance to the Thread constructor
        t1.start();
    }
}

See article

thumb_up 9 thumb_down 0 flag 0

Typically LRU cache is implemented using a queue (implementation using doubly linked list) and a hash map.
We are given total possible page numbers that can be referred. We are also given cache (or memory) size (Number of page frames that cache can hold at a time). The LRU caching scheme is to remove the least recently used frame when the cache is full and a new page is referenced which is not there in cache.

When a page is accessed, there can be 2 cases:
1. Page is present in the cache - If the page is already present in the cache, we move the page to the start of the list.
2. Page is not present in the cache - If the page is not present in the cache, we add the page to the list.
How to add a page to the list:
a. If the cache is not full, add the new page to the start of the list.
b. If the cache is full, remove the last node of the linked list and move the new page to the start of the list.

We use two data structures to implement an LRU Cache.

  1. Queue which is implemented using a doubly linked list. The maximum size of the queue will be equal to the total number of frames available (cache size). The most recently used pages will be near the front end and the least recently used pages will be near the rear end.
  2. A Hash with page number as key and address of the corresponding queue node as value. Hash Map (key: page number, value: page) is used for O(1) access to pages in cache.

See article
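
A compact C++ sketch of the scheme described above, using std::list as the doubly linked list and std::unordered_map as the hash; the page numbers in main are just illustrative:

#include <iostream>
#include <list>
#include <unordered_map>
using namespace std;

class LRUCache {
    size_t capacity;
    list<int> pages;                              // front = most recently used, back = least recently used
    unordered_map<int, list<int>::iterator> pos;  // page number -> its node in the list
public:
    LRUCache(size_t cap) : capacity(cap) {}

    void refer(int page) {
        auto it = pos.find(page);
        if (it != pos.end()) {                    // hit: unlink the page so it can move to the front
            pages.erase(it->second);
        } else if (pages.size() == capacity) {    // miss and cache full: evict the LRU page at the back
            pos.erase(pages.back());
            pages.pop_back();
        }
        pages.push_front(page);                   // the referenced page becomes most recently used
        pos[page] = pages.begin();
    }

    void print() const {
        for (int p : pages) cout << p << ' ';
        cout << '\n';
    }
};

int main() {
    LRUCache cache(4);
    for (int p : {1, 2, 3, 1, 4, 5}) cache.refer(p);
    cache.print();                                // prints: 5 4 1 3
    return 0;
}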

thumb_up 2 thumb_down 0 flag 0

Static memory allocation : The allocation of memory for the specific fixed purposes of a program in a predetermined fashion controlled by the compiler is said to be static memory allocation. Static Allocation means, that the memory for our variables is allocated when the program starts. The size is fixed when the program is created. It applies to global variables, file scope variables, and variables qualified with static defined inside functions. Memory is allocated for the declared variable by the compiler. The address can be obtained by using 'address of' operator and can be assigned to a pointer. The memory is allocated during compile time. Since most of the declared variables have static memory, this kind of assigning the address of a variable to a pointer is known as static memory allocation.

Dynamic memory allocation : The allocation of memory (and possibly its later deallocation) during the running of a program and under the control of the program is said to be dynamic memory allocation. Our program now controls the exact size and the lifetime of these memory locations. If we don't free such memory, we'll run into memory leaks, which may cause our application to crash, since at some point it cannot allocate more memory.

Example :

#include <stdlib.h>

int* func() {
    int* mem = malloc(1024);   /* allocates 1024 bytes on the heap */
    return mem;
}

int* mem = func(); /* still accessible */

In the above example, the allocated memory is still valid and accessible even though the function has terminated. When we are done with the memory, we have to free it:

free(mem);

thumb_up 1 thumb_down 0 flag 0

If we want to define a class member that will be used independently of any object of that class, we precede its declaration with the keyword static.

When a member is declared static, it can be accessed before any object of its class is created and without reference to any object.

We can declare methods, variables and classes (inner classes only) to be static. The most common example of a static member is main(). The main() method is declared as static because it must be called before any object exists.

Static variables :

  1. They belong to a class and not to an instance of the class.
  2. They're initialized only once, when the class is first loaded into memory; that is, they're allocated memory only once in the lifetime of the program.
  3. A single copy of the variable is shared by all the instances of that class.

Static variables are accessible in the format ClassName.VariableName.

Static methods :

  1. can call only static methods.
  2. must only access static data.
  3. can't refer to super or this in any way.

Static methods are accessible in the format ClassName.MethodName.

See article

thumb_up 4 thumb_down 0 flag 0

In java, garbage means unreferenced objects.

garbage collection (GC) is a form of automatic memory management. Garbage Collection is process of reclaiming the runtime unused memory automatically. In other words, it is a way to destroy the unused objects. The garbage collector, or just collector, attempts to reclaim garbage, or memory occupied by objects that are no longer in use by the program. Garbage collection was invented by John McCarthy.

Advantages of Garbage Collection :

  • It makes java memory efficient because garbage collector removes the unreferenced objects from heap memory.
  • It is automatically done by the garbage collector(a part of JVM) so we don't need to make extra efforts.
  • It avoids dangling pointer bugs, which occur when a piece of memory is freed while there are still pointers to it, and one of those pointers is dereferenced; by then the memory may have been reassigned to another use, with unpredictable results.
  • It avoids double free bugs, which occur when the program tries to free a region of memory that has already been freed, and perhaps already been allocated again.
  • It avoids certain kinds of memory leaks, in which a program fails to free memory occupied by objects that have become unreachable, which can lead to memory exhaustion. (Garbage collection typically does not deal with the unbounded accumulation of data that is reachable but will actually not be used by the program.)
  • It enables efficient implementations of persistent data structures.

finalize() method : The finalize() method is invoked each time before the object is garbage collected. This method can be used to perform cleanup processing. This method is defined in the Object class as: protected void finalize(){}

gc() method : The gc() method is used to invoke the garbage collector to perform cleanup processing. The gc() is found in System and Runtime classes.

public class TestGarbage
{
    public void finalize()
    {
        System.out.println("object is garbage collected");
    }

    public static void main(String args[])
    {
        TestGarbage s1 = new TestGarbage();
        TestGarbage s2 = new TestGarbage();
        s1 = null;
        s2 = null;
        System.gc();
    }
}

    Output :

    object is garbage collected
    object is garbage collected

    thumb_up 13 thumb_down 0 flag 0

    1. Memory type : Stack is used for static memory allocation and Heap for dynamic memory allocation, both stored in the computer's RAM.

    2. How they work : Variables allocated on the stack are stored directly in memory and access to this memory is very fast, and its allocation is dealt with when the program is compiled. When a function or a method calls another function which in turn calls another function, and so on, the execution of all those functions remains suspended until the very last function returns its value. The stack is always reserved in a LIFO (last in, first out) order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack: freeing a block from the stack is nothing more than adjusting one pointer.

    Variables allocated on the heap have their memory allocated at run time and accessing this memory is a bit slower, but the heap size is only limited by the size of virtual memory . Element of the heap have no dependencies with each other and can always be accessed randomly at any time. You can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time.

    3. Which one is faster : The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or deallocation. The stack is faster because all free memory is always contiguous. No list needs to be maintained of all the segments of free memory, just a single pointer to the current top of the stack. Compilers usually store this pointer in a special, fast register for this purpose. What's more, subsequent operations on a stack are usually concentrated within very nearby areas of memory, which at a very low level is good for optimization by the processor on-die caches.

    4. When to use : We can use the stack if we know exactly how much data we need to allocate before compile time and it is not too big. We can use the heap if we don't know exactly how much data we will need at run time or if we need to allocate a lot of data.

    5. Size : The size of the stack is set when a thread is created. The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).

    6. Type of data structure : Stack is a linear data structure, also known as LIFO(last in first out) while Heap is a non linear data structure. It is like an upside down tree in which root node is at the top.
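
    A tiny C++ illustration of the difference (the variable names are arbitrary): the stack variable is reclaimed automatically when the function returns, while the heap allocation must be freed explicitly.

    #include <iostream>

    int main() {
        int onStack = 42;              // stack allocation: freed automatically on return
        int *onHeap = new int(42);     // heap allocation: lives until we delete it

        std::cout << onStack << " " << *onHeap << std::endl;

        delete onHeap;                 // forgetting this would leak memory
        return 0;
    }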

    thumb_up 4 thumb_down 0 flag 0

    A function returning a pointer is one whose return value is a pointer.

    int *function_pointer()

    Example :

    int *add(int num1,int num2)

    {
    .
    .
    }

    A pointer to a function is a pointer that points to a function. A function pointer is a pointer that either has an indeterminate value, or has a null pointer value, or points to a function.

    Example 1 :

    int (*f)()

    This is a pointer to a function.  The name of the pointer is 'f'.  But the function it points to could be any function that takes no parameters and returns an int. We use function pointers when we need to implement an asynchronous (callback) mechanism.

    Example 2 :

    int add();
    int (*addi)();
    addi =&add;
    then (*addi)() represents add();
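
    A small self-contained version of example 2; here the add function is assumed to take two ints, purely for illustration:

    #include <stdio.h>

    int add(int a, int b) { return a + b; }

    int main(void)
    {
        int (*addi)(int, int);            /* pointer to a function taking two ints and returning int */
        addi = &add;                      /* or simply: addi = add; */
        printf("%d\n", (*addi)(2, 3));    /* prints 5; addi(2, 3) also works */
        return 0;
    }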


    thumb_up 3 thumb_down 0 flag 0

    Topological sort or topological ordering of a directed graph is a linear ordering of its vertices such that for every directed edge uv from vertex u to vertex v, u comes before v in the ordering. For instance, the vertices of the graph may represent tasks to be performed, and the edges may represent constraints that one task must be performed before another; in this application, a topological ordering is just a valid sequence for the tasks. A topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph (DAG). Any DAG has at least one topological ordering, and algorithms are known for constructing a topological ordering of any DAG in linear time.

    For example, a topological sorting of the following graph is "5 4 2 3 1 0". There can be more than one topological sorting for a graph. For example, another topological sorting of the following graph is "4 5 2 3 1 0". The first vertex in topological sorting is always a vertex with in-degree as 0 (a vertex with no in-coming edges).
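
    A DFS-based C++ sketch of topological sorting; the edge list below assumes the usual 6-vertex example graph (5->2, 5->0, 4->0, 4->1, 2->3, 3->1), so the exact edges are an assumption:

    #include <iostream>
    #include <vector>
    #include <stack>
    using namespace std;

    // DFS-based topological sort: a vertex is pushed on the stack only after
    // every vertex it points to has been finished.
    void topoDFS(int u, const vector<vector<int> >& adj, vector<bool>& visited, stack<int>& order) {
        visited[u] = true;
        for (int v : adj[u])
            if (!visited[v])
                topoDFS(v, adj, visited, order);
        order.push(u);
    }

    int main() {
        int V = 6;
        vector<vector<int> > adj(V);
        // Assumed edges of the example graph: 5->2, 5->0, 4->0, 4->1, 2->3, 3->1
        adj[5] = {2, 0}; adj[4] = {0, 1}; adj[2] = {3}; adj[3] = {1};

        vector<bool> visited(V, false);
        stack<int> order;
        for (int u = 0; u < V; ++u)
            if (!visited[u])
                topoDFS(u, adj, visited, order);

        while (!order.empty()) { cout << order.top() << ' '; order.pop(); }  // prints 5 4 2 3 1 0
        return 0;
    }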

    thumb_up 9 thumb_down 0 flag 0

    Cycle in a Graph can be detected using Depth First Traversal. DFS for a connected graph produces a tree. There is a cycle in a graph only if there is a back edge present in the graph. A back edge is an edge that is from a node to itself (selfloop) or one of its ancestor in the tree produced by DFS. In the following graph, there are 3 back edges, marked with cross sign. We can observe that these 3 back edges indicate 3 cycles present in the graph.

    For a disconnected graph, we get a DFS forest as output. To detect a cycle, we can check for a cycle in the individual trees by checking for back edges.

    To detect a back edge, we can keep track of vertices currently in recursion stack of function for DFS traversal. If we reach a vertex that is already in the recursion stack, then there is a cycle in the tree. The edge that connects current vertex to the vertex in the recursion stack is back edge. We have used recStack[] array to keep track of vertices in the recursion stack.

    Depth first search is more memory efficient than breadth first search as we can backtrack sooner. It is also easier to implement if we use the call stack but this relies on the longest path not overflowing the stack. Also if our graph is directed then we have to not just remember if we have visited a node or not, but also how we got there. Otherwise we might think we have found a cycle but in reality all we have is two separate paths A->B but that doesn't mean there is a path B->A. For example,

    If we do BFS starting from 0, it will detect that a cycle is present, but actually there is no cycle.

    With a depth first search we can mark nodes as visited as we descend and unmark them as we backtrack.
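
    A C++ sketch of this recStack[] idea; the 3-vertex graph in main is just an illustrative example:

    #include <iostream>
    #include <vector>
    using namespace std;

    // Returns true if a back edge is reachable from u.
    bool dfsCycle(int u, const vector<vector<int> >& adj,
                  vector<bool>& visited, vector<bool>& recStack) {
        visited[u] = true;
        recStack[u] = true;                  // u is on the current recursion stack
        for (int v : adj[u]) {
            if (!visited[v]) {
                if (dfsCycle(v, adj, visited, recStack)) return true;
            } else if (recStack[v]) {        // back edge to an ancestor -> cycle
                return true;
            }
        }
        recStack[u] = false;                 // done with u, remove it from the stack
        return false;
    }

    bool hasCycle(const vector<vector<int> >& adj) {
        int V = adj.size();
        vector<bool> visited(V, false), recStack(V, false);
        for (int u = 0; u < V; ++u)
            if (!visited[u] && dfsCycle(u, adj, visited, recStack))
                return true;
        return false;
    }

    int main() {
        vector<vector<int> > adj = {{1}, {2}, {0}};   // 0 -> 1 -> 2 -> 0 is a cycle
        cout << (hasCycle(adj) ? "Cycle found" : "No cycle") << '\n';
        return 0;
    }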

    thumb_up 4 thumb_down 1 flag 0

    Breadth First search : BFS is an algorithm for traversing or searching tree or graph data structures. It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as a "search key"') and explores the neighbor nodes first, before moving to the next level neighbors.

    Breadth First Traversal (or Search) for a graph is similar to Breadth First Traversal of a tree (See method 2 of this post). Unlike trees, graphs may contain cycles, so we may come to the same node again. To avoid processing a node more than once, we use a boolean visited array. For simplicity, it is assumed that all vertices are reachable from the starting vertex.
    For example, in the following graph, we start traversal from vertex 2. When we come to vertex 0, we look for all adjacent vertices of it. 2 is also an adjacent vertex of 0. If we don't mark visited vertices, then 2 will be processed again and it will become a non-terminating process. A Breadth First Traversal of the following graph is 2, 0, 3, 1.

    For implementation see article

    Depth-first search : DFS is an algorithm for traversing or searching tree or graph data structures. It starts at the root (selecting some arbitrary node as the root in the case of a graph) and explores as far as possible along each branch before backtracking.

    Depth First Traversal (or Search) for a graph is similar to Depth First Traversal of a tree. Unlike trees, graphs may contain cycles, so we may come to the same node again. To avoid processing a node more than once, we use a boolean visited array.
    For example, in the following graph, we start traversal from vertex 2. When we come to vertex 0, we look for all adjacent vertices of it. 2 is also an adjacent vertex of 0. If we don't mark visited vertices, then 2 will be processed again and it will become a non-terminating process. A Depth First Traversal of the following graph is 2, 0, 1, 3.

    For implementation see article

    In terms of implementation, BFS is usually implemented with a queue, while DFS uses a stack. Both algorithms provide a complete traversal of the graph, visiting every vertex in the graph.

    If there are memory constraints, DFS is a good choice, as BFS takes up a lot of space.
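
    A queue-based C++ sketch of BFS; the adjacency list assumes the example edges 0->1, 0->2, 1->2, 2->0, 2->3, 3->3, which is an assumption about the graph referred to above:

    #include <iostream>
    #include <queue>
    #include <vector>
    using namespace std;

    void bfs(int start, const vector<vector<int> >& adj) {
        vector<bool> visited(adj.size(), false);
        queue<int> q;
        visited[start] = true;
        q.push(start);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            cout << u << ' ';
            for (int v : adj[u])
                if (!visited[v]) {          // mark before enqueueing so a vertex is processed once
                    visited[v] = true;
                    q.push(v);
                }
        }
    }

    int main() {
        // Assumed edges of the example graph: 0->1, 0->2, 1->2, 2->0, 2->3, 3->3
        vector<vector<int> > adj = {{1, 2}, {2}, {0, 3}, {3}};
        bfs(2, adj);                        // prints: 2 0 3 1
        return 0;
    }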

    thumb_up 8 thumb_down 1 flag 0

    Max heap : a max heap is a complete binary tree in which the value of each node is less than or equal to the value of its parent, with the maximum-value element at the root.

    Example :      10
                          /  \
                        5    3
                       / \
                    4     2

    Min heap : a min heap is a complete binary tree in which the value of each node is greater than or equal to the value of its parent, with the minimum-value element at the root.

    Example :      2
                         / \
                       5   3
                      / \
                    7    6

    Heaps are used for Prioritizing and Sorting.

    Priority Queues: Priority queues can be efficiently implemented using a Binary Heap because it supports insert(), delete(), extractMax() and decreaseKey() operations in O(log n) time. Binomial Heap and Fibonacci Heap are variations of Binary Heap. These variations perform union in O(log n) time as well, which is an O(n) operation in a Binary Heap. Heap-implemented priority queues are used in graph algorithms like Prim's algorithm and Dijkstra's algorithm.

    Order statistics: The Heap data structure can be used to efficiently find the kth smallest (or largest) element in an array.

    Selection algorithms : A heap allows access to the min or max element in constant time, and other selections (such as median or kth-element) can be done in sub-linear time on data that is in a heap.

    Graph algorithms : By using heaps as internal traversal data structures, run time will be reduced by polynomial order. Examples of such problems are Prim's minimal-spanning-tree algorithm and Dijkstra's shortest-path algorithm.
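
    A short C++ illustration using std::priority_queue (the sample data and k are arbitrary): a max heap, a min heap, and the k-th largest element found with a size-k min heap.

    #include <functional>
    #include <iostream>
    #include <queue>
    #include <vector>
    using namespace std;

    int main() {
        vector<int> data = {10, 5, 3, 4, 2};

        // Max heap: largest element on top
        priority_queue<int> maxHeap(data.begin(), data.end());
        cout << "Max element: " << maxHeap.top() << '\n';          // 10

        // Min heap: smallest element on top
        priority_queue<int, vector<int>, greater<int> > minHeap(data.begin(), data.end());
        cout << "Min element: " << minHeap.top() << '\n';          // 2

        // Order statistics: k-th largest element via a min heap of size k
        int k = 3;
        priority_queue<int, vector<int>, greater<int> > smallK;
        for (int x : data) {
            smallK.push(x);
            if ((int)smallK.size() > k) smallK.pop();              // drop the smallest extra element
        }
        cout << k << "rd largest: " << smallK.top() << '\n';       // 4
        return 0;
    }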

    See article

    thumb_up 2 thumb_down 0 flag 0

    Global variables
    1. A global variable is a variable with global scope meaning that it is visible (hence accessible) throughout the program, unless shadowed.

    2. The set of all global variables is known as the global environment or global state.

    3. These variables can be accessed (i.e. known) by any function comprising the program.

    4. They are implemented by associating memory locations with variable names.

    5. They do not get recreated if the function is recalled.

    6. Global variables are used extensively to pass information between sections of code that do not share a caller/callee relation like concurrent threads and signal handlers.

    7. Variables defined in global scope are allocated in a data segment (or, generally, a memory space requested from the operating system) that exists for the lifetime of the process.

    Static variables :

    1. A static variable is a variable that has been allocated statically so that its lifetime or "extent" extends across the entire run of the program.

    2. Static variables are allocated memory in data segment, not stack segment. See memory layout of C programs for details.

    3. Static variables (like global variables) are initialized as 0 if not initialized explicitly. For example in the below program, value of x is printed as 0, while value of y is something garbage. See this for more details.

    #include <stdio.h>

    int main()
    {
        static int x;
        int y;
        printf("%d \n %d", x, y);
    }

    Output :

    0
    [some_garbage_value]

    4. A static int variable remains in memory while the program is running. A normal or auto variable is destroyed when a function call where the variable was declared is over.

    For example, we can use a static int to count the number of times a function is called, but an auto variable can't be used for this purpose.

    For example, the below program prints "1 2"

    #include<stdio.h>
    int fun()
    {
    static int count = 0;
    count++;
    return count;
    }

    int main()
    {
    printf("%d ", fun());
    printf("%d ", fun());
    return 0;
    }

    Output :

    1 2

    But below program prints 1 1

    #include<stdio.h>
    int fun()
    {
    int count = 0;
    count++;
    return count;
    }

    int main()
    {
    printf("%d ", fun());
    printf("%d ", fun());
    return 0;
    }

    Output :

    1 1

    thumb_up 8 thumb_down 0 flag 0

    1. Methods : Interface contains methods that must be abstract, while abstract class may contain concrete methods / abstract methods.

    2. Members : An interface cannot contain fields, constructors, or destructors and has only the members' signatures with no implementation, while an abstract class can contain fields, constructors, destructors, and implemented members.

    3. Inheritance : An interface can support multiple inheritance while an abstract class cannot support multiple inheritance. Thus, a class may inherit multiple interfaces but only one abstract class.

    4. Member variables : An interface contains variables that must be static and final, while an abstract class may contain non-final and final variables.

    5. Members visibility : Members in an interface are public by default, while abstract class may contain non-public members.

    6. Keyword : An interface should be implemented using keyword "implements", while an abstract class should be extended using keyword "extends".

    7. An interface can extend another interface only, while an abstract class can extend another class and implement multiple interfaces.

    8. An interface is absolutely abstract, while an abstract class can be invoked if a main() exists.

    9. Flexibility : An interface is more flexible than an abstract class because a class can only "extends" one super class, but can "implements" multiple interfaces.

    Example :

    1. Abstract class

    abstract class MyClass
    {
       abstract void absmethod1();
       abstract void absmethod2();
       void concmethod()
       {
       //concrete method
       }
    }

    2. Interface

    interface myinterface
    {
        void method1();
        void method2();

        // no concrete method
    }

    thumb_up 6 thumb_down 2 flag 0

    We can load 1 GB (say RAM = 1 GB) of file content into main memory at a time. We sort that data using in-place heap sort, write the sorted data to a temporary file (using file handling), and do this for all 1000 chunks (files of 1 GB each). We then merge these sorted files and read the N smallest numbers from the merged output.
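
    A rough C++ sketch of the merge step, with small in-memory vectors standing in for the sorted 1 GB temporary files (the chunk contents and N below are illustrative):

    #include <iostream>
    #include <queue>
    #include <tuple>
    #include <vector>
    using namespace std;

    // k-way merge of sorted chunks using a min heap of (value, chunk id, index in chunk).
    int main() {
        vector<vector<int> > chunks = {{1, 4, 9}, {2, 3, 8}, {0, 5, 7}};
        int N = 5;                                   // we want the N smallest numbers

        typedef tuple<int, int, int> Item;           // (value, chunk id, position in chunk)
        priority_queue<Item, vector<Item>, greater<Item> > heap;
        for (int c = 0; c < (int)chunks.size(); ++c)
            if (!chunks[c].empty())
                heap.push(make_tuple(chunks[c][0], c, 0));

        while (N-- > 0 && !heap.empty()) {
            Item top = heap.top(); heap.pop();
            int val = get<0>(top), c = get<1>(top), i = get<2>(top);
            cout << val << ' ';                      // next smallest across all chunks
            if (i + 1 < (int)chunks[c].size())
                heap.push(make_tuple(chunks[c][i + 1], c, i + 1));
        }
        cout << '\n';                                // prints: 0 1 2 3 4
        return 0;
    }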

    thumb_up 8 thumb_down 0 flag 0

    Critical Section: a critical section is group of instructions/statements or region of code that need to be executed atomically (read this post for atomicity), such as accessing a resource (file, input or output port, global data, etc.).

    Remainder Section: The remaining portion of the program excluding the Critical Section.

    Race Condition: The final output of the code depends on the order in which the shared variables are accessed. This is termed a race condition (sometimes called a race-around condition).

    In concurrent programming, if one thread tries to change the value of shared data at the same time as another thread tries to read the value (i.e. a data race across threads), the result is unpredictable. The access to such shared variables (shared memory, shared files, shared ports, etc.) needs to be synchronized.

    A simple solution to the critical section problem can be sketched as shown below:

    acquireLock();
    Process Critical Section
    releaseLock();

    A thread must acquire a lock prior to executing critical section. The lock can be acquired by only one thread. There are various ways to implement locks in the above pseudo code.

    A solution for the critical section problem must satisfy the following three conditions:

    1. Mutual Exclusion: If a process Pi is executing in its critical section, then no other process is allowed to enter into the critical section.
    2. Progress: If no process is executing in the critical section and some processes wish to enter it, then only those processes that are not executing in their remainder section can participate in deciding which will enter next, and this selection cannot be postponed indefinitely.
    3. Bounded Waiting: There exists a bound on the number of times other processes are allowed to enter the critical section after a process has made a request to access the critical section and before that request is granted.
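
    A minimal C++ sketch of the acquireLock()/releaseLock() pseudo code above using std::mutex; the shared counter is just an illustrative critical section:

    #include <iostream>
    #include <mutex>
    #include <thread>

    int counter = 0;                               // shared data
    std::mutex m;                                  // the lock from the pseudo code above

    void work() {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> lock(m);   // acquireLock()
            ++counter;                             // critical section
        }                                          // releaseLock() when lock goes out of scope
    }

    int main() {
        std::thread t1(work), t2(work);
        t1.join();
        t2.join();
        std::cout << counter << '\n';              // always 200000 with the mutex in place
        return 0;
    }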

    thumb_up 12 thumb_down 0 flag 0

    Differences between C++ and Java:

    1. Design Goal : Java was created initially as an interpreter for printing systems and to support network computing. It relies on a virtual machine to be secure and highly portable. C++ was designed for system and application programming, extending the C programming language.

    2. Execution : At compilation time, Java source code is converted into bytecode. The interpreter (the JVM) executes this bytecode at run time and gives the output, so Java is interpreted for the most part and hence platform independent. C++ is compiled by a compiler that converts source code into machine-level code, so C++ binaries are platform dependent.

    3. Platform dependency : Java is a platform-independent language, but C++ depends upon the operating system, machine, etc.

    4. Java uses both a compiler and an interpreter, while in C++ there is only a compiler.

    5. Memory safety : Java is a memory-safe language, whereas C++ is not. This means that errors in Java programs are detected in defined ways - for example, attempting a bad cast or indexing an array out of bounds results in an exception. Similar errors in C++ lead to undefined behavior, where instead of raising an exception or crashing, your program might keep running and crash later or even give the wrong answer or behavior.

    6. Libraries : Java ships with a large standard library for solving many specific tasks, while C++ has a smaller standard library and often relies on third-party libraries.

    7. Inheritance : Java does not support multiple inheritance of classes (it uses interfaces instead), while C++ supports multiple inheritance. In C++, run-time polymorphism is opted into with the virtual keyword, whereas in Java instance methods are virtual by default.

    8. Memory management : Java has an automatic system for allocating and freeing memory (garbage collection). While C++ uses memory allocation and de-allocation functions.

    9. Operator overloading : Java does not support operator overloading. While C++ permits operator overloading.

    10. Strings : Java has a String class as part of the java.lang package, while null-terminated arrays of characters are traditionally used in C and C++ (C++ also provides std::string).

    11. Destructors : No destructor in Java while C++ has destructors.

    12. typedef : Java does not support typedef or #define while C++ supports typedef and #define.

    13. struct, union and pointer : Java does not support the struct, union and pointer data type while C++ supports all these data types.

    14. Calling a method : Java supports call by value only; there is no call by reference in Java, while C++ supports both call by value and call by reference.

    15. Thread : Java has built in support for threads while C++ does not have built in support for threads. It relies on third-party libraries for thread support.

    Advantages of C++ :

    1. C++ design decisions always favor execution speed and lower memory usage first.

    2. It finds a wide range of applications – from GUI applications to 3D graphics for games to real-time mathematical simulations.

    3. provide more freedom to developer.

    Advantages of Java :

    1. Interpreted

    2. Robust

    3. Secure

    4. Portable

    5. Multithreaded

    6. Dynamic

    7. Garbage collected.

    Disadvantages of C++

    1. Less secure

    2. High Complexity

    3. Lack of garbage collector

    4. No built-in support for threads.

    Disadvantages of Java :

    1. Slower than C++.

    2. Java takes more memory space than C++.

    thumb_up 2 thumb_down 0 flag 0

    A virtual function is a member function that we expect to be redefined in derived classes. When we refer to a derived class object using a pointer or a reference to the base class, we can call a virtual function for that object and execute the derived class's version of the function.

    Virtual functions ensure that the correct function is called for an object, regardless of the expression used to make the function call.

    Suppose a base class contains a function declared as virtual and a derived class defines the same function. The function from the derived class is invoked for objects of the derived class, even if it is called using a pointer or reference to the base class. The following example shows a base class that provides an implementation of the PrintBalance function and two derived classes :

    #include <iostream>
    using namespace std;

    class Account {
    public:
       Account( double d ) { _balance = d; }
       virtual double GetBalance() { return _balance; }
       virtual void PrintBalance() { cerr << "Error. Balance not available for base type." << endl; }
    private:
       double _balance;
    };

    class CheckingAccount : public Account {
    public:
       CheckingAccount(double d) : Account(d) {}
       void PrintBalance() { cout << "Checking account balance: " << GetBalance() << endl; }
    };

    class SavingsAccount : public Account {
    public:
       SavingsAccount(double d) : Account(d) {}
       void PrintBalance() { cout << "Savings account balance: " << GetBalance(); }
    };

    int main() {
       // Create objects of type CheckingAccount and SavingsAccount.
       CheckingAccount *pChecking = new CheckingAccount( 100.00 );
       SavingsAccount  *pSavings  = new SavingsAccount( 1000.00 );

       // Call PrintBalance using a pointer to Account.
       Account *pAccount = pChecking;
       pAccount->PrintBalance();

       // Call PrintBalance using a pointer to Account.
       pAccount = pSavings;
       pAccount->PrintBalance();
    }

    In the preceding code, the calls to PrintBalance are identical, except for the object pAccount points to. Because PrintBalance is virtual, the version of the function defined for each object is called. The PrintBalance function in the derived classes CheckingAccount and SavingsAccount "override" the function in the base class Account.

    If a class is declared that does not provide an overriding implementation of the PrintBalance function, the default implementation from the base class Account is used.

    Functions in derived classes override virtual functions in base classes only if their type is the same. A function in a derived class cannot differ from a virtual function in a base class in its return type only; the argument list must differ as well.

    When calling a function using pointers or references, the following rules apply:

    • A call to a virtual function is resolved according to the underlying type of object for which it is called.

    • A call to a nonvirtual function is resolved according to the type of the pointer or reference.

    Because virtual functions are called only for objects of class types, we cannot declare global or static functions as virtual.

    The virtual keyword can be used when declaring overriding functions in a derived class, but it is unnecessary; overrides of virtual functions are always virtual. See article

    thumb_up 7 thumb_down 0 flag 0

    Whenever the CPU becomes idle, it is the job of the CPU Scheduler to select another process from the ready queue to run next.

    Schedulers are often implemented so they keep all computer resources busy (as in load balancing), allow multiple users to share system resources effectively, or to achieve a target quality of Service. A scheduler may aim at one of many goals, for example, maximizing throughput (the total amount of work completed per time unit), minimizing response time (time from work becoming enabled until the first point it begins execution on resources), or minimizing latency (the time between work becoming enabled and its subsequent completion), maximizing fairness (equal CPU time to each process, or more generally appropriate times according to the priority and workload of each process). See article

    There are three types of process scheduler :

    1. Long term or job scheduler : It brings new processes to the 'Ready State'. It controls the degree of multi-programming, i.e., the number of processes present in the ready state at any point of time.

    2. Short term or CPU scheduler : It is responsible for selecting one process from the ready state and scheduling it on the running state. Note: the short term scheduler only selects the process to schedule; it doesn't load the process for running.
    The dispatcher is responsible for loading the process selected by the short term scheduler onto the CPU (Ready to Running state); context switching is done by the dispatcher only. A dispatcher does the following:
    1) Switching context.
    2) Switching to user mode.
    3) Jumping to the proper location in the newly loaded program.

    3. Medium term scheduler : It is responsible for suspending and resuming processes. It mainly does swapping (moving processes from main memory to disk and vice versa). See article

    Following are the scheduling algorithms :

    1. First-Come First-Serve Scheduling, FCFS

    2. Shortest-Job-First Scheduling, SJF

    3. Priority Scheduling

    4. Round Robin Scheduling

    5. Multilevel Queue Scheduling

    6. Multilevel Feedback-Queue Scheduling

    7. Shortest Remaining Time First(SRTF)

    thumb_up 4 thumb_down 0 flag 0

    A linked list is known as heterogeneous when its nodes can contain different types of information.
    A void pointer can point to any type of data, either a built-in data type or a user-defined structure.

    We can do this by creating an array or linked list of elements that encode both the data and the type of data. We could use a struct that includes a type indicator and a union of the various types that we want to handle, and then create an array or linked list of that struct:

    typedef struct {
        int type_indicator;
        union {
            float f;
            int i;
            double d;
            void *p;
            char c;
        } data;          /* the union member needs a name (or a C11 anonymous union) */
    } item;

    item array[10];

    For a linked list instead of an array, we would also need to add an item *next pointer, as in the sketch below.
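
    A rough sketch of that linked-list variant; the node layout, field names, and sample values are illustrative:

    #include <stdio.h>

    typedef struct node {
        int type_indicator;              /* e.g. 0 = int, 1 = float (convention chosen here) */
        union {
            int i;
            float f;
        } data;
        struct node *next;
    } node;

    int main(void) {
        node second;
        second.type_indicator = 1;  second.data.f = 2.5f;  second.next = NULL;

        node first;
        first.type_indicator = 0;   first.data.i = 10;     first.next = &second;

        for (node *p = &first; p != NULL; p = p->next) {
            if (p->type_indicator == 0) printf("int: %d\n", p->data.i);
            else                        printf("float: %f\n", p->data.f);
        }
        return 0;
    }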

    thumb_up 2 thumb_down 0 flag 0

    printf:
    The printf function is used to print a character stream of data on the stdout console.

    Syntax :

        int printf(const char* str, ...);

    Example :

    // simple print on stdout
    #include<stdio.h>
    int main()
    {
       printf("hello geeksquiz");
       return 0;
    }

    Output : hello geeksquiz

    sprintf :

    sprintf is like printf. Instead of displaying the formatted string on the standard output console, it stores the formatted data in the string pointed to by the char pointer (the very first parameter), i.e. the char buffer specified in sprintf. The destination of the output is the only difference between printf and sprintf.

    Syntax :

        int sprintf(char *str, const char *string,...);

    Example :

    // Example program to demonstrate sprintf()
    #include<stdio.h>
    int main()
    {
        char buffer[50];
        int a = 10, b = 20, c;
        c = a + b;
        sprintf(buffer, "Sum of %d and %d is %d", a, b, c);

         // The string "sum of 10 and 20 is 30" is stored
        // into buffer instead of printing on stdout
        printf("%s", buffer);

         return 0;
    }

    Output : Sum of 10 and 20 is 30

    For detailed explanation see article

    thumb_up 4 thumb_down 0 flag 0

    Deadlock is a situation in which 2 or more processes wait for each other to finish and none of them ever finishes. Consider an example where 2 trains are coming towards each other on the same track and there is only one track: neither of the trains can move forward once they are in front of each other. A similar situation occurs in an operating system when there are 2 or more processes that hold some resources and wait for resources held by the others.

    Real life example of deadlock :

    Your Internet is not working because you forgot to make a payment. It will work once you pay, but you cannot pay because there is no Internet, as the only payment mode accepted by the service provider is online payment / Internet banking. Here:

    1. Deadlock can arise if the following four conditions hold simultaneously (necessary conditions):
      Mutual Exclusion: One or more than one resource are non-sharable (Only one process can use at a time)
      Hold and Wait: A process is holding at least one resource and waiting for resources.
      No Preemption: A resource cannot be taken from a process unless the process releases the resource.
      Circular Wait: A set of processes are waiting for each other in circular form.


      Methods for handling deadlock
      There are three ways to handle deadlock
      1) Deadlock prevention or avoidance: The idea is to not let the system into deadlock state.

      2) Deadlock detection and recovery: Let deadlock occur, then do preemption to handle it once occurred.

      3) Ignore the problem all together: If deadlock is very rare, then let it happen and reboot the system. This is the approach that both Windows and UNIX take.

    For detailed explanation see Deadlock prevention and Deadlock Recovery

    Virtual Memory

    If your computer lacks the random access memory (RAM) needed to run a program or operation, Windows uses virtual memory to compensate. Virtual memory combines your computer's RAM with temporary space on your hard disk. When RAM runs low, virtual memory moves data from RAM to a space called a paging file. Moving data to and from the paging file frees up RAM to complete its work.
    The more RAM your computer has, the faster your programs will generally run. If a lack of RAM is slowing your computer, you might be tempted to increase virtual memory to compensate.

    How Virtual Memory Works?
    If a computer has less RAM (say 128 MB or 256 MB) available for the CPU to use, unfortunately, that amount of RAM is not enough to run all of the programs that most users expect to run at once.
    For example, if you load the operating system, an e-mail program, a Web browser, word processor and other programs into RAM simultaneously, 128 megabytes is not enough to hold it all. If there were no such thing as virtual memory, then once you filled up the available RAM your computer would have to say, "Sorry, you can not load any more applications. Please close another application to load a new one." With virtual memory, what the computer can do is look at RAM for areas that have not been used recently and copy them onto the hard disk. This frees up space in RAM to load the new application.
    Because this copying happens automatically, you don't even know it is happening, and it makes your computer feel like it has unlimited RAM space even though it only has 128 megabytes installed. Because hard disk space is so much cheaper than RAM chips, it also has a nice economic benefit.

    The read/write speed of a hard drive is much slower than RAM, and the technology of a hard drive is not geared toward accessing small pieces of data at a time. If your system has to rely too heavily on virtual memory, you will notice a significant performance drop. The key is to have enough RAM to handle everything you tend to work on simultaneously.
    The operating system has to constantly swap information back and forth between RAM and the hard disk. This is called thrashing, and it can make your computer feel incredibly slow. The area of the hard disk that stores the RAM image is called a page file. It holds pages of RAM on the hard disk, and the operating system moves data back and forth between the page file and RAM. On a Windows machine, page files have a .SWP extension.

    thumb_up 8 thumb_down 0 flag 0

    A pointer pointing to a memory location that has been deleted (or freed) is called a dangling pointer. There are three common situations in which a pointer acts as a dangling pointer:

    1.  Deallocation of Memory

         // Deallocating a memory pointed by ptr causes dangling pointer

         #include<stdlib.h>
         #include<stdio.h>
         int main()
        {
              int *ptr = (int *)malloc(sizeof(int));

              // After the below free call, ptr becomes a dangling pointer
              free(ptr);

              // No more dangling pointer
              ptr = NULL;
         }

    2.  Function Call

        // The pointer pointing to a local variable becomes
        // dangling when the local variable is not static

        #include<stdio.h>

           int *fun()
        {
         // x is local variable and goes out of scope
         // after an execution of fun() is over.
         int x = 5;

             return &x;
        }


       // Driver code
       int main()
       {
        int *p = fun();
        fflush(stdin);

        // p points to something which is not
        // valid any more
        printf("%d", *p);
        return 0;
       }

    Output: a garbage value (undefined behaviour)


    The above problem doesn't appear (or p doesn't become dangling) if x is a static variable.

    // The pointer pointing to local variable doesn't
    // become dangling when local variable is static.

    #include<stdio.h>
    int *fun()
    {
        // x now has scope throughout the program
        static int x = 5;

         return &x;
    }

     int main()
    {
        int *p = fun();
        fflush(stdin);

             // Not a dangling pointer as it points
        // to static variable.
        printf("%d",*p);
    }

    Output: 5

    3. Variable goes out of Scope

    void main()
    {
       int *ptr;
       .....
       .....
       {
           int ch;
           ptr = &ch;
       }
       .....
       // Here ptr is dangling pointer
    }

    For a detailed explanation see article

    thumb_up 4 thumb_down 0 flag 0

    Network topology is the arrangement of the various elements (links, nodes, etc.) of a computer network. Essentially, it is the topological structure of a network and may be depicted physically or logically.

    Simply it means the way in which the devices in the network are connected to each other. Various topologies are available.

    BUS Topology

    Bus topology is a network type in which every computer and network device is connected to a single cable. When it has exactly two endpoints, it is called linear bus topology.

    Features of Bus Topology

    1. It transmits data only in one direction.
    2. Every device is connected to a single cable

    Advantages of Bus Topology

    1. It is cost effective.
    2. The cable required is the least compared to other network topologies.
    3. Used in small networks.
    4. It is easy to understand.
    5. Easy to expand by joining two cables together.

    Disadvantages of Bus Topology

    1. If the cable fails, the whole network fails.
    2. If network traffic is heavy or there are many nodes, the performance of the network decreases.
    3. Cable has a limited length.
    4. It is slower than the ring topology.

    RING Topology

    It is called ring topology because it forms a ring as each computer is connected to another computer, with the last one connected to the first. Exactly two neighbours for each device.

    Features of Ring Topology

    1. A number of repeaters are used for Ring topology with large number of nodes, because if someone wants to send some data to the last node in the ring topology with 100 nodes, then the data will have to pass through 99 nodes to reach the 100th node. Hence to prevent data loss repeaters are used in the network.
    2. The transmission is unidirectional, but it can be made bidirectional by having 2 connections between each Network Node, it is called Dual Ring Topology.
    3. In Dual Ring Topology, two ring networks are formed, and data flow is in opposite direction in them. Also, if one ring fails, the second ring can act as a backup, to keep the network up.
    4. Data is transferred in a sequential manner that is bit by bit. Data transmitted, has to pass through each node of the network, till the destination node.

    Advantages of Ring Topology

    1. Transmitting network is not affected by high traffic or by adding more nodes, as only the nodes having tokens can transmit data.
    2. Cheap to install and expand

    Disadvantages of Ring Topology

    1. Troubleshooting is difficult in ring topology.
    2. Adding or deleting the computers disturbs the network activity.
    3. Failure of one computer disturbs the whole network.

    STAR Topology

    In this type of topology all the computers are connected to a single hub through a cable. This hub is the central node and all other nodes are connected to the central node.

    Features of Star Topology

    1. Every node has its own dedicated connection to the hub.
    2. Hub acts as a repeater for data flow.
    3. Can be used with twisted pair, Optical Fibre or coaxial cable.

    Advantages of Star Topology

    1. Fast performance with few nodes and low network traffic.
    2. Hub can be upgraded easily.
    3. Easy to troubleshoot.
    4. Easy to setup and modify.
    5. Only that node is affected which has failed, rest of the nodes can work smoothly.

    Disadvantages of Star Topology

    1. Cost of installation is high.
    2. Expensive to use.
    3. If the hub fails then the whole network is stopped because all the nodes depend on the hub.
    4. Performance is based on the hub, that is, it depends on its capacity.

    MESH Topology

    It is a point-to-point connection to other nodes or devices. All the network nodes are connected to each other. A full mesh has n(n-1)/2 physical channels to link n devices.

    There are two techniques to transmit data over the Mesh topology, they are :

    1. Routing
    2. Flooding

    Routing

    In routing, the nodes have a routing logic, as per the network requirements. Like routing logic to direct the data to reach the destination using the shortest distance. Or, routing logic which has information about the broken links, and it avoids those node etc. We can even have routing logic, to re-configure the failed nodes.

    Flooding

    In flooding, the same data is transmitted to all the network nodes, hence no routing logic is required. The network is robust, and it is very unlikely to lose the data, but it leads to unwanted load on the network.

    Types of Mesh Topology

    1. Partial Mesh Topology : In this topology some of the systems are connected in the same fashion as mesh topology but some devices are only connected to two or three devices.
    2. Full Mesh Topology : Each and every nodes or devices are connected to each other.

    Features of Mesh Topology

    1. Fully connected.
    2. Robust.
    3. Not flexible.

    Advantages of Mesh Topology

    1. Each connection can carry its own data load.
    2. It is robust.
    3. Fault is diagnosed easily.
    4. Provides security and privacy.

    Disadvantages of Mesh Topology

    1. Installation and configuration is difficult.
    2. Cabling cost is more.
    3. Bulk wiring is required.

    TREE Topology

    It has a root node and all other nodes are connected to it forming a hierarchy. It is also called hierarchical topology. It should at least have three levels to the hierarchy.

    Features of Tree Topology

    1. Ideal if workstations are located in groups.
    2. Used in Wide Area Network.

    Advantages of Tree Topology

    1. Extension of bus and star topologies.
    2. Expansion of nodes is possible and easy.
    3. Easily managed and maintained.
    4. Error detection is easily done.

    Disadvantages of Tree Topology

    1. Heavily cabled.
    2. Costly.
    3. If more nodes are added maintenance is difficult.
    4. Central hub fails, network fails.

    thumb_up 1 thumb_down 0 flag 0

    Selection sort makes O(n) swaps which is minimum among all sorting algorithms mentioned above.

    For quick sort, the number of comparisons is O(n log n) in the best case, which occurs when the pivot at every level of recursion is the middle element.

    For merge sort, the worst-case number of comparisons is n*ceil(log2 n) - 2^ceil(log2 n) + 1, where ceil() is the ceiling function. For example, for n = 8 this gives 8*3 - 8 + 1 = 17 comparisons.

    For detailed explanation see Merge sort at Number of Comparisons in Merge-Sort

    For detailed explanation see Quick sort at Quicksort

    thumb_up 4 thumb_down 0 flag 0

    As the name implies, a class is said to be singleton if it limits the number of objects of that class to one. The purpose of a Singleton class is to have a single copy of the object in memory throughout the execution of the program.

    We can't have more than a single object for such classes. Singleton classes are employed extensively in concepts like networking and database connectivity. A common example of a Singleton is a factory object: something which is called upon by many parts of the system to create instances of another class of object. Another example is when there is only one bar code reader (or printer) attached to the machine: it makes sense to have only one instance of the class that interacts with it throughout the application.

    A class can be made a Singleton class by making its constructor private, so that we cannot create an object of it outside the class.

    Example :

    // Java program to demonstrate implementation of Singleton
    // pattern using private constructors.
    import java.io.*;

     class MySingleton
    {
        static MySingleton instance = null;
        public int x = 10;

           // private constructor can't be accessed outside the class
        private MySingleton() {  }

         // Factory method to provide the users with instances
        static public MySingleton getInstance()
        {
            if (instance == null)
                 instance = new MySingleton();

             return instance;
        }
    }

     // Driver Class
    class Main
    {
       public static void main(String args[])
       {
           MySingleton a = MySingleton.getInstance();
           MySingleton b = MySingleton.getInstance();
           a.x = a.x + 10;
           System.out.println("Value of a.x = " + a.x);
           System.out.println("Value of b.x = " + b.x);
       }
    }

    Output :
    Value of a.x = 20
    Value of b.x = 20

    For detailed explanation see the article

    thumb_up 4 thumb_down 0 flag 0

    The Java Virtual Machine (JVM) is an execution environment for Java applications.

    - It interprets compiled Java binary code (called bytecode) to enable a computer's processor to carry out a Java program's instructions.
    - The JVM is a main component of Java architecture, and is part of the JRE (Java Runtime Environment).
    - The JVM is operating system-dependent. In other words, the JVM must translate the bytecode into machine language, and the machine language depends on which operating system is being used, which makes the JVM platform-dependent, or operating system-dependent.
    - The JVM is responsible for allocating memory needed by the Java program.
    - Java was designed to allow application programs to be built that could be run on any platform without having to be rewritten or recompiled by the programmer for each separate platform. The Java virtual machine makes this possible.

    thumb_up 24 thumb_down 0 flag 0

    Indexing is a data structure technique used to find data more quickly and efficiently in a table.

    An index is a small table having only two columns. The first column contains a copy of the primary or candidate key of a table and the second column contains a set of pointers holding the address of the disk block where that particular key value can be found. The users cannot see the indexes, they are just used to speed up searches/queries.

    Indexes are of 3 types:

    1. Primary indexes : In primary index, there is a one-to-one relationship between the entries in the index table and the records in the main table. Primary index is defined on an ordered data file. The data file is ordered on a key field. The key field is generally the primary key of the relation.

    2. Secondary indexes : A secondary index may be generated from a field which is a candidate key and has a unique value in every record, or from a non-key field with duplicate values. While creating the index, generally the index table is kept in primary memory (RAM) and the main table, because of its size, is kept in secondary memory (hard disk).

    3. Clustering indexes : Clustering index is defined on an ordered data file. The data file is ordered on a non-key field e.g. if we are asked to create an index on a non-unique key, such as Dept-id. There could be several employees in each department. Here we use a clustering index, where all employees belonging to the same Dept-id are considered to be within a single cluster, and the index pointers point to the cluster as a whole.

    thumb_up 3 thumb_down 0 flag 0

    A Binary Search Tree is a tree in which

    1. data of all the nodes in the left sub-tree of the root node is less than or equal to the data of the root.

    2. data of all the nodes in the right sub-tree of the root node is greater than or equal to the data of the root.

    BSTs are used in many search applications where data is constantly entering/leaving, such as the map and set objects in many languages' libraries, e.g. CouchDB.

    thumb_up 10 thumb_down 1 flag 0

    Linked lists are of mainly 3 types :

    1. Singly linked list

    2. Doubly linked list

    3. Circular linked list

    Loop in a linked list can be detected using Floyd's Cycle detection algorithm. For detailed explanation see the article
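
    A minimal C++ sketch of Floyd's cycle detection; the three-node list in main is just an illustrative example:

    #include <iostream>

    struct Node {
        int data;
        Node *next;
        Node(int d) : data(d), next(nullptr) {}
    };

    // Floyd's cycle detection: slow moves one step, fast moves two;
    // if they ever meet, the list contains a loop.
    bool hasLoop(Node *head) {
        Node *slow = head, *fast = head;
        while (fast != nullptr && fast->next != nullptr) {
            slow = slow->next;
            fast = fast->next->next;
            if (slow == fast) return true;
        }
        return false;
    }

    int main() {
        Node a(1), b(2), c(3);
        a.next = &b;  b.next = &c;  c.next = &a;   // creates a loop: 1 -> 2 -> 3 -> 1
        std::cout << (hasLoop(&a) ? "Loop detected" : "No loop") << '\n';
        return 0;
    }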

    Source: https://practice.geeksforgeeks.org/answers/Amit+Khandelwal+1/
