Interview Questions


Frequently Asked Questions

You can test software in many different ways. Some types of testing are conducted by software developers and some by specialized quality-assurance staff. Here are a few kinds of software testing, along with a brief description of each.

  • Unit Testing — A programmatic test that exercises the internal working of a unit of code, such as a method or a function.
  • Integration Testing — Ensures that multiple components of a system work as expected when they are combined to produce a result.
  • Regression Testing — Ensures that existing features that used to work are not broken by new code changes.
  • System Testing — Complete end-to-end testing of the whole software to make sure the entire system works as expected.
  • Smoke Testing — A quick test performed to ensure that the software works at the most basic level and doesn’t crash when started. The name comes from hardware testing, where you plug in the device and see if smoke comes out.
  • Performance Testing — Ensures that the software performs according to user expectations by checking response time and throughput under a specific load and environment.
  • User-Acceptance Testing — Ensures the software meets the requirements of the clients or users. This is typically the last step before the software goes live, i.e. to production.
  • Stress Testing — Ensures that the performance of the software doesn’t degrade as the load increases. In stress testing, the tester subjects the software to heavy loads, such as a high number of requests or stringent memory conditions, to verify that it still works well.
  • Usability Testing — Measures how usable the software is. This is typically performed with a sample set of end users, who use the software and provide feedback on how easy or complicated it is to use.
  • Security Testing — Now more important than ever. Security testing tries to break a software’s security checks to gain access to confidential data. It is crucial for web-based applications or any application that involves money.

Software testing is governed by seven principles:

  • Absence of errors fallacy: Even if the software is 99% bug-free, it is unusable if it does not conform to the user’s requirements. Being nearly bug-free is not enough: the software must also meet all customer requirements.
  • Testing shows the presence of errors: Testing can verify the presence of defects in software, but it cannot guarantee that the software is defect-free. Testing can minimize the number of defects, but it can’t remove them all.
  • Exhaustive testing is not possible: The software cannot be tested exhaustively, which means all possible test cases cannot be covered. Testing can only be done with a select few test cases, and it’s assumed that the software will produce the right output in all cases. Taking the software through every test case will cost more, take more effort, etc., which makes it impractical.
  • Defect clustering: The majority of defects are typically found in a small number of modules in a project. According to the Pareto Principle, 80% of software defects arise from 20% of modules.
  • Pesticide Paradox: It is impossible to find new bugs by re-running the same test cases over and over again. Thus, updating or adding new test cases is necessary in order to find new bugs.
  • Early testing: Early testing is crucial to finding the defect in the software. In the early stages of SDLC, defects will be detected more easily and at a lower cost. Software testing should start at the initial phase of software development, which is the requirement analysis phase.
  • Testing is context-dependent: The testing approach varies depending on the software development context. Software needs to be tested differently depending on its type. For instance, an ed-tech site is tested differently than an Android app.

The dictionary definition of regression is the act of going back to a previous place or state. In software, regression implies that a feature that used to work suddenly stops working after a developer adds new code or functionality to the software.

Regression problems are pervasive in the software industry, as new features are getting added all the time. Developers don’t build these features in isolation, separate from the existing code. Instead, the new code interacts with the legacy code and modifies it in various ways, introducing side effects, whether intended or not.

As a result, there is always a chance that introducing new changes may negatively impact a working feature. It’s important to keep in mind that even a small change has the potential to cause regression.

Regression testing helps ensure that the new code or modifications to the existing code don’t break the present behaviour. It allows the tester to verify that the new code plays well with the legacy code.

Imagine a tourist in a foreign city. There are two ways in which they can explore the city.

  • Follow a map, itinerary, or a list of places they should visit
  • Explore randomly, following the streets as they lead them to new places

With the first approach, the tourist follows a predetermined plan and executes it. Though they may visit famous spots, they might miss out on hidden, more exciting places in the city. With the second approach, the tourist wanders around the city and might encounter strange and exotic places that the itinerary would have missed.

Both approaches have their pros and cons.

A tester is similar to a tourist when they are testing software. They can follow a strict set of test cases and test the software according to them, with the provided inputs and outputs, or they can explore the software.

When a tester doesn’t use the test scripts or a predefined test plan and randomly tests the software, it is called exploratory testing. As the name suggests, the tester is exploring the software as an end-user would. It’s a form of black-box testing.

In exploratory testing, the tester interacts with the software in whatever manner they want and follows the software’s instructions to navigate various paths and functionality. They don’t have a strict plan at hand.

Exploratory testing primarily focuses on behavioural testing. It is effective for getting familiar with new software features. It also provides a high-level overview of the system that helps evaluate and quickly learn the software.

Though it seems random, exploratory testing can be powerful in an experienced and skilled tester’s hands. As it’s performed without any preconceived notions of what software should and shouldn’t do, it allows greater flexibility for the tester to discover hidden paths and problems along those paths.

End-to-end testing is the process of testing a software system from start to finish. The tester tests the software just like an end-user would. For example, to test a desktop application, the tester would install it as the user would, open it, use it as intended, and verify the behavior. The same goes for a web application.

There is an important difference between end-to-end testing vs. other forms of testing that are more isolated, such as unit testing. In end-to-end testing, the software is tested along with all its dependencies and integrations, such as databases, networks, file systems, and other external services.

Unit testing is the process of testing a single unit of code in an isolated manner. The unit of code can be a method, a class, or a module. Unit testing aims to focus on the smallest building blocks of code to get confidence to combine them later to produce fully functioning software.

A unit test invokes the code and verifies the result with the expected result. If the expected and actual outcomes match, then the unit test passes. Otherwise, it fails.

A good unit test has the following characteristics:

  1. It should test a single piece of functionality.
  2. It is fully automated and repeatable.
  3. It should run quickly and provide immediate feedback.
  4. It should be isolated and shouldn’t interact with external dependencies such as the network, database, or file system unless needed. You can use mocking to simulate external dependencies and isolate the code under test.

API stands for Application Programming Interface. It is a means of communication between two software components. An API abstracts the internal workings and complexity of a software program and allows the user of that API to solely focus on the inputs and outputs required to use it.

When building software, developers rarely write everything from scratch; they make use of third-party libraries. An API allows two software components to talk to each other by providing an interface that both can understand.

Another use of an API is to provide data required by an application. Let’s say you are building a weather application that displays the temperature. Instead of building the technology to collect the temperature yourself, you’d access the API provided by the meteorological institute.
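
As a sketch of this idea — the endpoint URL and the JSON shape (`tempCelsius`) are assumptions for illustration, not a real institute’s API:

```javascript
// Hypothetical weather-API client; URL and response shape are assumptions.
const BASE_URL = 'https://api.example-met-institute.org/v1';

// Pure helper: extract the temperature from the assumed response shape.
function parseTemperature(json) {
  if (!json || typeof json.tempCelsius !== 'number') {
    throw new Error('unexpected API response');
  }
  return json.tempCelsius;
}

// The app only deals with the API's inputs and outputs, not with how the
// institute collects the data. `fetchFn` is injectable so it can be stubbed.
async function fetchTemperature(city, fetchFn = fetch) {
  const res = await fetchFn(`${BASE_URL}/weather?city=${encodeURIComponent(city)}`);
  return parseTemperature(await res.json());
}
```

The application never needs to know how the temperature was measured — that complexity stays behind the API.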

A test environment consists of a server/computer on which a tester runs their tests. It is different from a development machine and tries to represent the actual hardware on which the software will run once it is in production.

Whenever a new build of the software is released, the tester updates the test environment with the latest build and runs the regression tests suite. Once it passes, the tester moves on to testing new functionality.

When software is being tested, code coverage measures how much of the program’s source code is exercised by the test plan. Code coverage testing runs in parallel with actual product testing. Using a code coverage tool, you can monitor the execution of statements in your source code. A complete report of the unexecuted statements, along with the coverage percentage, is provided at the end of testing.

Among the different types of test coverage techniques are:

  • Statement/Block Coverage: Measures how many statements in the source code have been successfully executed and tested.
  • Decision/Branch Coverage: This metric measures how many decision control structures were successfully executed and tested.
  • Path Coverage: This ensures that the tests are conducted on every possible route through a section of the code.
  • Function coverage: It measures how many functions in the source code have been executed and tested at least once.
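
To see how statement and branch coverage differ, consider this small sketch:

```javascript
// One test case, classify(-1), executes every statement below
// (100% statement coverage) but never takes the "false" side of the if,
// so branch coverage is only 50%. Adding classify(1) covers both branches.
function classify(n) {
  let label = 'non-negative';
  if (n < 0) {
    label = 'negative';
  }
  return label;
}
```

This is why branch coverage is a stricter metric than statement coverage.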

  • Black-box testing: The system is tested only in terms of its external behaviour; how the software works internally is not considered, which is also the main limitation of this approach. It is used in acceptance testing and system testing.
  • White-box testing: A method of testing a program that takes its internal workings into account as part of the review. It is used in integration testing and unit testing.
  • Grey-box testing: A combination of the black-box and white-box techniques. Using this technique, you can test a software product or application with a partial understanding of its internal structure.

Automation testing is extremely beneficial when using the agile model in software testing, as it helps achieve maximum test coverage within the limited time of a sprint.

  • Test Case: Test Cases are a series of actions executed during software development to verify a particular feature or function. A test case consists of test steps, test data, preconditions, and postconditions designed to verify a specific requirement.
  • Test Scenario: Usually, a test scenario consists of a set of test cases covering the end-to-end functionality of a software application. A test scenario provides a high-level overview of what needs to be tested.
  • Test Scripts: When it comes to software testing, a test script refers to the set of instructions that will be followed in order to verify that the system under test performs as expected. The document outlines each step to be taken and the expected results.

Bugs and errors differ in the following ways:

  • A bug is a defect that occurs when the software or an application does not work as intended; it arises from a coding mistake that causes the program to malfunction. An error is a mistake in the code itself, often because the developer misunderstood the requirement or the requirement was not defined correctly.
  • Bugs are submitted by testers. Errors are raised by test engineers and developers.
  • Logic bugs, resource bugs, and algorithmic bugs are types of bugs. Syntax errors, error-handling errors, user-interface errors, flow-control errors, calculation errors, and testing errors are types of errors.
  • A bug is detected before the software is deployed to production. An error typically surfaces when the code fails to compile.

A test plan is a dynamic document monitored and controlled by the testing manager. The success of a testing project depends on a well-written test plan that describes the scope and activities of software testing. It serves as a blueprint outlining the what, when, and how of the entire test process.

A test plan must include the following details:

  • Test Strategy
  • Test Objective
  • Test Scope
  • Reason for Testing
  • Exit/Suspension Criteria
  • Resource Planning
  • Test Deliverables

A test report is a document that summarizes testing objectives, activities, and results. It reflects testing outcomes, allows a quick estimate of the results, and helps decide whether the product is ready for release. It also indicates the current status of the project and the quality of the product. A test report must include the following details:

  • Test Objective
  • Project Information
  • Defect
  • Test Summary

Test deliverables, also known as test artifacts, are a list of all the documents, tools, and other components that are given to the stakeholders of a software project during the SDLC. Test deliverables are maintained and developed in support of the test. Each phase of the SDLC has different deliverables, as given below:

Before Testing Phase

  • Test plan document
  • Test case documents
  • Test design specifications

During Testing Phase

  • Test scripts
  • Simulators
  • Test data
  • Test traceability matrix
  • Error logs and execution logs

After Testing Phase

  • Test results/reports
  • Defect report
  • Installation/test procedure guidelines
  • Release notes

Different categories of debugging include:

  • Brute force debugging
  • Backtracking
  • Cause elimination
  • Program slicing
  • Fault tree analysis

MVC and MVP both are architectural patterns that are used to develop applications.


MVC stands for Model View Controller. It is an architectural pattern widely used to develop web applications, including Java enterprise applications. It splits an application into three logical components, i.e. Model, View, and Controller, separating the business-specific logic (the Model component) from the presentation layer (the View component).

The model components contain data and logic related to it. The View component is responsible for displaying model objects inside the user interface. The Controller receives the input and calls model objects based on handler mapping. It also passes model objects to views in order to display output inside the view layer.


MVP stands for Model View Presenter. It is derived from the MVC architectural pattern, adding an extra layer of indirection that splits the Controller into a Presenter. The Presenter takes over the role of the Controller and sits at the same level as the View; it contains the UI business logic for the View. Invocations from the View are sent directly to the Presenter, which mediates the actions (events) between the View and the Model. The Presenter does not manipulate the View directly; it communicates with it through an interface.

The major difference between the MVC and MVP architectural patterns is that in MVC the Controller does not pass data from the Model to the View; it only notifies the View to get the data from the Model itself.

In MVP, by contrast, the View and Model are not connected directly: the Presenter itself receives the data from the Model and sends it to the View to display.

Another difference is that MVC is often used in web frameworks, while MVP is used in app development.
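
A minimal, framework-free sketch of the MVC roles (all names invented for illustration):

```javascript
// Model: holds the data and the logic that operates on it.
const model = {
  items: [],
  add(item) { this.items.push(item); },
};

// View: renders data for display; in MVC it reads from the model itself.
const view = {
  render(m) { return `Items: ${m.items.join(', ')}`; },
};

// Controller: receives input and updates the model, then notifies the view
// to render from the model (it does not hand the data over itself).
const controller = {
  addItem(text) {
    model.add(text);
    return view.render(model);
  },
};
```

In an MVP variant, a presenter would instead fetch the items from the model and push them into a passive view through an interface.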

Pair programming, a core practice of extreme programming, is an agile software development technique in which two developers work together at the same machine. The developer who writes the code is called the driver, and the developer who reviews it (checks the code, proofreads, and spell-checks) is called the navigator. The technique is more efficient, and coding mistakes are reduced to a minimum. The disadvantage of pair programming is that it increases the cost.


What is CORS in MVC and how does it work?

CORS stands for Cross-Origin Resource Sharing. It is a W3C standard and an HTTP-header-based mechanism that lets a server indicate origins (domain, scheme, or port) other than its own from which a browser should permit loading resources. In other words, it enables one website to access the resources of another website using JavaScript.

It supports secure cross-origin requests and data transfers between browsers and servers. Modern browsers use CORS in APIs such as XMLHttpRequest and fetch. It is flexible and safer in comparison to JSONP (JSON with Padding), and it provides better web service integration.

When enabling CORS in MVC, the same CORS service can be used, but not the same CORS middleware. CORS can be enabled for a particular action, for a particular controller, or globally for all controllers.

A pre-flight request is sent by the browser to the server hosting the cross-origin resource to check whether the server will permit the actual request.

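
As a sketch, a plain Node.js handler answering a pre-flight request might set headers like these (the allowed origin is an assumption for illustration):

```javascript
// Sketch of CORS response headers on a plain Node.js HTTP handler.
const ALLOWED_ORIGIN = 'https://app.example.com';

function handleRequest(req, res) {
  res.setHeader('Access-Control-Allow-Origin', ALLOWED_ORIGIN);
  if (req.method === 'OPTIONS') {
    // Pre-flight request: tell the browser which methods and headers
    // the actual cross-origin request may use.
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
    res.writeHead(204);
    return res.end();
  }
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ ok: true }));
}
```

The browser only sends the actual request if the pre-flight response permits it.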

We can use the following ways to optimize the scalability and efficiency of a website:

  • Reducing DNS lookup
  • Avoiding URL redirects
  • Avoiding duplicate codes
  • Avoiding unnecessary images
  • Leveraging browser caching
  • Deferring parsing of JavaScript
  • Avoiding inline JavaScript and CSS
  • Using srcset for responsive images
  • Placing all assets on a cookie-free domain, preferably using a CDN

The Get and Post requests compare as follows:

  1. Purpose: the Get request is designed for retrieving data from the server; the Post request is designed for sending data to the server.
  2. Mechanism: a Get request is sent via the URL; a Post request is sent via the HTTP request body.
  3. Parameter passing: Get parameters are transmitted as a query string appended to the URL; Post parameters are transmitted in the body of the request.
  4. Default: Get is the default HTTP method and is used automatically; Post must be specified explicitly.
  5. Capacity: we can send only a limited amount of data with a Get request; we can send a large amount of data with a Post request.
  6. Data type: Get always submits data as text; Post can send any type of data.
  7. Safety: Get is considered safe and idempotent because it should not change server state; Post is neither safe nor idempotent.
  8. Visibility of data: Get data is visible to the user because it is put in the URL; Post data is not visible because it is put in the message body.
  9. Bookmarking and caching: a Get request can be bookmarked and cached; a Post request cannot.
  10. Efficiency: Get is more efficient than Post; Post is less efficient.
  11. Example: search is the best example of a Get request; login is the best example of a Post request.
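
The parameter-passing difference can be sketched in JavaScript (the base URL is an assumption):

```javascript
// GET: parameters travel in the URL's query string.
function buildGetUrl(base, params) {
  return `${base}?${new URLSearchParams(params)}`;
}

// POST: parameters travel in the request body, not the URL.
function buildPostRequest(params) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(params),
  };
}
```

A search query would use the first form; a login form, with credentials hidden in the body, would use the second.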

A program may have the property of referential transparency if any two expressions in the program that have the same value can be substituted for one another anywhere in the program without changing the result of the program. It is used in functional programming. For example, consider the following code snippet:

  1. count1 = (fun(x) + y) * (fun(x) - z);
  2. temp = fun(x);
  3. count2 = (temp + y) * (temp - z);

The variables count1 and count2 will be equal only if fun(x) always returns the same value and has no side effects, i.e. if fun is referentially transparent. If count1 is not equal to count2, referential transparency is violated.
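
A runnable version of this idea, with an impure counter-example, might look like:

```javascript
// Referentially transparent: same input, same value, no side effects,
// so fun(x) can be replaced by its value anywhere without changing the result.
const fun = (x) => x * x;

const x = 3, y = 10, z = 2;
const count1 = (fun(x) + y) * (fun(x) - z);
const temp = fun(x);
const count2 = (temp + y) * (temp - z);
// count1 and count2 are guaranteed equal.

// Not referentially transparent: the result depends on hidden mutable state,
// so two calls with the same argument can return different values.
let calls = 0;
const impure = (n) => n + calls++;
```

Substituting `impure(1)` with a previously computed value would change the program’s behaviour, which is exactly what referential transparency forbids.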

The term REST stands for Representational State Transfer. It is an architectural style that is used to create Web Services. It uses HTTP requests to access and use the data. We can create, update, read, and delete data.

An API (Application Program Interface) for a website is the code that allows two software programs to communicate with each other. It allows us to write requesting services from an operating system or other application.

A promise is an object that can be returned synchronously from an asynchronous function. It may be in the following three states:

  • Fulfilled: the asynchronous operation completed successfully, and the promise’s onFulfilled() handlers are called with the resulting value.
  • Rejected: the operation failed, and the promise’s onRejected() handlers are called with the reason.
  • Pending: the promise has not yet been fulfilled or rejected.

A promise will be settled if and only if it is not pending.
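
These three states can be sketched as follows:

```javascript
// A promise starts in the pending state and settles exactly once,
// either to fulfilled (with a value) or rejected (with a reason).
const fulfilled = Promise.resolve(42);              // already fulfilled with 42
const rejected = Promise.reject(new Error('boom')); // already rejected
const pending = new Promise(() => {});              // never settles

fulfilled.then((value) => console.log('fulfilled with', value));
rejected.catch((err) => console.log('rejected with', err.message));
```

Handlers attached with then() and catch() run only after the promise settles; the pending promise above triggers neither.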

There are the following ways to optimize the load time of a web application:

  • Optimize image size and format
  • Compress and optimize the content
  • Avoid redirects
  • Cache the web page
  • Minimize the HTTP requests
  • Optimize dependencies
  • Put stylesheet reference at the top
  • Place script reference at the bottom
  • Put JavaScript and CSS externally

CI/CD is a best practice for developing applications in which code changes are made frequently and rapidly. It is sometimes referred to as the CI/CD pipeline. It is widely used in DevOps and in agile methodology.

Continuous integration is a coding philosophy and deployment practice in which developers integrate their code into a shared repository several times a day. Since modern applications require developing code across different platforms and tools, the goal of continuous integration is to establish an automated mechanism that builds, tests, and packages the application.

Continuous delivery starts where CI ends. It automatically delivers the application to the selected infrastructure. CD ensures the automated delivery of code if any changes are made in the code.

In software design, we use the following architectural design patterns:

  • Model View Controller
  • Master-Slave Pattern
  • Layered Pattern
  • Model View Presenter
  • Monolithic Architecture
  • Event-Driven Architecture Pattern

Long polling is an effective method for creating a stable server connection without using the WebSocket or Server-Sent Events protocols. It operates on top of the conventional client-server request/response model.

In this method, the client sends a request and the server holds the connection open until it has new information to return. As soon as the server responds with data, the client immediately submits another request, so the server almost always has a pending request it can answer when new data becomes available. The cycle ends when the client application stops or the server terminates the requests.
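
A client-side long-polling loop might be sketched like this (the transport is injected so it can be stubbed; the /poll endpoint and the 200-means-data convention are assumptions):

```javascript
// Long-polling client loop sketch.
async function longPoll(url, onMessage, fetchFn = fetch, maxRounds = Infinity) {
  for (let round = 0; round < maxRounds; round++) {
    const res = await fetchFn(url);   // the server holds this request open
    if (res.status === 200) {         // ...until it has data (or times out)
      onMessage(await res.json());
    }
    // Loop: immediately issue the next request so the server always has
    // a pending request to answer when new data arrives.
  }
}
```

The maxRounds parameter exists only to make the sketch testable; a real client would loop until the application stops.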

In web design, semantic HTML (or semantic markup) is the idea of using HTML elements to indicate what they actually are.

Semantic HTML conveys meaning to the web page rather than just presentation. For example, the <p> tag indicates that a paragraph is enclosed in it. It is both semantic and presentational: users know what paragraphs are, and browsers know how to display them. On the other hand, tags such as <b> and <i> are not semantic; they only specify how text should look and add no meaning to the markup.

Examples of semantic HTML tags are the header tags <h1> to <h6>, <abbr>, <cite>, <code>, <blockquote>, <em>, etc. There are other semantic HTML tags as well that are used to build a standards-compliant website.

We should use the semantic HTML for the following reasons:

  • It provides additional information about the document in which it is used. It also aids in communication.
  • Semantic tags make it clear to the browser what the meaning of a page and its content is.
  • It provides information about the contents of those tags that goes beyond just how they look on a page.
  • It gives us many more hooks for styling the content of the page.
  • The clarity of semantic tags is also communicated to search engines, ensuring that the right pages are delivered for the right queries.

Null: null is a value that a programmer deliberately assigns to a variable to represent “no value”. Used with the typeof operator, it gives the result "object". JavaScript never assigns null to a variable automatically.

Undefined: undefined means the variable has been declared but no value has been assigned to it, or that the variable does not exist at all. Used with the typeof operator, it gives the result "undefined". It is not valid in JSON.

Note: null and undefined are both primitive values.

Let’s understand it through an example.

  1. var var1
  2. var var2 = null // assigning null value to the variable var2
  3. console.log(`var1 : ${var1}, type : ${typeof var1}`)
  4. console.log(`var2 : ${var2}, type : ${typeof var2}`)

When we execute the above code, it generates the following output:

  1. var1 : undefined, type : undefined
  2. var2 : null, type : object

From the above output, we can observe that the value of var1 is undefined and its type is also undefined, because we never assigned a value to it. The value null was explicitly assigned to var2, and its type prints as object. Since both represent an “empty” value, JavaScript treats null and undefined as loosely equal: null == undefined is true, while null === undefined is false.
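
The loose-vs-strict equality behaviour can be verified directly:

```javascript
// null and undefined are loosely equal but not strictly equal.
console.log(null == undefined);   // true
console.log(null === undefined);  // false
console.log(typeof null);         // "object" (a long-standing JavaScript quirk)
console.log(typeof undefined);    // "undefined"
```

This is why comparisons against null should normally use === to avoid accidentally matching undefined as well.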

Both, REST and GraphQL, are API design architectures that can be used to develop web services, especially for data-driven applications.

GraphQL is an API design architecture with a different, much more flexible approach; REST is a robust methodology and API design architecture used to implement web services. They compare as follows:

  • GraphQL follows a client-driven architecture; REST follows a server-driven architecture.
  • GraphQL does not deal with dedicated resources; REST deals with dedicated resources.
  • GraphQL has a single endpoint that takes dynamic parameters; REST has multiple endpoints.
  • GraphQL provides stateless servers and structured access to resources; REST provides stateless servers and flexible, controlled access to resources.
  • GraphQL is elastic in nature; REST is rigid in nature.
  • GraphQL responses use only the JSON format; REST also supports XML, HTML, YAML, and other formats.
  • In GraphQL, the client defines the response data it needs via a query language; in REST, data is represented as resources over HTTP through URIs.
  • GraphQL allows synchronous and asynchronous communication over multiple protocols such as HTTP, MQTT, and AMQP; REST provides synchronous communication through HTTP only.
  • GraphQL’s design is based on message exchange; REST’s design is based on HTTP (statuses, methods, and URIs).
  • GraphQL provides high consistency across all platforms; with REST it is difficult to achieve high consistency across platforms.
  • Development speed with GraphQL is fast; with REST it is slower.

In Java, a connection leak occurs when the developer forgets to close a JDBC connection. The most common type of connection leak in Java development occurs when using a connection pool (such as DBCP). It can be fixed by always closing the connection, with special attention to the error-handling code.
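
The same leak pattern exists in any pooled client. A sketch of the fix (with a hypothetical pool API, standing in for DBCP-style pools) is to release the connection in a finally block:

```javascript
// Avoiding a connection leak: release the connection in `finally`, so it is
// returned to the pool even when the work throws. `pool` is a hypothetical
// pooled client with acquire()/release().
async function withConnection(pool, work) {
  const conn = await pool.acquire();
  try {
    return await work(conn);
  } finally {
    await pool.release(conn);  // runs on success and on error alike
  }
}
```

This mirrors Java’s try-with-resources: the release path cannot be skipped by an exception in the query code.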

A session is a conversational state between client and server, and it can consist of multiple requests and responses between them. Since both HTTP and web servers are stateless, the only way to maintain a session is to pass unique information about it (a session id) between server and client in every request and response. We can use the following methods to maintain a session:

  • User Authentication
  • HTML Hidden Field
  • Cookies
  • URL Rewriting
  • Session Management API

ServletContext and ServletConfig differ as follows:

  • ServletContext represents the whole web application running on a particular JVM and is common to all servlets; a ServletConfig object represents a single servlet.
  • ServletContext is like a global parameter associated with the whole application; ServletConfig is like a local parameter associated with a particular servlet.
  • ServletContext has application-wide scope and is defined outside the servlet tag in the web.xml file; ServletConfig is a name-value pair defined inside the servlet section of web.xml, so it has servlet-wide scope.
  • The getServletContext() method is used to get the context object; the getServletConfig() method is used to get the config object.
  • Application-wide information, such as the MIME type of a file or session-related information, is stored using the ServletContext object; information specific to a single servlet, such as a particular user’s shopping cart, can use the ServletConfig.

RequestDispatcher is an interface used to forward a request to another resource, which can be an HTML page, a JSP, or another servlet in the same application. We can also use it to include the content of another resource in the response. The interface contains two methods: forward() and include().

A Full Stack Web Developer is a person who is familiar with developing both client and server software. In addition to mastering CSS and HTML, they also know how to program browsers, databases, and servers.

To fully comprehend the role of a Full Stack developer, you must understand the two web development components: the front end and the back end.

The front end comprises a visible part of the application in which the user interacts, while the back end includes business logic.

Some of the popular tools used by full-stack developers to make development more accessible and efficient are:

  • Backbone
  • Visual Studio Code
  • WebStorm
  • Slack
  • Electron
  • TypeScript
  • CodePen
  • GitHub

A Full Stack developer should be familiar with:

  • Basic languages – Must be proficient in basic languages like HTML, CSS, and SQL.
  • Front-end frameworks and languages – Bootstrap, AngularJS, VueJS, ReactJS, JavaScript, TypeScript
  • Back-end frameworks and languages – Express, Django, NodeJS, Ruby on Rails, Python, Ruby, PHP
  • Databases – MySQL, SQLite, Postgres, MongoDB, Cassandra, Apache storm, Sphinx
  • Additional skills recommended – Git, Machine Learning, SSH, Linux Command, Data Structures, Character encoding.

As the name suggests, Pair Programming is a practice in which two programmers share a single workstation. One programmer, called the “driver”, writes the code at the keyboard. The other, the “navigator”, reviews each line of code as it is written, checking and proofreading it. The programmers swap roles every few minutes.

Cross-origin resource sharing (CORS) is a mechanism that uses additional HTTP headers to tell browsers whether a web application running at one origin may access resources from a different origin. A web script triggers CORS when it requests a resource whose origin (protocol, domain, or port) differs from its own.

Inversion of Control (IoC) is a broad term used by software developers for defining a pattern that is used for decoupling components and layers in the system. It is mostly used in the context of object-oriented programming. Control of objects or portions of a program is transferred to a framework or container with the help of Inversion of Control. It can be achieved using various mechanisms such as service locator pattern, strategy design pattern, factory pattern, and dependency injection.

Dependency Injection is a design pattern by which IoC is implemented. Injecting objects, or connecting objects with other objects, is done by a container instead of by the objects themselves. It involves three types of classes.

  • Client class: It depends on the service class.
  • Service class: It provides service to the client class.
  • Injector class: It injects service class objects into the client class.
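A minimal Python sketch of these three roles (all class names here are hypothetical, chosen only to illustrate the pattern):

```python
class EmailService:               # Service class: provides functionality
    def send(self, msg):
        return f"sent: {msg}"

class Notifier:                   # Client class: depends on the service
    def __init__(self, service):
        # The dependency is injected rather than constructed here
        self.service = service

    def notify(self, msg):
        return self.service.send(msg)

class Injector:                   # Injector class: wires service into client
    @staticmethod
    def build_notifier():
        return Notifier(EmailService())

notifier = Injector.build_notifier()
print(notifier.notify("hello"))   # → sent: hello
```

Because Notifier never constructs its own service, a test could inject a stub service without touching the client code.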

Continuous Integration (CI) is a practice where developers integrate code into a shared repository regularly to detect problems early. The CI process involves automated tools that verify the new code’s correctness before integration. Automated builds and tests verify every check-in.

The main purpose of multithreading is to provide multiple threads of execution concurrently for maximum utilization of the CPU. It allows multiple threads to exist within the context of a process such that they execute individually but share their process resources.
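A small Python illustration of the point above: threads share their process's memory, so a lock must guard updates to shared state:

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:      # threads share process memory, so guard the update
            counter += 1

# Four threads execute individually but mutate the same shared counter.
threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # → 4000
```

Without the lock, interleaved read-modify-write steps could lose increments.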

This is typically a difficult question to answer, but a good developer will be able to go through it with ease. The core difference is that GraphQL doesn’t deal with dedicated resources: the description of a particular resource is not coupled to the way you retrieve it. Everything is represented as a connected graph that can be queried to fit the application’s needs.

There are quite a lot of possible ways to optimize your website for the best performance:

  • Minimize HTTP requests.
  • Utilize CDNs and remove unused files/scripts.
  • Optimize files and compress images.
  • Browser caching.
  • Apply CSS3 and HTML5.
  • Minify JavaScript & Style Sheets.
  • Optimize caches.

The purpose of the Observer pattern is to define a one-to-many dependency between objects, as when an object changes the state, then all its dependents are notified and updated automatically. The object that watches on the state of another object is called the observer, and the object that is being watched is called the subject.
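A minimal sketch of the Observer pattern in Python (class names are illustrative):

```python
class Subject:
    """The object being watched; notifies dependents on state change."""
    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        for obs in self._observers:   # one-to-many: notify every dependent
            obs.update(state)

class Observer:
    """Watches the subject and records every state it is told about."""
    def __init__(self):
        self.seen = []

    def update(self, state):
        self.seen.append(state)

subject = Subject()
obs = Observer()
subject.attach(obs)
subject.set_state(42)
print(obs.seen)  # → [42]
```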

A Full-Stack engineer is someone with a senior-level role with the experience of a Full-Stack developer, but with project management experience in system administration (configuring and managing computer networks and systems).

Polling is a method by which a client asks the server for new data frequently. Polling can be done in two ways: Long polling and Short Polling.

  • Long polling is a pattern in which the server holds the client’s request open and pushes data to the client as soon as it is available, with no extra delay.
  • Short polling makes AJAX-based calls at fixed intervals.

GET and POST compare as follows:

  • GET is used to request data from a specified resource; POST is used to send data to a server to create/update a resource.
  • GET requests can be bookmarked; POST requests cannot.
  • GET requests can be cached; POST requests are not cached.
  • GET parameters remain in the browser history; POST parameters are not saved in the browser history.
  • GET data is visible to everyone in the URL; POST data is not displayed in the URL.
  • GET allows only ASCII characters; POST also allows binary data.

If the data within the API is publicly accessible, then it’s not possible to prevent data scraping completely. However, there is an effective solution that will deter most people/bots: rate-limiting (throttling).

Throttling limits a specific device to a defined number of requests within a defined time window. Upon exceeding that number, a 429 Too Many Requests HTTP error should be returned.
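A sketch of the throttling idea as a fixed-window counter (the class, limits, and client identifiers below are hypothetical, not tied to any specific framework):

```python
import time

class RateLimiter:
    """Fixed-window throttle: at most `limit` requests per `window` seconds."""
    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        self.counts = {}  # client_id -> (window_start, count)

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        start, count = self.counts.get(client_id, (now, 0))
        if now - start >= self.window:   # window expired: start a new one
            start, count = now, 0
        if count >= self.limit:
            return False                 # caller should answer 429
        self.counts[client_id] = (start, count + 1)
        return True

rl = RateLimiter(limit=3, window=60)
print([rl.allow("bot", now=0) for _ in range(4)])  # → [True, True, True, False]
```

Production systems usually prefer a sliding window or token bucket, but the fixed window shows the core mechanism.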

Other possible solutions to prevent a bot from scraping are:

  • Blocking requests based on the user agent string
  • Generating temporary “session” access tokens for visitors at the front end

REST stands for representational state transfer. A RESTful API (also known as a REST API) follows an architectural style for an application programming interface (API or web API) that uses HTTP requests to obtain and manage data. Those requests use POST, GET, PUT, and DELETE, which correspond to creating, reading, updating, and deleting resources.

A callback in JavaScript is a function passed as an argument into another function, which is then invoked inside the outer function to complete some kind of action or routine. JavaScript callback functions can be used synchronously and asynchronously. The Node.js APIs are written in such a way that they all support callbacks.

Data Attributes are used to store custom data private to the application or page. They allow us to store extra data on standard, semantic HTML elements. The stored data can be used in the page’s JavaScript to create a more engaging user experience.

A data attribute consists of two parts:

  • The attribute name must be at least one character long after the prefix “data-” and must not contain uppercase letters.
  • The attribute value can be any string.

Some of the key skills required for a data analyst include:

  • Knowledge of reporting packages (Business Objects), coding languages (e.g., XML, JavaScript, ETL), and databases (SQL, SQLite, etc.) is a must.
  • Ability to analyze, organize, collect, and disseminate big data accurately and efficiently.
  • The ability to design databases, construct data models, perform data mining, and segment data.
  • Good understanding of statistical packages for analyzing large datasets (SAS, SPSS, Microsoft Excel, etc.).
  • Effective Problem-Solving, Teamwork, and Written and Verbal Communication Skills.
  • Excellent at writing queries, reports, and presentations.
  • Understanding of data visualization software including Tableau and Qlik.
  • The ability to create and apply the most accurate algorithms to datasets for finding solutions.

Data analysis generally refers to the process of assembling, cleaning, interpreting, transforming, and modeling data to gain insights or conclusions and generate reports to help businesses become more profitable.  The following diagram illustrates the various steps involved in the process:

  • Collect Data: The data is collected from a variety of sources and is then stored to be cleaned and prepared. This step involves removing all missing values and outliers.
  • Analyse Data: As soon as the data is prepared, the next step is to analyze it. Improvements are made by running a model repeatedly. Following that, the model is validated to ensure that it is meeting the requirements.
  • Create Reports: In the end, the model is implemented, and reports are generated as well as distributed to stakeholders.

While analyzing data, a Data Analyst can encounter the following issues:

  • Duplicate entries and spelling errors. Data quality can be hampered and reduced by these errors.
  • The representation of data obtained from multiple sources may differ. It may cause a delay in the analysis process if the collected data are combined after being cleaned and organized.
  • Another major challenge in data analysis is incomplete data. This would invariably lead to errors or faulty results.
  • You would have to spend a lot of time cleaning the data if you are extracting data from a poor source.
  • Business stakeholders’ unrealistic timelines and expectations
  • Data blending/ integration from multiple sources is a challenge, particularly if there are no consistent parameters and conventions
  • Insufficient data architecture and tools to achieve the analytics goals on time.

Data cleaning, also known as data cleansing or data scrubbing or wrangling, is basically a process of identifying and then modifying, replacing, or deleting the incorrect, incomplete, inaccurate, irrelevant, or missing portions of the data as the need arises. This fundamental element of data science ensures data is correct, consistent, and usable.

Some of the tools useful for data analysis include:

  • RapidMiner
  • Google Search Operators
  • Google Fusion Tables
  • Solver
  • NodeXL
  • OpenRefine
  • Wolfram Alpha
  • io
  • Tableau, etc.

Data mining Process: It generally involves analyzing data to find relations that were not previously discovered. In this case, the emphasis is on finding unusual records, detecting dependencies, and analyzing clusters. It also involves analyzing large datasets to determine trends and patterns in them.

Data Profiling Process: It generally involves analyzing that data’s individual attributes. In this case, the emphasis is on providing useful information on data attributes such as data type, frequency, etc. Additionally, it also facilitates the discovery and evaluation of enterprise metadata.

Data Mining vs Data Profiling:

  • Data mining involves analyzing a pre-built database to identify patterns; data profiling involves analyzing raw data from existing datasets.
  • Data mining analyzes existing databases and large datasets to convert raw data into useful information; data profiling collects statistical or informative summaries of the data.
  • Data mining usually involves finding hidden patterns and seeking out new, useful, and non-trivial data to generate useful information; data profiling usually involves evaluating data sets for consistency, uniqueness, and logic.
  • Data mining is incapable of identifying inaccurate or incorrect data values; in data profiling, erroneous data is identified during the initial stage of analysis.
  • Classification, regression, clustering, summarization, estimation, and description are some primary data mining tasks; data profiling uses discoveries and analytical methods to gather statistics or summaries about the data.

In the process of data validation, it is important to determine the accuracy of the information as well as the quality of the source. Datasets can be validated in many ways. Methods of data validation commonly used by Data Analysts include:

  • Field Level Validation: This method validates data as and when it is entered into the field. The errors can be corrected as you go.
  • Form Level Validation: This type of validation is performed after the user submits the form. A data entry form is checked at once, every field is validated, and highlights the errors (if present) so that the user can fix them.
  • Data Saving Validation: This technique validates data when a file or database record is saved. The process is commonly employed when several data entry forms must be validated.
  • Search Criteria Validation: It effectively validates the user’s search criteria in order to provide the user with accurate and related results. Its main purpose is to ensure that the search results returned by a user’s query are highly relevant.

In a dataset, outliers are values that differ significantly from the rest of the data. An outlier may indicate either variability in the measurement or an experimental error. There are two kinds of outliers: univariate and multivariate. The graph depicted below shows there are four outliers in the dataset.

Outliers are detected using two methods:

  • Box Plot Method: According to this method, a value is considered an outlier if it lies more than 1.5*IQR (interquartile range) above the top quartile (Q3) or more than 1.5*IQR below the bottom quartile (Q1).
  • Standard Deviation Method: According to this method, an outlier is defined as a value that falls outside the range mean ± (3*standard deviation).
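Both methods can be sketched in pure Python (the quartile computation is deliberately crude, for illustration only):

```python
import statistics

def outliers_iqr(values):
    """Flag values more than 1.5*IQR outside the quartiles."""
    xs = sorted(values)
    n = len(xs)
    q1 = xs[n // 4]            # crude quartile positions; fine for a sketch
    q3 = xs[(3 * n) // 4]
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in values if x < lo or x > hi]

def outliers_std(values, k=3):
    """Flag values outside mean ± k standard deviations."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [x for x in values if abs(x - mean) > k * sd]

data = [10, 12, 11, 13, 12, 11, 95]   # 95 is the obvious outlier
print(outliers_iqr(data))             # → [95]
```

Note that the 3-sigma rule is less sensitive on tiny samples, since a single extreme value inflates the standard deviation itself.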

Data Analysis: It generally involves extracting, cleansing, transforming, modeling, and visualizing data in order to obtain useful and important information that may contribute towards determining conclusions and deciding what to do next. Analyzing data has been in use since the 1960s.
Data Mining: In data mining, also known as knowledge discovery in the database, huge quantities of knowledge are explored and analyzed to find patterns and rules. Since the 1990s, it has been a buzzword.

Data Analysis Data Mining
Analyzing data provides insight or tests hypotheses. A hidden pattern is identified and discovered in large datasets.
It consists of collecting, preparing, and modeling data in order to extract meaning or insights. This is considered as one of the activities in Data Analysis.
Data-driven decisions can be taken using this way. Data usability is the main objective.
Data visualization is certainly required. Visualization is generally not necessary.
It is an interdisciplinary field that requires knowledge of computer science, statistics, mathematics, and machine learning. Databases, machine learning, and statistics are usually combined in this field.
Here the dataset can be large, medium, or small, and it can be structured, semi-structured, and unstructured. In this case, datasets are typically large and structured.

A KNN (K-nearest neighbor) model is usually considered one of the most common techniques for imputation. It allows a point in multidimensional space to be matched with its closest k neighbors. By using the distance function, two attribute values are compared. Using this approach, the closest attribute values to the missing values are used to impute these missing values.
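A minimal sketch of KNN imputation (toy data and a simple Euclidean distance over the complete columns; not a production implementation):

```python
import math

def knn_impute(rows, target_idx, k=2):
    """Fill missing values (None) in column target_idx with the mean of
    that column over the k nearest complete rows."""
    complete = [r for r in rows if r[target_idx] is not None]
    for r in rows:
        if r[target_idx] is None:
            # Rank complete rows by distance over the non-target columns.
            nearest = sorted(
                complete,
                key=lambda c: math.dist(
                    [v for i, v in enumerate(r) if i != target_idx],
                    [v for i, v in enumerate(c) if i != target_idx],
                ),
            )[:k]
            r[target_idx] = sum(n[target_idx] for n in nearest) / k
    return rows

data = [[1.0, 2.0, 10.0], [1.1, 2.1, 12.0], [5.0, 5.0, 50.0], [1.05, 2.05, None]]
print(knn_impute(data, target_idx=2, k=2)[-1][2])  # → 11.0
```

The missing row sits near the first two rows, so their target values (10 and 12) are averaged into 11.0 rather than being pulled toward the distant row.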

Known as the bell curve or the Gauss distribution, the Normal Distribution plays a key role in statistics and is the basis of Machine Learning. It generally defines and measures how the values of a variable differ in their means and standard deviations, that is, how their values are distributed.

The above image illustrates how data usually tend to be distributed around a central value with no bias on either side. In addition, the random variables are distributed according to symmetrical bell-shaped curves.

The term data visualization refers to a graphical representation of information and data. Data visualization tools enable users to easily see and understand trends, outliers, and patterns in data through the use of visual elements like charts, graphs, and maps. Data can be viewed and analyzed in a smarter way, and it can be converted into diagrams and charts with the use of this technology.

Data visualization has grown rapidly in popularity due to its ease of viewing and understanding complex data in the form of charts and graphs. In addition to providing data in a format that is easier to understand, it highlights trends and outliers. The best visualizations illuminate meaningful information while removing noise from data.

Several Python libraries that can be used in data analysis include:

  • NumPy
  • Bokeh
  • Matplotlib
  • Pandas
  • SciPy
  • SciKit, etc.

Hash tables are usually defined as data structures that store data in an associative manner. In this, data is generally stored in array format, which allows each data value to have a unique index value. Using the hash technique, a hash table generates an index into an array of slots from which we can retrieve the desired value.

Hash table collisions are typically caused when two keys have the same index. Collisions, thus, result in a problem because two elements cannot share the same slot in an array. The following methods can be used to avoid such hash collisions:

  • Separate chaining technique: This method stores multiple items that hash to the same slot in a secondary data structure, such as a linked list.
  • Open addressing technique: This technique locates unfilled slots and stores the item in the first unfilled slot it finds.
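A minimal separate-chaining table in Python (a deliberately tiny slot count forces collisions so the chains are exercised):

```python
class ChainedHashTable:
    """Hash table resolving collisions by separate chaining: each slot
    holds a list of [key, value] pairs."""
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _slot(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._slot(key)]
        for pair in bucket:
            if pair[0] == key:        # key already present: update in place
                pair[1] = value
                return
        bucket.append([key, value])   # collision or new key: extend the chain

    def get(self, key):
        for k, v in self.buckets[self._slot(key)]:
            if k == key:
                return v
        raise KeyError(key)

t = ChainedHashTable(size=2)          # only 2 slots, so collisions are certain
t.put("a", 1); t.put("b", 2); t.put("c", 3)
print(t.get("c"))  # → 3
```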

An effective data model must possess the following characteristics in order to be considered good and developed:

  • Provides predictable performance, so outcomes can be estimated as precisely as possible.
  • As business demands change, it should be adaptable and responsive to accommodate those changes as needed.
  • The model should scale proportionally to the change in data.
  • Clients/customers should be able to reap tangible and profitable benefits from it.

The following are some disadvantages of data analysis:

  • Data Analytics may put customer privacy at risk and result in compromising transactions, purchases, and subscriptions.
  • Tools can be complex and require previous training.
  • Choosing the right analytics tool every time requires a lot of skills and expertise.
  • It is possible to misuse the information obtained with data analytics by targeting people with certain political beliefs or ethnicities.

Based on user behavioral data, collaborative filtering (CF) creates a recommendation system. By analyzing data from other users and their interactions with the system, it filters out information. This method assumes that people who agree in their evaluation of particular items will likely agree again in the future. Collaborative filtering has three major components: users, items, and interests.

  • Data science involves the task of transforming data by using various technical analysis methods to extract meaningful insights using which a data analyst can apply to their business scenarios.
  • Data analytics deals with checking the existing hypothesis and information and answers questions for a better and effective business-related decision-making process.
  • Data Science drives innovation by answering questions that build connections and answers for futuristic problems. Data analytics focuses on getting present meaning from existing historical context whereas data science focuses on predictive modeling.
  • Data Science can be considered as a broad subject that makes use of various mathematical and scientific tools and algorithms for solving complex problems whereas data analytics can be considered as a specific field dealing with specific concentrated problems using fewer tools of statistics and visualization.

Data analysis cannot be performed on the whole volume of data at a time, especially when it involves larger datasets. It becomes crucial to take data samples that can represent the whole population and then perform analysis on them. While doing this, it is necessary to carefully select sample data that truly represents the entire dataset.

There are majorly two categories of sampling techniques based on the usage of statistics, they are:

  • Probability Sampling techniques: Clustered sampling, Simple random sampling, Stratified sampling.
  • Non-Probability Sampling techniques: Quota sampling, Convenience sampling, snowball sampling, etc.

Differentiate between the long and wide data formats.

Long format vs wide format:

  • In long format, each row represents one-time information about a subject, and each subject has its data spread over multiple rows; in wide format, the repeated responses of a subject appear in separate columns.
  • Long-format data can be recognized by considering rows as groups; wide-format data can be recognized by considering columns as groups.
  • Long format is most commonly used in R analyses and for writing to log files after each trial; wide format is rarely used in R analyses and is most common in stats packages for repeated-measures ANOVAs.

A p-value is the probability of obtaining results equal to or more extreme than the observed results, assuming that the null hypothesis is correct. It represents the probability that the observed difference occurred by random chance.

  • A low p-value (≤ 0.05) means the null hypothesis can be rejected: the data is unlikely under a true null.
  • A high p-value (≥ 0.05) indicates strength in favor of the null hypothesis: the data is likely under a true null.
  • A p-value of exactly 0.05 means the hypothesis can go either way.

Resampling is a methodology used to sample data for improving accuracy and quantify the uncertainty of population parameters. It is done to ensure the model is good enough by training the model on different patterns of a dataset to ensure variations are handled. It is also done in the cases where models need to be validated using random subsets or when substituting labels on data points while performing tests.

Data is said to be highly imbalanced if it is distributed unequally across different categories. These datasets result in an error in model performance and result in inaccuracy.

There are not many differences between these two, but it is to be noted that they are used in different contexts. The mean value is generally used in the context of a probability distribution, whereas the expected value is used in contexts involving random variables.

This bias refers to the logical error while focusing on aspects that survived some process and overlooking those that did not work due to lack of prominence. This bias can lead to deriving wrong conclusions.

  • KPI: KPI stands for Key Performance Indicator that measures how well the business achieves its objectives.
  • Lift: This is a performance measure of the target model measured against a random choice model. Lift indicates how good the model is at prediction versus if there was no model.
  • Model fitting: This indicates how well the model under consideration fits given observations.
  • Robustness: This represents the system’s capability to handle differences and variances effectively.
  • DOE: stands for the design of experiments, which represents the task design aiming to describe and explain information variation under hypothesized conditions to reflect variables.

Confounding variables are also known as confounders. These variables are a type of extraneous variable that influences both the independent and dependent variables, causing spurious associations and mathematical relationships between variables that are associated but not causally related to each other.

Selection bias occurs when the researcher has to decide which participants to study and the selection is not random. It is also called the selection effect, and it results from the method of sample collection.

Four types of selection bias are explained below:

  1. Sampling Bias: As a result of a population that is not random at all, some members of a population have fewer chances of getting included than others, resulting in a biased sample. This causes a systematic error known as sampling bias.
  2. Time interval: Trials may be stopped early if an extreme value is reached, but if all variables have similar variance, the variable with the highest variance has a higher chance of achieving the extreme value.
  3. Data: It is when specific data is selected arbitrarily and the generally agreed criteria are not followed.
  4. Attrition: Attrition in this context means the loss of the participants. It is the discounting of those subjects that did not complete the trial.

Define the bias-variance trade-off.

Let us first understand the meaning of bias and variance in detail:

Bias: It is a kind of error in a machine learning model that arises when an ML algorithm is oversimplified. When a model is trained, it makes simplified assumptions so that it can easily approximate the target function. Some algorithms that have low bias are Decision Trees, SVM, etc. On the other hand, logistic and linear regression algorithms are ones with high bias.

Variance: Variance is also a kind of error. It is introduced into an ML model when the algorithm is made highly complex. Such a model also learns noise from the training data set and then performs badly on the test data set. This may lead to overfitting as well as high sensitivity.

When the complexity of a model is increased, a reduction in the error is seen, caused by the lower bias in the model. However, this only continues until we reach a particular point, called the optimal point. After this point, if we keep increasing the complexity of the model, it will be overfitted and will suffer from the problem of high variance.

Trade-off Of Bias And Variance: So, as we know that bias and variance, both are errors in machine learning models, it is very essential that any machine learning model has low variance as well as a low bias so that it can achieve good performance.

Let us see some examples. The K-Nearest Neighbor algorithm is a good example of an algorithm with low bias and high variance. This trade-off can easily be reversed by increasing the k value, i.e. the number of neighbours considered, which increases the bias and reduces the variance.

Another example can be the algorithm of a support vector machine. This algorithm also has a high variance and obviously, a low bias and we can reverse the trade-off by increasing the value of parameter C. Thus, increasing the C parameter increases the bias and decreases the variance.

So, the trade-off is simple. If we increase the bias, the variance will decrease and vice versa.

It is a matrix with 2 rows and 2 columns that summarizes the 4 possible outcomes of a binary classifier. It is used to derive various measures like specificity, error rate, accuracy, precision, sensitivity, and recall.

The test data set should contain both the correct and the predicted labels. If the binary classifier performs perfectly, the predicted labels are identical to the correct ones; in real-world scenarios they match only part of the observed labels. The four outcomes shown in the confusion matrix mean the following:

  1. True Positive: This means that the positive prediction is correct.
  2. False Positive: This means that the positive prediction is incorrect.
  3. True Negative: This means that the negative prediction is correct.
  4. False Negative: This means that the negative prediction is incorrect.

The formulas for calculating basic measures that comes from the confusion matrix are:

  1. Error rate: (FP + FN)/(P + N)
  2. Accuracy: (TP + TN)/(P + N)
  3. Sensitivity = TP/P
  4. Specificity = TN/N
  5. Precision = TP/(TP + FP)
  6. F-Score = (1 + b²)(Precision × Recall)/(b² × Precision + Recall). Here, b is mostly 0.5, 1, or 2.

In these formulas:

FP = false positive
FN = false negative
TP = true positive
TN = true negative


Sensitivity is the measure of the True Positive Rate. It is also called recall.
Specificity is the measure of the true negative rate.
Precision is the measure of a positive predicted value.
F-score is the harmonic mean of precision and recall.
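These definitions can be checked with a small helper function (the counts below are made up for illustration):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Derive the basic measures from the four confusion-matrix outcomes."""
    p, n = tp + fn, tn + fp              # actual positives / actual negatives
    return {
        "error_rate":  (fp + fn) / (p + n),
        "accuracy":    (tp + tn) / (p + n),
        "sensitivity": tp / p,           # recall / true positive rate
        "specificity": tn / n,           # true negative rate
        "precision":   tp / (tp + fp),   # positive predicted value
    }

m = confusion_metrics(tp=40, fp=10, tn=45, fn=5)
print(m["accuracy"])   # → 0.85
print(m["precision"])  # → 0.8
```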

Logistic Regression is also known as the logit model. It is a technique to predict the binary outcome from a linear combination of variables (called the predictor variables).

For example, let us say that we want to predict the outcome of elections for a particular political leader. So, we want to find out whether this leader is going to win the election or not. So, the result is binary i.e. win (1) or loss (0). However, the input is a combination of linear variables like the money spent on advertising, the past work done by the leader and the party, etc.
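The election example can be sketched with the logistic (sigmoid) function; the weights and bias below are made up purely for illustration, not fitted to any data:

```python
import math

def sigmoid(z):
    """Map any real number to a probability in (0, 1)."""
    return 1 / (1 + math.exp(-z))

def predict_win(ad_spend, past_terms, weights, bias):
    """Hypothetical election example: a linear combination of predictors
    is squashed into a win probability, then thresholded at 0.5."""
    z = weights[0] * ad_spend + weights[1] * past_terms + bias
    prob = sigmoid(z)
    return (1 if prob >= 0.5 else 0), prob

label, prob = predict_win(ad_spend=2.0, past_terms=3, weights=[0.8, 0.5], bias=-2.0)
print(label)  # → 1
```

In a real logistic regression the weights and bias would be learned from labeled historical outcomes, typically by maximum likelihood.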

Linear regression is a technique in which the score of a variable Y is predicted using the score of a predictor variable X. Y is called the criterion variable. Some of the drawbacks of Linear Regression are as follows:

  • The assumption of linearity of errors is a major drawback.
  • It cannot be used for binary outcomes. We have Logistic Regression for that.
  • It is prone to overfitting problems that cannot be easily solved.

Classification is very important in machine learning. It is very important to know to which class an observation belongs. Hence, we have various classification algorithms in machine learning like logistic regression, support vector machine, decision trees, Naive Bayes classifier, etc. One such classification technique that is near the top of the classification hierarchy is the random forest classifier.

So, firstly we need to understand a decision tree before we can understand the random forest classifier and its works. So, let us say that we have a string as given below:

So, we have the string with 5 ones and 4 zeroes and we want to classify the characters of this string using their features. These features are colour (red or green in this case) and whether the observation (i.e. character) is underlined or not. Now, let us say that we are only interested in red and underlined observations. So, the decision tree would look something like this:

So, we started with the colour first as we are only interested in the red observations and we separated the red and the green-coloured characters. After that, the “No” branch i.e. the branch that had all the green coloured characters was not expanded further as we want only red-underlined characters. So, we expanded the “Yes” branch and we again got a “Yes” and a “No” branch based on the fact whether the characters were underlined or not.

So, this is how we draw a typical decision tree. However, the data in real life is not this clean but this was just to give an idea about the working of the decision trees. Let us now move to the random forest.

Random Forest

It consists of a large number of decision trees that operate as an ensemble. Basically, each tree in the forest gives a class prediction and the one with the maximum number of votes becomes the prediction of our model. For instance, in the example shown below, 4 decision trees predict 1, and 2 predict 0. Hence, prediction 1 will be considered.

The underlying principle of a random forest is that several weak learners combine to form a strong learner. The steps to build a random forest are as follows:

  • Build several decision trees on the samples of data and record their predictions.
  • Each time a split is considered for a tree, choose a random sample of m predictors as the split candidates out of all p predictors. This happens to every tree in the random forest.
  • Apply the rule of thumb: at each split, m ≈ √p.
  • Apply the predictions to the majority rule.
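The majority-vote step described above can be sketched as:

```python
from collections import Counter

def forest_predict(tree_predictions):
    """Majority vote over the class predicted by each tree in the ensemble."""
    votes = Counter(tree_predictions)
    return votes.most_common(1)[0][0]

# Matching the example in the text: 4 trees predict 1, 2 trees predict 0,
# so the forest's prediction is 1.
print(forest_predict([1, 1, 1, 1, 0, 0]))  # → 1
```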

Let us say that Prob is the probability that we may see a minimum of one shooting star in 15 minutes.

So, Prob = 0.2

Now, the probability that we may not see any shooting star in the time duration of 15 minutes is = 1 – Prob

1-0.2 = 0.8

The probability that we may not see any shooting star for an hour is:

= (1 − Prob) × (1 − Prob) × (1 − Prob) × (1 − Prob)
= 0.8 * 0.8 * 0.8 * 0.8 = (0.8)⁴
≈ 0.40

So, the probability that we will see one shooting star in the time interval of an hour is = 1-0.4 = 0.6

So, there are approximately 60% chances that we may see a shooting star in the time span of an hour.
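The same arithmetic, checked in Python:

```python
p_15min = 0.2                    # P(at least one shooting star in 15 minutes)
p_none_15 = 1 - p_15min          # 0.8: no star in a 15-minute window
p_none_hour = p_none_15 ** 4     # four independent 15-minute windows in an hour
p_at_least_one = 1 - p_none_hour

print(round(p_none_hour, 4))     # → 0.4096
print(round(p_at_least_one, 4))  # → 0.5904
```

So the exact answer is about 59%, which the text rounds to roughly 60%.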

Deep learning is a paradigm of machine learning. In deep learning,  multiple layers of processing are involved in order to extract high features from the data. The neural networks are designed in such a way that they try to simulate the human brain.

Deep learning has shown incredible performance in recent years because of the fact that it shows great analogy with the human brain.

The difference between machine learning and deep learning is that deep learning is a paradigm or a part of machine learning that is inspired by the structure and functions of the human brain called the artificial neural networks. Learn More.

Gradient: The gradient is a measure of how much the output of a function changes with respect to a small change in its input. In the context of a neural network, it measures how the error changes with respect to a change in the weights. Mathematically, the gradient can be represented as the slope of a function.

Gradient Descent: Gradient descent is a minimization algorithm. It is most commonly used to minimize a model’s cost (loss) function, though it can minimize any differentiable function given to it.

Gradient descent, as the name suggests, means a descent or a decrease in something. The analogy of gradient descent is often taken as a person climbing down a hill/mountain. The equation describing the gradient descent update is:

b = a – γ ∇f(a)

So, if a person is climbing down the hill, the next position the climber moves to is denoted by “b” in this equation, and the current position by “a”. The minus sign denotes minimization (as gradient descent is a minimization algorithm). Gamma (γ) is the learning rate, or step size, and the remaining gradient term ∇f(a) gives the direction of steepest ascent, so subtracting it moves the climber in the direction of steepest descent.
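The update rule above can be sketched with a minimal example: minimizing f(x) = x², whose gradient is f′(x) = 2x. The starting point, learning rate, and step count below are illustrative choices:

```python
# Minimal gradient-descent sketch for f(x) = x^2, gradient f'(x) = 2x.
def gradient_descent(x0, gamma=0.1, steps=100):
    x = x0
    for _ in range(steps):
        grad = 2 * x          # gradient of f at the current position
        x = x - gamma * grad  # b = a - gamma * grad(f(a))
    return x

x_min = gradient_descent(x0=5.0)
print(x_min)  # very close to 0, the minimizer of x^2
```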

Types of Digital Marketing formats are as follows:

  • Social Media Profiles
  • Website
  • Images and Video Content
  • Blog Posts and eBooks
  • Reviews and Customer Testimonials
  • Branded Logos, Images, or Icons

This is the most frequently asked question in a digital marketing interview. In recent years, digital marketing has demonstrated immense power, and here are some of the most compelling reasons:

  • Directly relates to customers’ needs
  • Good exposure to product outreach and analytics
  • A more convenient approach to connect with people from all across the world
  • Changes can be implemented almost immediately if needed

  • Direct marketing aims to increase a company’s revenue by creating demand. The use of stories in brand marketing allows you to connect with your audience on a much deeper level.
  • Direct marketing has a direct impact on top-line revenue. Typically, a high level of urgency and priority is assigned. Brand marketing has a long-term impact on brand equity and serves as a barrier to market pressures. It’s not urgent, but it’s critical.
  • Testing and measuring are often on the minds of direct marketers. Differentiation is something that brand marketers consider.
  • Response, leads, conversion, and sales are the KPIs used in direct marketing. The focus of brand marketers is on KPIs like awareness, recognition, and engagement.

  • Skills and training– Make sure your staff has the expertise and experience they need to properly deploy digital marketing. Tools, platforms, and trends change quickly, so being up to date is essential.
  • Time-consuming – Duties like optimizing internet advertising campaigns and developing marketing content can consume a significant amount of time.
  • High competition – While internet marketing allows you to access a global audience, it also puts you in direct rivalry with competitors worldwide. It can be challenging to stand out from the crowd and capture attention amid the many messages consumers encounter online.
  • Complaints and feedback – Any nasty comments or criticism of your brand might be seen by your target audience on social media and review sites. It might be challenging to provide excellent customer service online.

  • Short DIY videos
  • Telling a real story or example
  • Audience focused content
  • Personalized content
  • Using AI in content
  • Google Discover
  • Marketers using NFTs

In Digital Marketing, there are a variety of techniques that can be utilized to achieve a specific aim. Here are a few examples:

  • Google Analytics

Google has developed a free analytics platform that lets you track the performance of your website, app, and social media channels. You can also calculate your advertising ROI from here.

  • Ahrefs

Ahrefs is an excellent tool for backlinks and SEO analysis.

  • Mailchimp

Mailchimp is an email marketing tool that lets you manage and communicate with clients, consumers, and other interested parties from one central location.

  • Google Keyword Planner

The Google Keyword Planner is a tool that can assist you in determining keywords for your Search Network campaigns. It’s a free app that allows you to locate keywords relevant to your business and see how many monthly searches they generate and how much it costs to target them.

  • Kissmetrics

Kissmetrics is a comprehensive online analytics software that provides critical website insights and customer engagement.

  • Keyword Discovery

Keyword Discovery provides you with access to the world’s most comprehensive keyword index, which is compiled from all search engines. It gives you access to consumer search keywords for goods and services, as well as the search terms that direct users to your competitors’ websites.

  • SEMrush

Semrush is a comprehensive toolkit for gaining web visibility and learning about marketing. SEMrush tools and reports will benefit marketers in SEO, PPC, SMM, Keyword Research, Competitive Research, and content marketing services.

  • Buffer App

Buffer is a small-to-medium-sized business social media management application that allows users to create content, communicate with customers, and track their social media success. Buffer integrates social media networks such as Facebook, Instagram, and Twitter.

  • AdEspresso

AdEspresso is a simple and intuitive Facebook ad management and optimization tool.

Digital marketing can be categorized into Inbound Marketing and Outbound Marketing.

  • Inbound marketing pulls in customers who are already interested, whereas outbound marketing pushes messages out to a broad audience regardless of interest.
  • Inbound marketing is driven by the consumer’s needs, whereas outbound marketing is driven by the needs of the product being promoted.

Customer – The person who receives the message.

Content – The message that the customer sees is referred to as content.

Context – The setting in which the customer sees the message, i.e., where and when it appears.

Conversation – This is when you and your consumer have a conversation.

Dofollow: These links allow search engine crawlers to follow a link to give it a boost from search engine result pages and to pass on the link juice (reputation passed on to another website) to the destination site.

Eg: <a href="">XYZ</a>

Nofollow: These links don’t allow search engine crawlers to follow the link. They also don’t pass on any link juice to the destination domain.

Eg: <a href="" rel="nofollow">XYZ</a>

The 301 redirect tells the user and search engine bots that the original web page has permanently moved to another location. 302, on the other hand, only serves as a temporary redirect and does not pass on the link juice to the new location.

Backlinks are external links to your website that can help with:

  • Improved credibility for your website
  • Increased domain authority
  • Gaining organic search ranking
  • Increased referral traffic

  • Create content that’s informative and engaging
  • Optimize your videos. Some ways to do this are:
    • The title needs to have a high search volume and low difficulty
    • The description needs to be relevant to the title you’ve chosen
    • Accurate and relevant video tags need to be used
    • Your title tag must be under 100 characters
    • Use a captivating thumbnail and relevant hashtag
    • Promote your content on other social media platforms

Google uses content that’s mobile-friendly for indexing and ranking websites. If your website has a responsive design, Google will rank it higher up on search engine results. It bases this ranking on how well your website performs on mobile devices.

  • Creating a webpage for each product and service
  • Opting for a business listing on the Google My Business webpage
  • Updating NAP citations on your website and maintaining consistency
  • Embedding a Google Map
  • Optimizing your meta tags and content
  • Adding your business to local libraries
  • Opting for reviews and ratings

Some of the ways to avoid this are:

  • To use a 301 redirect to the original URL
  • To use a rel=canonical attribute to the original content. This tells the search engine that a specific URL represents the original webpage
  • To opt for a preferred domain in the Google Search Console. This can redirect spiders to crawl web pages in different ways.

  • Using simple website design
  • By optimizing images
  • Improving server response times
  • Reducing redirects
  • Enabling browser caching
  • Opting for a Content Delivery Network (CDN)


Short tail keywords work best when the objective is to drive many visitors to your website. On the other hand, long-tail keywords are usually used for targeted pages like product pages and articles.

  • Site architecture and navigation: Good site architecture is important for bots to access and index the content easily
  • Responsive design: This makes your website user and mobile-friendly. It will also increase the amount of time a user spends on your site, by extension also improving your site’s ranking
  • Create sitemap: It helps search engine bots understand the structure of your website
  • robots.txt: This instructs search engine crawlers which pages should not be crawled or indexed. It is added to the website’s root directory
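For illustration, a minimal robots.txt might look like the following (the paths and sitemap URL here are hypothetical):

```text
User-agent: *
Disallow: /admin/
Disallow: /cart/
Sitemap: https://www.example.com/sitemap.xml
```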

Next up in this digital marketing interview questions article, let’s cover questions related to Search Engine Marketing.

Some of the most effective and proven ways to increase traffic to your website include:

  • Optimizing your content with relevant keywords
  • Creating targeted and user-friendly landing pages
  • Creating super engaging and high-quality, unique content
  • Using digital ads to promote your website
  • Improving your local search

From a bird’s-eye view, on-page SEO refers to the factors and techniques focused on optimizing aspects of your site that are under your control. Off-page SEO, on the other hand, refers to the optimization factors and strategies focused on promoting your site or brand elsewhere on the web.

AMP, i.e., Accelerated Mobile Pages, allows you to create mobile pages that load almost instantly, helping visitors better engage with the page instead of bouncing off.

There are a lot of components that make up the .NET framework, and some of them are as follows:

  • .NET Class Library
  • .NET Framework
  • Common Language Runtime
  • Application Domain
  • Profiling

JIT is the abbreviation of Just-In-Time. It is a compiler that converts intermediate code into native code at runtime.

In .NET, source code is first compiled into intermediate byte code (MSIL). During execution, the JIT compiler converts this byte code into native machine code, which is then processed by the CPU.

MSIL is the abbreviation for Microsoft Intermediate Language. It is used to provide the instructions required for operations such as memory handling, exception handling, and more. It can also provide instructions to initialize and store values and methods easily.

The next .NET interview question involves an important concept.

The acronym CTS stands for Common Type System, which encompasses a predefined set of systematic regulations dictating the proper definition of data types in relation to the values provided by a user. Its primary purpose is to characterize and encompass all the various data types utilized within a user’s application.

Nevertheless, it is also permissible for users to construct custom types using the guidelines established by the CTS. This feature is primarily provided to accommodate the diverse range of programming languages that can be employed when working within the .NET framework.

CLR stands for Common Language Runtime. It is the most vital component of .NET, as it provides the foundation on which applications run.

If a user writes an application in C#, it gets compiled and converted to intermediate code. After this, CLR takes up the code and works on it with respect to the following aspects:

  • Memory management
  • Security protocols
  • Libraries for loading
  • Thread management

Managed Code Unmanaged Code
Managed by CLR Not managed by any entity
Garbage collection is used to manage memory The runtime environment takes care of the management
The .NET framework is necessary for execution Not dependent on the .NET framework to run

There are four main steps in the execution of managed code. They are as follows:

  1. Choosing a compiler that can execute the code written by a user
  2. Conversion of the code into Intermediate Language (IL) using a compiler
  3. IL gets pushed to CLR, which converts it into native code using JIT
  4. Native code is now executed using the .NET runtime

State management, as the name suggests, is used to constantly monitor and maintain the state of objects in the runtime. A web page or a controller is considered to be an object.

There are two types of state management:

  • Client-side: It is used to store information on the client’s machine and is formed mostly of reusable and simple objects.
  • Server-side: It stores the information on the server and makes it easier to manage and preserve the information on the server.


Object Class
An instance of a class The template for creating an object
A class becomes an object after instantiation The basic scaffolding of an object
Used to access properties from a class The description of methods and properties

system.stringbuilder system.string
Mutable Immutable
Supports appending via the Append method Cannot be appended in place; modifications create a new string
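The mutable/immutable distinction can be illustrated with an analogous pattern in Python, where str is likewise immutable and io.StringIO plays the mutable-buffer role (this is an analogy, not the .NET API):

```python
import io

# Strings are immutable: "appending" builds a brand-new object.
s = "Hello"
s2 = s + " World"       # new string; s itself is unchanged
print(s)                # -> Hello
print(s2)               # -> Hello World

# A mutable buffer supports in-place appends, much like StringBuilder.Append.
buf = io.StringIO()
buf.write("Hello")
buf.write(" World")     # appended in place, no new buffer created
print(buf.getvalue())   # -> Hello World
```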

LINQ is the abbreviated form of Language Integrated Query. It was first released with .NET Framework 3.5 in 2007, and it provides users with a lot of extra features when working with the .NET framework. One highlight is that it allows users to manipulate data without any dependency on the data’s source.

An assembly is a collection of logical units — the entities required to build an application and later deploy it using the .NET framework. It can be considered a collection of executables and DLL files.

There are four main components of an assembly. They are as follows:

  1. Resource: A collection of related files
  2. MSIL: The Intermediate Language code
  3. Metadata: The binary data of a program
  4. Manifest: A collection of information about the assembly

Caching is a term used when the data has to be temporarily stored in the memory so that an application can access it quickly rather than looking for it in a hard drive. This speeds up the execution to an exponential pace and helps massively in terms of performance.

There are three types of caching:

  • Data caching
  • Page caching
  • Fragment caching
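In application code, the same idea appears as memoization; Python’s functools.lru_cache is a simple in-memory data cache (an illustrative analogy, not a .NET-specific feature — the function and values below are hypothetical):

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def expensive_lookup(key):
    """Stand-in for a slow fetch from disk or a database."""
    global calls
    calls += 1          # counts how often the slow path actually runs
    return key.upper()

first = expensive_lookup("price")    # computed and stored in the cache
second = expensive_lookup("price")   # served from the cache, no recompute
print(first, second, calls)          # -> PRICE PRICE 1
```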

The next .NET interview question we will check involves an important concept.

Function Stored Procedure
Can only return one value Can return any number of values
No support for exception handling using try-catch blocks Supports the usage of try-catch blocks for exception handling
Accepts only input parameters Supports both input and output parameters
A function can be called from a stored procedure The stored procedure cannot be called from a function

There are five main types of constructor classes in C# as listed below:

  • Copy constructor
  • Default constructor
  • Parameterized constructor
  • Private constructor
  • Static constructor

There are numerous advantages of making use of a session, as mentioned below:

  • It is used to store user data across the span of an application.
  • It is very easy to implement and store any sort of object in the program.
  • Individual entities of user data can be stored separately if required.
  • The session is secure, and objects get stored on the runtime server.

Manifest is mainly used to store the metadata of the assembly. It contains a variety of metadata which is required for many things as given below:

  • Assembly version information
  • Security identification
  • Scope checking of the assembly
  • Reference validation of classes

Memory-mapped files in .NET are used to instantiate the contents of a logical file into the address of the application. This helps a user run processes simultaneously on a single machine and have the data shared between processes.

The MemoryMappedFile.CreateFromFile() method is used to obtain a memory-mapped file object easily.
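The same concept exists outside .NET; a minimal sketch using Python’s standard mmap module shows a file’s contents being read and modified directly through the mapping (the file name and contents here are illustrative):

```python
import mmap
import os
import tempfile

# Create a small temporary file to map.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello mmap")

# Map the file's contents into this process's address space.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        data = bytes(mm[:5])   # reads come straight from the mapping
        mm[0:5] = b"HELLO"     # writes go straight back to the file

with open(path, "rb") as f:
    updated = f.read()
os.remove(path)

print(data, updated)  # -> b'hello' b'HELLO mmap'
```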

CAS stands for Code Access Security. It is vital in preventing unauthorized access to programs and resources at runtime. It can grant code limited access to perform only certain operations, rather than granting everything at a given point in time. CAS forms part of the native .NET security architecture.

  • Selecting readable fonts.
  • Picking colors that are both attractive and easy-to-read.
  • Using colors, fonts, and layouts to implement a brand’s identity.
  • Create a map of the website’s structure to ensure a seamless navigation experience.
  • Integrating images, texts, logos, videos, and other elements.
  • Designing optimized versions of websites and pages for desktops and mobile devices.

The following are some common languages used in web design. Among the following, the most fundamental language for web design is HTML, which will provide you with a solid foundation for designing websites and web applications. In fact, it is probably the first language taught to a web designer, thereby making it an essential tool for all designers to have.

  • HTML (Hypertext Markup Language)
  • CSS (Cascading Style Sheets)
  • Java
  • JavaScript
  • SQL (Structured Query Language)
  • PHP (Hypertext Preprocessor)
  • Python

The responsive web design process involves creating a web page that “responds” to or resizes itself to fit whatever screen size the viewer is using. This involves the use of HTML and CSS to resize, shrink, hide, and enlarge a website so that it appears correctly on all device resolutions (desktops, laptops, tablets, and phones). With responsive design, you can have one website that adapts to different screens and devices according to their sizes.

Background images provide a visually appealing and interactive background to a website. These images can be applied in many ways.

  1. In the body tag, pass the URL or location path of a background image in the background attribute.


<body background="Image URL or path">
Website Body
</body>



<!DOCTYPE html>
<html>
<head>
<title>Scaler Academy</title>
</head>
<body background="scaler.png">
<h1>Welcome to Scaler Family</h1>
<p><a href=""></a></p>
</body>
</html>

  2. Add CSS style properties.

Syntax

body {
background-image: url("URL of the image");
}

<!DOCTYPE html>
<html>
<head>
<style>
body {
background-image: url("scaler1.jpg");
}
</style>
</head>
<body>
<h1>Scaler Academy</h1>
</body>
</html>


By using a color like red, you can make the “Delete” button more visible, especially when two buttons have to be displayed simultaneously. Red is almost always used as a symbol of caution, so it is a great way to get the user’s attention.

In graphic design, grid systems provide a two-dimensional framework for aligning and laying out elements. This is a set of measurements that the designer can use to size and align objects within a certain format. It is composed of horizontal and vertical lines that intersect, allowing the contents to be arranged on the page. A grid can facilitate the positioning of multiple design elements on a website in a way that is visually appealing, facilitates a user’s flow, and improves accessibility and the attractiveness of information and visuals. A grid can also be broken down into several subtypes, each with its own specific application in web design.

Although the error message may seem innocuous, it has a big impact on the user’s experience. Error messages written poorly can frustrate users, while well-written error messages increase users’ subjective satisfaction and speed. One should consider the following factors when writing an error message:

  • Write in clear, simple language, without ambiguity, so that the issue can be understood easily.
  • Keep it concise, precise, and meaningful.
  • Be humble and don’t blame users as tone and language are major factors in how they interpret your message.
  • Refrain from using words that are negative.
  • Provide a solution to resolve the error.
  • Format your error messages correctly.
  • Don’t overwhelm users with error messages.

A few of the most common website design problems include:

  • An outdated or inadequate web design.
  • Poor website navigation.
  • Ineffective SEO strategies.
  • Convoluted or unclear user journeys.
  • Excessive use of images, icons, colors, and textures.
  • Lack of quality content.
  • Poor quality images.
  • Hidden contact details.
  • Upload time is extremely slow.
  • Mobile optimization is not available.


Information architecture (IA) refers to the process of planning, organizing, structuring, and labeling content in a comprehensive, logical, and sustainable manner. It serves as a means of structuring and classifying content in a way that is clear and understandable, so users can find what they are looking for with little effort. IA may also be used to redesign an existing product, rather than being limited to new products. Essentially, it determines how users will access your website information as well as how well their user experience will be.

An international organization, the World Wide Web Consortium (W3C) promotes web development. Members of the organization, a full-time staff, experts invited from around the world, and the public collaborate together to create Web Standards. Also, W3C develops standards for the World Wide Web (WWW) to facilitate interoperability and collaboration between all web stakeholders. Since its inception in 1994, W3C has endeavored to lead the web towards a fuller potential.

White space, also referred to as empty space or negative space in web design, refers to the unused space that surrounds the content and functional elements of a web page. With white space, you make your design breathe by minimizing the amount of text and functionality users see at once.

  • By using white space, you can visually group and separate elements, draw attention to a specific element, and reinforce a content grid or layout in web design and other media.
  • Aside from contributing to the harmony and balance of a design and assisting the branding process, whitespace can also be utilized to direct the reader from one element to the next.
  • This makes the website appear clean and uncluttered, providing readers with the information they will enjoy.


When you embed a video on a website using HTML5 video (instead of using YouTube or another video-hosting service), the website must ensure that the video is served in a format that the browser can play. The MP4 video format is currently supported by major browsers, operating systems, and devices (using MPEG-4 or H.264 compression). For Firefox clients and some Android devices that are not capable of playing MP4 videos, having copies of the video in OGV and WebM formats is helpful.

H1 tags can be used as many times as you like on a web page, as there is no upper or lower limit. A website’s H1 (title) tag is of great importance to search engines and other machines that read the code of a web page and interpret its contents. It is generally recommended that web pages contain only one H1 tag. Multiple H1 tags can, however, be used as long as they are not overused, and are contextually relevant to the structure of the page. H1 is considered the main heading or title of an article, document, or section. It may be detrimental to the website’s SEO performance if H1 elements are not used properly.

While these tags do produce a distinct visual effect (STRONG makes the text bold, EM italicizes it, and SMALL shrinks it), that is not their primary purpose, and they should not be used simply to style a piece of content in a specific way. They are semantic tags:

  • <em> tag: Used to mark up the emphasized text.
  • <strong> tag: Often used in titles, headings, and paragraphs to emphasize the word or phrase of greatest significance. Also, it can be used to emphasize the seriousness or importance of a word or phrase.
  • <small> tag: Typically used for disclaimers, side comments, clarifications, etc.

UX case studies are an essential component of a stellar UX portfolio. Yet, writing UX case studies can often feel daunting. Developing an effective UX case study entails telling the story of a project and communicating not only what you accomplished but also the reasons for it. The following is an outline of what to include in a UX case study:

  • Overview: A short description of the company and project.
  • Problem Statement: State your objectives, such as why you worked on this project and what its objectives were.
  • Client and Audience: Describe the intended niche for the product or service. This may include who your target audience was when creating the product or service and precisely whom it was designed for.
  • Roles and Responsibilities: Provide information about your team, including how your responsibilities were shared. Was your team headed by you, or did you have several experts on hand?
  • Scope and Constraints: Include details regarding your working conditions and your limitations. It may involve tight deadlines, a low budget, or a time zone difference.
  • Work Process: This stage is crucial to your UX story. It should outline in detail the steps you took as well as why you did it (e.g., to solve user pain points or to increase conversion).
  • Results and conclusions: You have come to the end of your tale. Describe the results of your efforts, your achievements, the lessons you learned, and the experience you gained.

HTML elements and HTML tags differ in the following ways:

HTML Tags HTML Elements
Tags indicate the start and end of an HTML element, i.e., the container that holds other HTML elements (except <!DOCTYPE>). All HTML elements must be enclosed between the start tag and closing tag.
Each tag begins with a ‘<‘ and ends with a ‘>’ symbol. The basic structure of an HTML element consists of the start tag (<tagname>), the close tag (</tagname>) and the content inserted between these tags.
Any text inside ‘<’ and ‘>’ is called a tag. An HTML element is anything written in between HTML tags.
Example:<p> </p> Example:<p>Scaler Academy</p>
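The tag-versus-content distinction can be seen programmatically with Python’s built-in html.parser, which reports start tags, end tags, and the content between them as separate events (a small sketch, not part of any HTML specification):

```python
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    """Records start tags, end tags, and the content between them."""
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))   # e.g. the <p> tag

    def handle_data(self, data):
        self.events.append(("data", data))   # the element's content

    def handle_endtag(self, tag):
        self.events.append(("end", tag))     # e.g. the </p> tag

parser = TagLogger()
parser.feed("<p>Scaler Academy</p>")
parser.close()
print(parser.events)
```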

jQuery simplifies the process of implementing JavaScript on a website. With jQuery, most common tasks that require a lot of JavaScript code are wrapped into methods that can be invoked with just one line of code. Additionally, jQuery simplifies several complex aspects of JavaScript, such as DOM manipulation and AJAX calls. jQuery is a lightweight JavaScript library that strives to do more with less coding. Following are some features of the jQuery library:

  • HTML/DOM manipulation
  • HTML event methods
  • CSS manipulation
  • Effects and animations
  • AJAX
  • Utilities

Here are a few popular jQuery functions used in web design:

  • Animated hover effect
  • Simple slide panel
  • Accordion#1 and Accordion#2
  • Entire block clickable
  • Styling different link types
  • Collapsible panels
  • Chainable transition effect
  • Simple disappearing effect
  • Image replacement gallery

Both visibility and display are significant properties in CSS.

  1. Visibility property

In a web document, this property allows a user to specify whether an element is visible or not; hidden elements still consume space in the layout.


visibility: visible | hidden | collapse | initial | inherit;

Property Values: 

  • visible: It specifies that an element must be visible. This is the default value.
  • hidden: Indicates that the element is not visible, but it does affect the layout.
  • collapse: Hides an element when it appears on a row or cell in a table.
  • initial: The visibility property of the element is set to its default value.
  • inherit: The property has been inherited from its parent.
  2. Display property

The Display property specifies how components (such as hyperlinks, divs,  headings, etc.) will be displayed on the web page.


display: none | block | inline | inline-block;

Property Values:

  • none: The element is not displayed and takes up no space in the layout.
  • inline: This makes the element render or appear as an inline element. This is the default value.
  • block: This makes the element render or appear as a block-level element.
  • inline-block: The element is rendered as a block box within an inline box.


CSS, or Cascading Style Sheets, is a style sheet language. Essentially, it controls and oversees how elements should be portrayed on screen, on paper, in speech, or in any other media. CSS allows you to control the text color, font style, spacing between paragraphs, and columns’ size and arrangement. This allows you to quickly change the appearance of several web pages at once.


  • Has lightweight code.
  • Is easy to update and maintain.
  • Provides various formatting options.
  • Increased consistency in design.
  • Easy to present different styles to different audiences.
  • Greater accessibility.
  • Has great SEO benefits.
  • Faster download times because of its simplicity.

There are three ways to add CSS to HTML documents:

  • Inline: Inserting the style attribute inside an HTML element.


<h1 style="color:blue;">Scaler Academy</h1>

<p style="color:black;">Welcome to Scaler Family!</p>

  • Internal: Adding <style> element to the <head> section.


<!DOCTYPE html>
<html>
<head>
<style>
body {background-color: powderblue;}
h1 {
color: blue;
font-size: 200%;
}
p {
color: red;
font-size: 140%;
border: 2px solid black;
margin: 20px;
}
</style>
</head>
<body>
<h1>Scaler Academy</h1>
<p>Welcome to Scaler Family!</p>
</body>
</html>


  • External: Linking to an external CSS file using the <link> element.


<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="styles.css">
</head>
<body>
<h1>Scaler Academy</h1>
<p>Welcome to Scaler Family!</p>
</body>
</html>

In cases where an image contains words, it is recommended that the image be saved as a GIF instead of a JPG. JPG’s lossy compression creates artifacts around sharp edges, which can make the fonts unreadable.


Different image compression formats use different compression techniques for different purposes.

  • JPEG: The JPEG compression process reduces the size of an image by identifying similarly colored areas; the higher the compression level, the more aggressively the process searches for these areas, which can lead to loss of visual information and artifacts at the edges of the compressed region. Colorful photos, illustrations, gradients, and other richly colored images benefit from this compression. It is particularly ill-suited for screenshots, flat icons, simple UI elements, and schematics.
  • PNG: The PNG compression method reduces the number of colors in the image. It may result in a slight loss of color shades depending on the compression level. PNGs are excellent for UI elements, icons, signs, logos, illustrations that contain text, and screenshots. Unlike JPEGs, they also support transparent areas. In general, PNG files are bigger than JPEG files and do not offer good compression for photos and complex, colored images and gradients.