Software testing involves evaluating and verifying a software product’s functionality. Basically, it checks whether the software product matches anticipated requirements and makes sure it is defect-free. It can be said that testing enhances the quality of the product by preventing bugs, reducing development costs, and reducing performance issues.
You can test the software in many different ways. Some types of testing are conducted by software developers and some by specialized quality assurance staff. Here are a few different kinds of software testing, along with a brief description of each.
| Type | Description |
| --- | --- |
| Unit Testing | A programmatic test of the internal working of a unit of code, such as a method or a function. |
| Integration Testing | Ensures that multiple components of the system work as expected when they are combined to produce a result. |
| Regression Testing | Ensures that existing features that used to work are not broken by new code changes. |
| System Testing | Complete end-to-end testing of the finished software to make sure the whole system works as expected. |
| Smoke Testing | A quick test to ensure the software works at the most basic level and doesn't crash when started. The name comes from hardware testing, where you plug in the device and see if smoke comes out. |
| Performance Testing | Ensures that the software performs to the user's expectations by checking response time and throughput under a specific load and environment. |
| User-Acceptance Testing | Ensures the software meets the clients' or users' requirements. This is typically the last step before the software goes live, i.e. to production. |
| Stress Testing | Ensures that performance doesn't degrade as load increases. The tester subjects the software to heavy loads, such as a high number of requests or stringent memory conditions, to verify that it holds up. |
| Usability Testing | Measures how usable the software is. Typically performed with a sample set of end users, who use the software and give feedback on how easy or complicated it is to use. |
| Security Testing | Now more important than ever. Security testing tries to break the software's security checks to gain access to confidential data. It is crucial for web-based applications and any application that involves money. |
Software testing is governed by seven widely cited principles:

1. Testing shows the presence of defects, not their absence.
2. Exhaustive testing is impossible.
3. Early testing saves time and money.
4. Defects cluster together.
5. The pesticide paradox: repeating the same tests eventually stops finding new bugs.
6. Testing is context dependent.
7. The absence-of-errors fallacy: a defect-free product can still fail to meet user needs.
The dictionary definition of regression is the act of going back to a previous place or state. In software, regression means that a feature that used to work suddenly stops working after a developer adds new code or functionality to the software.
Regression problems are pervasive in the software industry, as new features are getting added all the time. Developers don’t build these features in isolation, separate from the existing code. Instead, the new code interacts with the legacy code and modifies it in various ways, introducing side effects, whether intended or not.
As a result, there is always a chance that introducing new changes may negatively impact a working feature. It’s important to keep in mind that even a small change has the potential to cause regression.
Regression testing helps ensure that the new code or modifications to the existing code don’t break the present behaviour. It allows the tester to verify that the new code plays well with the legacy code.
Imagine a tourist in a foreign city. There are two ways in which they can explore the city.
With the first approach, the tourist follows a predetermined plan and executes it. Though they may visit famous spots, they might miss out on hidden, more exciting places in the city. With the second approach, the tourist wanders around the city and might encounter strange and exotic places that the itinerary would have missed.
Both approaches have their pros and cons.
A tester is similar to a tourist when they are testing software. They can follow a strict set of test cases and test the software according to them, with the provided inputs and outputs, or they can explore the software.
When a tester doesn’t use the test scripts or a predefined test plan and randomly tests the software, it is called exploratory testing. As the name suggests, the tester is exploring the software as an end-user would. It’s a form of black-box testing.
In exploratory testing, the tester interacts with the software in whatever manner they want and follows the software’s instructions to navigate various paths and functionality. They don’t have a strict plan at hand.
Exploratory testing primarily focuses on behavioural testing. It is effective for getting familiar with new software features. It also provides a high-level overview of the system that helps evaluate and quickly learn the software.
Though it seems random, exploratory testing can be powerful in an experienced and skilled tester’s hands. As it’s performed without any preconceived notions of what software should and shouldn’t do, it allows greater flexibility for the tester to discover hidden paths and problems along those paths.
End-to-end testing is the process of testing a software system from start to finish. The tester tests the software just like an end-user would. For example, to test desktop software, the tester would install the software as the user would, open it, use the application as intended, and verify the behavior. The same applies to a web application.
There is an important difference between end-to-end testing vs. other forms of testing that are more isolated, such as unit testing. In end-to-end testing, the software is tested along with all its dependencies and integrations, such as databases, networks, file systems, and other external services.
Unit testing is the process of testing a single unit of code in an isolated manner. The unit of code can be a method, a class, or a module. Unit testing aims to focus on the smallest building blocks of code to get confidence to combine them later to produce fully functioning software.
A unit test invokes the code and verifies the result with the expected result. If the expected and actual outcomes match, then the unit test passes. Otherwise, it fails.
A good unit test has the following characteristics (often summarized as FIRST):

- Fast: it runs in milliseconds, so it can run often.
- Isolated: it depends on no other tests, network, database, or file system.
- Repeatable: it produces the same result on every run.
- Self-validating: it passes or fails without manual inspection.
- Timely: it is written close to (or before) the code it tests.
API stands for Application Programming Interface. It is a means of communication between two software components. An API abstracts the internal workings and complexity of a software program and allows the user of that API to solely focus on the inputs and outputs required to use it.
When building software, developers rarely write software from scratch and make use of other third-party libraries. An API allows two software components to talk to each other by providing an interface that they can understand.
Another use of an API is to provide data required by an application. Let’s say you are building a weather application that displays the temperature. Instead of building the technology to collect the temperature yourself, you’d access the API provided by the meteorological institute.
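To make the weather example concrete, here is a small sketch of a client for a hypothetical weather API; the endpoint URL and the `city` parameter are illustrative assumptions, not a real service:

```javascript
// Hypothetical weather API client. The base URL and parameter names are
// made up for illustration only.
const BASE_URL = "https://api.example-weather.org/v1/current";

// The API abstracts away how the temperature is measured; the caller only
// deals with inputs (a city name) and outputs (the response).
function buildWeatherUrl(city) {
  return `${BASE_URL}?city=${encodeURIComponent(city)}`;
}

console.log(buildWeatherUrl("Pune"));
// A real client would then call fetch(url) and read something like
// response.json() to get the temperature field the API documents.
```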
A test environment consists of a server or computer on which a tester runs their tests. It is different from a development machine and tries to represent the actual hardware on which the software will run once it is in production.
Whenever a new build of the software is released, the tester updates the test environment with the latest build and runs the regression tests suite. Once it passes, the tester moves on to testing new functionality.
When software is being tested, the code coverage measures how much of the program’s source code is covered by the test plan. Code coverage testing runs in parallel with actual product testing. Using the code coverage tool, you can monitor the execution of statements in your source code. A complete report of the pending statements, along with the coverage percentage, is provided at the end of the final testing.
Among the different types of test coverage techniques are:

- Statement coverage
- Branch (decision) coverage
- Condition coverage
- Function coverage
- Path coverage
It is extremely beneficial to use automation testing with the agile model of software development, as it helps achieve maximum test coverage within the limited time of a sprint.
Bugs and errors differ in the following ways:
| Bugs | Errors |
| --- | --- |
| A bug is a defect that occurs when the software or an application does not work as intended, typically because a coding mistake causes the program to malfunction. | An error is caused by a problem in the code itself, often because the developer misunderstood a requirement or the requirement was not defined correctly. |
| Bugs are submitted by testers. | Errors are raised by test engineers and developers. |
| Logic bugs, resource bugs, and algorithmic bugs are types of bugs. | Syntax errors, error-handling errors, user-interface errors, flow-control errors, calculation errors, and testing errors are types of errors. |
| A bug is detected before the software is deployed to production. | An error typically surfaces when the code fails to compile or run. |
A test plan is a dynamic document monitored and controlled by the testing manager. The success of a testing project depends heavily on a well-written test plan document that describes the scope and activities of software testing. It serves as a blueprint outlining the what, when, and how of the entire test process.

A test plan must include the following details:

- Test plan identifier and objectives
- Scope: features to be tested and features not to be tested
- Test strategy and approach
- Test environment and tools
- Schedule, roles, and responsibilities
- Entry and exit criteria
- Test deliverables, risks, and contingencies
A test report is a document that summarizes the testing objectives, activities, and results. It makes it possible to assess testing results quickly, helps decide whether the product is ready for release, and indicates the current status of the project and the quality of the product. A test report must include the following details:

- Project and test information
- Test objectives and scope
- Test summary (tests planned, executed, passed, and failed)
- Defects found, with their severity and status
Test deliverables, also known as test artifacts, are the documents, tools, and other components delivered to the stakeholders of a software project during the SDLC. They are developed and maintained in support of the testing effort. Each phase of the SDLC has different deliverables, as given below:

Before Testing Phase
- Test plan document
- Test case and test design specifications

During Testing Phase
- Test scripts and simulators
- Test data and traceability matrix
- Test execution logs

After Testing Phase
- Test results and test summary reports
- Defect reports
- Release notes
Different categories of debugging include:

- Brute-force debugging
- Backtracking
- Cause elimination
- Program slicing
- Fault tree analysis
A full-stack developer must be familiar with the following:

- Front-end languages and frameworks (HTML, CSS, JavaScript)
- Back-end languages and frameworks (e.g. Node.js, Java, Python)
- Databases (SQL and NoSQL)
- Version control systems such as Git
- APIs and web services
MVC and MVP both are architectural patterns that are used to develop applications.
MVC
MVC stands for Model View Controller. It is an architectural pattern widely used to develop Java Enterprise Applications. It splits an application into three logical components, i.e. Model, View, and Controller, separating the business-specific logic (the Model component) from the presentation layer (the View component).

The Model component contains the data and the logic related to it. The View component is responsible for displaying model objects inside the user interface. The Controller receives input and calls model objects based on handler mapping. It also passes model objects to views in order to display output inside the view layer.
MVP
MVP stands for Model View Presenter. It is derived from the MVC architectural pattern and adds an extra layer of indirection by splitting the Controller's role into a Presenter. The Presenter replaces the Controller, exists at the same level as the View, and contains the UI business logic for it. Invocations from the View are sent directly to the Presenter, which mediates the actions (events) between View and Model. The Presenter does not communicate with the View directly; it communicates through an interface.
The major difference between the MVC and MVP patterns is that in MVC the Controller does not pass data from the Model to the View; it only notifies the View to fetch the data from the Model itself.

In MVP, by contrast, the View and Model layers are fully decoupled: the Presenter itself receives the data from the Model and sends it to the View to display.
Another difference is that MVC is often used in web-frameworks while MVP is used in app development.
Pair programming, a fundamental agile software development technique, has two developers working together at the same machine. The developer who writes the code is called the driver, and the developer who reviews it (checking, proofreading, and spell-checking the code) is called the navigator. The technique is more efficient and reduces coding mistakes to a minimum. Its disadvantage is that it increases cost.
What is CORS in MVC and how does it work?
CORS stands for Cross-Origin Resource Sharing. It is a W3C standard and an HTTP-header-based mechanism. It lets a server indicate origins (domain, scheme, or port) other than its own from which a browser should permit loading resources. In other words, it enables one website to access the resources of another website using JavaScript.

It supports secure cross-origin requests and data transfers between browsers and servers. Modern browsers use CORS in APIs such as fetch and XMLHttpRequest. It is more flexible and safer than JSONP (JSON with Padding) and provides better web-service integration.

When enabling CORS in MVC, the same CORS service can be used, but not the same CORS middleware. CORS can be enabled for a particular action, for a particular controller, or globally for all controllers.

A pre-flight check (or request) is sent by the browser to the server hosting the cross-origin resource to determine whether the server will permit the actual request. For example, invoking the URL https://example.com from https://demo.com.
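The server's side of a pre-flight exchange can be sketched as follows. This is a framework-agnostic illustration: the request/response objects are plain stand-ins, and the allowed origin list is an assumption for the example.

```javascript
// Illustrative sketch of how a server might answer a CORS pre-flight
// (OPTIONS) request. `request` and the returned object are plain stand-ins,
// not a specific framework's API.
const ALLOWED_ORIGINS = ["https://demo.com"];

function handlePreflight(request) {
  const origin = request.headers["origin"];
  if (!ALLOWED_ORIGINS.includes(origin)) {
    // No CORS headers returned: the browser will block the real request.
    return { status: 403, headers: {} };
  }
  return {
    status: 204,
    headers: {
      "Access-Control-Allow-Origin": origin,
      "Access-Control-Allow-Methods": "GET, POST",
      "Access-Control-Allow-Headers": "Content-Type",
    },
  };
}

console.log(handlePreflight({ headers: { origin: "https://demo.com" } }).status); // 204
```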
We can use the following ways to optimize the scalability and efficiency of a website:
| S.N. | Basis of Comparison | GET | POST |
| --- | --- | --- | --- |
| 1 | Purpose | Designed for getting data from the server. | Designed for sending data to the server. |
| 2 | Mechanism | The request data is sent via the URL. | The request data is sent via the HTTP request body. |
| 3 | Parameter Passing | Parameters are transmitted as a query string appended to the URL. | Parameters are transmitted in the body of the request. |
| 4 | Default | The default method for links and forms, applied automatically. | Must be specified explicitly. |
| 5 | Capacity | Only a limited amount of data can be sent. | Large amounts of data can be sent. |
| 6 | Data Type | Always submits data as text. | Any type of data can be sent. |
| 7 | Semantics | GET is safe and idempotent; it should not change server state. | POST is neither safe nor idempotent; repeating it may have side effects. |
| 8 | Visibility of Data | Data is visible to the user in the URL. | Data is not visible to the user; it is in the message body. |
| 9 | Bookmarking and Caching | GET requests can be bookmarked and cached. | POST requests cannot be bookmarked or cached. |
| 10 | Efficiency | More efficient than POST. | Less efficient. |
| 11 | Example | Search is a typical example of a GET request. | Login is a typical example of a POST request. |
A program may have the property of referential transparency if any two expressions in the program that have the same value can be substituted for one another anywhere in the program without changing the result of the program. It is used in functional programming. For example, consider the following code snippet:
The variables count1 and count2 will be equal as long as fun(x) always returns the same value for the same input and has no side effects. If count1 is not equal to count2, referential transparency is violated.
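Since the original snippet was not included, here is a minimal sketch that contrasts a pure function with an impure one; `pureFun` and `impureFun` are illustrative names:

```javascript
// A referentially transparent (pure) function: same input, same output.
function pureFun(x) {
  return x * 2;
}

// NOT referentially transparent: the result depends on mutable external state.
let calls = 0;
function impureFun(x) {
  calls += 1;
  return x * 2 + calls;
}

const x = 5, y = 1;
const count1 = pureFun(x) + y;
const count2 = pureFun(x) + y;
console.log(count1 === count2); // true: pureFun(x) can be substituted freely

const count3 = impureFun(x) + y;
const count4 = impureFun(x) + y;
console.log(count3 === count4); // false: referential transparency is violated
```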
The term REST stands for Representational State Transfer. It is an architectural style that is used to create Web Services. It uses HTTP requests to access and use the data. We can create, update, read, and delete data.
An API (Application Programming Interface) for a website is code that allows two software programs to communicate with each other. It allows us to request services from an operating system or another application.
A promise is an object that can be returned synchronously from an asynchronous function. It may be in one of the following three states:

- Pending: the initial state, neither fulfilled nor rejected
- Fulfilled: the operation completed successfully
- Rejected: the operation failed
A promise will be settled if and only if it is not pending.
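A minimal sketch of the three states, using an illustrative `settleAfterCheck` helper: the promise starts out pending and then settles as either fulfilled or rejected.

```javascript
// The promise is pending until resolve() or reject() is called.
function settleAfterCheck(shouldSucceed) {
  return new Promise((resolve, reject) => {
    if (shouldSucceed) {
      resolve("done");                 // settles as fulfilled
    } else {
      reject(new Error("failed"));     // settles as rejected
    }
  });
}

settleAfterCheck(true)
  .then(value => console.log("fulfilled with:", value))
  .catch(err => console.log("rejected with:", err.message));

settleAfterCheck(false)
  .then(value => console.log("fulfilled with:", value))
  .catch(err => console.log("rejected with:", err.message));
```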
There are the following ways to optimize the load time of a web application:

- Optimize and compress images
- Minify and bundle CSS and JavaScript
- Enable browser caching and server-side compression (e.g. gzip)
- Use a content delivery network (CDN)
- Reduce the number of HTTP requests and lazy-load non-critical content
CI/CD is a best practice for developing applications in which code changes are integrated and delivered frequently and rapidly. It is often referred to as the CI/CD pipeline and is widely used in DevOps and agile methodology.

Continuous integration is a development practice in which developers integrate their code into a shared repository several times a day. The goal of continuous integration is to establish an automated mechanism that builds, tests, and packages the application.

Continuous delivery starts where CI ends. It automatically delivers the application to the selected infrastructure, ensuring automated delivery of code whenever changes are made.
In software design, we use the following architectural design patterns:
Long polling is an effective method for creating a stable server connection without using the WebSocket or Server-Sent Events protocols. It operates on top of the conventional client-server request/response model.

In this method, the client sends a request and the server holds the connection open until it has new information to return. As soon as the server responds, the client immediately submits another request, so there is almost always an open request the server can answer as soon as data becomes available.
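The loop described above can be sketched as follows. `getUpdate` stands in for the long-held HTTP request; here it is a mock that resolves immediately, so the sketch runs without a server.

```javascript
// Long-polling loop: wait for the server's answer, handle it, re-subscribe.
async function longPoll(getUpdate, onData, rounds) {
  for (let i = 0; i < rounds; i++) {
    const data = await getUpdate(); // server answers only when data is ready
    onData(data);                   // handle it, then immediately ask again
  }
}

// Mock "server": resolves with a numbered update each time.
let n = 0;
const mockGetUpdate = () => Promise.resolve(`update ${++n}`);

longPoll(mockGetUpdate, msg => console.log("received:", msg), 3);
```

In a real client, `getUpdate` would be a `fetch` call whose response the server delays until new data exists.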
In web design, the practice of using HTML elements to indicate what their content actually is, is known as semantic HTML or semantic markup.

Semantic HTML conveys meaning to the web page rather than just presentation. For example, the <p> tag indicates that a paragraph is enclosed in it. It is both semantic and presentational, because users know what paragraphs are and the browser knows how to display them. On the other hand, tags such as <b> and <i> are not semantic: they only describe how text should look and provide no additional meaning to the markup.

Examples of semantic HTML tags are the header tags <h1> to <h6>, <abbr>, <cite>, <code>, <blockquote>, and <em>. There are other semantic HTML tags as well that are used to build a standards-compliant website.
We should use semantic HTML for the following reasons:

- Accessibility: screen readers and other assistive technologies can interpret the page correctly.
- SEO: search engines give more weight to content in meaningful tags.
- Maintainability: the markup is easier to read and modify.
Null: null means a variable has been explicitly assigned the value null. Used with the typeof operator, it reports "object". JavaScript never assigns null automatically; a programmer assigns it deliberately to represent a variable that intentionally has no value.

Undefined: undefined means a variable has been declared but no value has been assigned to it; it is also what you get when the variable does not exist at all. Used with the typeof operator, it reports "undefined". Unlike null, undefined is not valid in JSON.
Let’s understand it through an example.
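The example code referenced here was not included in the original; a minimal reconstruction consistent with the surrounding discussion might be:

```javascript
var var1;          // declared but never assigned
var var2 = null;   // explicitly assigned null

console.log(var1, typeof var1); // undefined 'undefined'
console.log(var2, typeof var2); // null 'object'
console.log(var1 == var2);      // true  (loose equality treats both as empty)
console.log(var1 === var2);     // false (different types)
```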
Executing code that declares a variable var1 without assigning it, and assigns null to a variable var2, shows that the value of var1 is undefined and its type is "undefined", because no value was ever given to it, while var2 holds null and typeof reports "object". Since null is an assignment value, we can assign it to a variable deliberately. JavaScript treats null and undefined as loosely equal (null == undefined is true) because both represent an empty value, but they are not strictly equal.
Both, REST and GraphQL, are API design architectures that can be used to develop web services, especially for data-driven applications.
| GraphQL | REST |
| --- | --- |
| An API design architecture with a different, much more flexible approach. | A robust methodology and API design architecture used to implement web services. |
| Follows a client-driven architecture. | Follows a server-driven architecture. |
| Does not deal with dedicated resources. | Deals with dedicated resources. |
| Has a single endpoint that takes dynamic parameters. | Has multiple endpoints. |
| Provides stateless servers and structured access to resources. | Provides stateless servers and flexible, controlled access to resources. |
| Elastic in nature. | Comparatively rigid in nature. |
| Responses are returned only in JSON format. | Supports XML, JSON, HTML, YAML, and other formats. |
| The client defines the response data it needs via a query language. | Data is represented as resources over HTTP through URIs. |
| Provides synchronous and asynchronous communication over multiple protocols such as HTTP, MQTT, and AMQP. | Provides synchronous communication over HTTP only. |
| Its design is based on message exchange (queries and mutations). | Its design is based on HTTP (status codes, methods, and URIs). |
| Provides high consistency across all platforms. | High consistency across all platforms is difficult to achieve. |
| Development speed is fast. | Development speed is slower. |
In Java, a connection leak occurs when a developer forgets to close a JDBC connection. The most common kind of connection leak in Java development arises when using a connection pool (such as DBCP). It can be fixed by always closing connections and paying special attention to the error-handling code, for example by closing connections in a finally block or using try-with-resources.
A session is a conversational state between client and server, and it can consist of multiple requests and responses between them. Since both HTTP and web servers are stateless, the only way to maintain a session is to pass unique information about it (a session id) between server and client in every request and response. We can use the following methods to maintain a session:

- Cookies
- Hidden form fields
- URL rewriting
- The HttpSession API
| Servlet Context | Servlet Config |
| --- | --- |
| ServletContext represents the whole web application running on a particular JVM and is common to all servlets. | The ServletConfig object represents a single servlet. |
| It is like a global parameter associated with the whole application. | It is like a local parameter associated with a particular servlet. |
| It has application-wide scope, so it is defined outside the servlet tag in the web.xml file. | It is a set of name-value pairs defined inside the servlet section of web.xml, so it has servlet-wide scope. |
| The getServletContext() method is used to get the context object. | The getServletConfig() method is used to get the config object. |
| Information needed across the whole application, such as a file's MIME type, is stored via the servlet context object. | Configuration specific to a single servlet, such as its init parameters, belongs in the servlet config. |
RequestDispatcher is an interface used to forward a request to another resource, which can be HTML, a JSP, or another servlet in the same application. It can also be used to include the content of another resource in the response. The interface contains two methods: forward() and include().
Full Stack development involves developing both the front end and back end of a web application or website. The process covers three layers:

- The presentation layer (front end)
- The business logic layer (back end)
- The database layer
A Full Stack Web Developer is a person familiar with developing both client and server software. In addition to mastering HTML and CSS, they also know how to program browsers, databases, and servers.
To fully comprehend the role of a Full Stack developer, you must understand the two web development components: the front end and the back end.

The front end comprises the visible part of the application, with which the user interacts, while the back end includes the business logic.
Some of the popular tools used by full-stack developers to make development more accessible and efficient are:
A Full Stack developer should be familiar with:
As the name suggests, pair programming is where two programmers share a single workstation. Formally, the programmer at the keyboard, called the "driver", writes the code, while the other programmer, the "navigator", reviews each line of code as it is written, proofreading and spell-checking it. The programmers swap these roles every few minutes.
Cross-origin resource sharing (CORS) is a process that uses additional HTTP headers to tell browsers to allow a web application running at one origin to access resources from different origins. A web script can use CORS when it requests a resource that has a different origin (protocol, domain, or port) from its own.
Inversion of Control (IoC) is a broad term used by software developers for defining a pattern that is used for decoupling components and layers in the system. It is mostly used in the context of object-oriented programming. Control of objects or portions of a program is transferred to a framework or container with the help of Inversion of Control. It can be achieved using various mechanisms such as service locator pattern, strategy design pattern, factory pattern, and dependency injection.
Dependency Injection is a design pattern by which IoC is implemented. A container injects objects, connecting them with the objects they depend on, instead of the objects constructing their dependencies themselves. It involves three types of classes:

- Client class: the class that depends on the service.
- Service class: the class that provides the service to the client.
- Injector class: the class that creates the service object and injects it into the client.
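A minimal constructor-injection sketch; the `EmailService` and `Notifier` class names are illustrative:

```javascript
// Service class: provides a capability.
class EmailService {
  send(to, message) {
    return `sent "${message}" to ${to}`;
  }
}

// Client class: depends on a service, but does not construct it itself.
class Notifier {
  constructor(service) {
    this.service = service; // dependency is injected via the constructor
  }
  notify(user) {
    return this.service.send(user, "hello");
  }
}

// Injector role: wires the object graph together.
const notifier = new Notifier(new EmailService());
console.log(notifier.notify("alice@example.com"));
```

Because the dependency arrives from outside, the client can be tested with a fake service substituted for the real one.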
Continuous Integration (CI) is a practice in which developers integrate code into a shared repository regularly so that problems are detected early. The CI process uses automated tools to verify the new code's correctness before integration; automated builds and tests verify every check-in.
The main purpose of multithreading is to provide multiple threads of execution concurrently for maximum utilization of the CPU. It allows multiple threads to exist within the context of a process such that they execute individually but share their process resources.
This is typically a difficult question to answer, but a good developer will be able to go through it with ease. The core difference is that GraphQL doesn't deal with dedicated resources: the description of a particular resource is not coupled to the way you retrieve it. Everything is treated as a graph, is connected, and can be queried according to application needs.
There are quite a lot of possible ways to optimize your website for the best performance:
The purpose of the Observer pattern is to define a one-to-many dependency between objects, as when an object changes the state, then all its dependents are notified and updated automatically. The object that watches on the state of another object is called the observer, and the object that is being watched is called the subject.
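The one-to-many notification described above can be sketched as follows; `Subject`, `attach`, and `setState` are illustrative names for this example:

```javascript
// Observer pattern: the subject notifies all registered observers
// automatically whenever its state changes.
class Subject {
  constructor() {
    this.observers = [];
    this.state = 0;
  }
  attach(observer) {
    this.observers.push(observer);
  }
  setState(state) {
    this.state = state;
    this.observers.forEach(o => o.update(state)); // automatic notification
  }
}

const seen = [];
const observer = { update: state => seen.push(state) }; // the watcher

const subject = new Subject();   // the object being watched
subject.attach(observer);
subject.setState(42);
console.log(seen); // [ 42 ]
```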
A Full-Stack engineer is someone with a senior-level role with the experience of a Full-Stack developer, but with project management experience in system administration (configuring and managing computer networks and systems).
Polling is a method by which a client asks the server for new data frequently. Polling can be done in two ways: Long polling and Short Polling.
The following table compares the GET and POST:
| GET | POST |
| --- | --- |
| Used to request data from a specified resource. | Used to send data to a server to create or update a resource. |
| Can be bookmarked. | Cannot be bookmarked. |
| Can be cached. | Not cached. |
| Parameters remain in the browser history. | Parameters are not saved in the browser history. |
| Data is visible to everyone in the URL. | Data is not displayed in the URL. |
| Only ASCII characters are allowed. | Binary data is also allowed. |
If the data within the API is publicly accessible, then it's not possible to prevent data scraping completely. However, there is an effective solution that will deter most people and bots: rate limiting (throttling).

Throttling prevents a given device from making more than a defined number of requests within a defined time window. Upon exceeding that number, the server should respond with the HTTP error 429 Too Many Requests.
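A minimal fixed-window rate-limiter sketch, assuming an in-memory counter per client; the window size and limit are arbitrary example values:

```javascript
// Fixed-window rate limiting: at most LIMIT requests per client per window.
const WINDOW_MS = 60_000; // 1-minute window (example value)
const LIMIT = 3;          // max requests per window (example value)
const counters = new Map(); // clientId -> { windowStart, count }

function handleRequest(clientId, now) {
  const entry = counters.get(clientId);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(clientId, { windowStart: now, count: 1 }); // new window
    return 200;
  }
  entry.count += 1;
  return entry.count > LIMIT ? 429 : 200; // 429 Too Many Requests
}

console.log([1, 2, 3, 4].map(t => handleRequest("bot", t))); // [ 200, 200, 200, 429 ]
```

Production systems typically use a sliding window or token bucket and store counters in a shared cache such as Redis, but the idea is the same.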
Other possible measures that help prevent bots from scraping are:

- Requiring authentication (API keys) for API access
- CAPTCHAs for suspicious traffic
- Blocking known bot IP addresses and user agents
REST stands for Representational State Transfer. A RESTful API (also known as a REST API) is an architectural style for an application programming interface (API or web API) that uses HTTP requests to access and manipulate data. It uses the POST, GET, PUT, and DELETE methods, which correspond to creating, reading, updating, and deleting resources.
A callback in JavaScript is a function passed as an argument into another function, which is then invoked inside the outer function to complete some kind of action or routine. JavaScript callback functions can be used synchronously and asynchronously. Node.js APIs are written in such a way that they all support callbacks.
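A minimal sketch of both styles; the `greet` function and its message are illustrative:

```javascript
// A callback is a function passed into another function and invoked there.
function greet(name, callback) {
  const message = `Hello, ${name}`;
  return callback(message); // synchronous callback invocation
}

const result = greet("Pune", msg => msg.toUpperCase());
console.log(result); // HELLO, PUNE

// Asynchronous style, as used throughout Node.js APIs:
setTimeout(() => console.log("called back later"), 0);
```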
Data attributes are used to store custom data private to the page or application. They allow us to store extra data on standard, semantic HTML elements. The stored data can then be used in the page's JavaScript to create a more engaging user experience.

A data attribute consists of two parts:

- The attribute name, which must begin with the data- prefix (e.g. data-user-id)
- The attribute value, which can be any string
Some of the key skills required for a data analyst include:
Data analysis generally refers to the process of assembling, cleaning, interpreting, transforming, and modeling data to gain insights or conclusions and to generate reports that help businesses become more profitable. The process typically moves through collection, cleaning, exploration and analysis, interpretation, and reporting.
While analyzing data, a Data Analyst can encounter the following issues:
Data cleaning, also known as data cleansing or data scrubbing or wrangling, is basically a process of identifying and then modifying, replacing, or deleting the incorrect, incomplete, inaccurate, irrelevant, or missing portions of the data as the need arises. This fundamental element of data science ensures data is correct, consistent, and usable.
Some of the tools useful for data analysis include:
Data mining Process: It generally involves analyzing data to find relations that were not previously discovered. In this case, the emphasis is on finding unusual records, detecting dependencies, and analyzing clusters. It also involves analyzing large datasets to determine trends and patterns in them.
Data Profiling Process: It generally involves analyzing the data's individual attributes. In this case, the emphasis is on providing useful information about data attributes such as data type, frequency, etc. Additionally, it facilitates the discovery and evaluation of enterprise metadata.
| Data Mining | Data Profiling |
| --- | --- |
| Involves analyzing a pre-built database to identify patterns. | Involves analysis of raw data from existing datasets. |
| Analyzes existing databases and large datasets to convert raw data into useful information. | Collects statistical or informative summaries of the data. |
| Usually involves finding hidden patterns and seeking out new, useful, and non-trivial data to generate useful information. | Usually involves evaluating data sets for consistency, uniqueness, and logic. |
| Incapable of identifying inaccurate or incorrect data values. | Identifies erroneous data during the initial stage of analysis. |
| Classification, regression, clustering, summarization, estimation, and description are some primary data mining tasks. | Uses discoveries and analytical methods to gather statistics or summaries about the data. |
In the process of data validation, it is important to determine the accuracy of the information as well as the quality of the source. Datasets can be validated in many ways. Methods of data validation commonly used by Data Analysts include:
In a dataset, outliers are values that differ significantly from the rest of the data. An outlier can indicate either variability in the measurement or an experimental error. There are two kinds of outliers, univariate and multivariate. The graph depicted below shows there are four outliers in the dataset.
Outliers are detected using two methods:
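One widely used check is the standard-deviation rule: flag values that lie more than k standard deviations from the mean. A minimal sketch (the threshold k = 2 is a common convention, not taken from the text):

```python
# Flag values more than k standard deviations from the mean.
def outliers(values, k=2.0):
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if abs(v - mean) > k * std]

data = [10, 12, 11, 13, 12, 95]   # 95 is the obvious outlier
print(outliers(data))             # [95]
```

The other common approach is the box-plot (interquartile-range) method, which flags values outside 1.5 × IQR from the quartiles.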
Data Analysis: It generally involves extracting, cleansing, transforming, modeling, and visualizing data in order to obtain useful and important information that may contribute towards determining conclusions and deciding what to do next. Analyzing data has been in use since the 1960s.
Data Mining: In data mining, also known as knowledge discovery in databases, huge quantities of data are explored and analyzed to find patterns and rules. It has been a buzzword since the 1990s.
| Data Analysis | Data Mining |
| --- | --- |
| Analyzing data provides insight or tests hypotheses. | Hidden patterns are identified and discovered in large datasets. |
| It consists of collecting, preparing, and modeling data in order to extract meaning or insights. | This is considered one of the activities within data analysis. |
| Data-driven decisions can be taken using this approach. | Data usability is the main objective. |
| Data visualization is certainly required. | Visualization is generally not necessary. |
| It is an interdisciplinary field that requires knowledge of computer science, statistics, mathematics, and machine learning. | Databases, machine learning, and statistics are usually combined in this field. |
| The dataset can be large, medium, or small, and it can be structured, semi-structured, or unstructured. | Datasets are typically large and structured. |
A KNN (K-nearest neighbor) model is usually considered one of the most common techniques for imputation. It allows a point in multidimensional space to be matched with its closest k neighbors. By using the distance function, two attribute values are compared. Using this approach, the closest attribute values to the missing values are used to impute these missing values.
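The idea can be sketched in a few lines of plain Python (real pipelines typically use scikit-learn's `KNNImputer`). For each row with a missing value, the sketch finds the k rows closest in the remaining columns and imputes the mean of their values in the missing column:

```python
# Toy k-NN imputation: replace a missing value (None) with the mean of
# that column over the k rows nearest in the other columns.

def knn_impute(rows, k=2):
    filled = []
    for row in rows:
        if None not in row:
            filled.append(list(row))
            continue
        j = row.index(None)                       # column to impute
        donors = [r for r in rows if r[j] is not None]

        def dist(r):                              # distance over present columns
            return sum((a - b) ** 2 for i, (a, b) in enumerate(zip(row, r))
                       if i != j) ** 0.5

        nearest = sorted(donors, key=dist)[:k]
        row = list(row)
        row[j] = sum(r[j] for r in nearest) / k   # mean of the k neighbours
        filled.append(row)
    return filled

rows = [[1.0, 10.0], [1.2, 12.0], [5.0, 50.0], [1.1, None]]
print(knn_impute(rows))   # last row becomes [1.1, 11.0]
```

Here the two nearest neighbours of the incomplete row contribute values 10.0 and 12.0, so the missing entry is imputed as their mean, 11.0.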
Known as the bell curve or the Gauss distribution, the Normal Distribution plays a key role in statistics and is the basis of Machine Learning. It generally defines and measures how the values of a variable differ in their means and standard deviations, that is, how their values are distributed.
The above image illustrates how data usually tend to be distributed around a central value with no bias on either side. In addition, the random variables are distributed according to symmetrical bell-shaped curves.
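The bell curve can be reproduced directly from the textbook density formula; a minimal stdlib-only sketch, where mu is the mean and sigma the standard deviation:

```python
import math

# Normal (Gaussian) probability density function.
def normal_pdf(x, mu=0.0, sigma=1.0):
    coeff = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# The curve peaks at the mean and is symmetric around it:
print(round(normal_pdf(0), 4))            # 0.3989
print(normal_pdf(-1) == normal_pdf(1))    # True
```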
The term data visualization refers to a graphical representation of information and data. Data visualization tools enable users to easily see and understand trends, outliers, and patterns in data through the use of visual elements like charts, graphs, and maps. Data can be viewed and analyzed in a smarter way, and it can be converted into diagrams and charts with the use of this technology.
Data visualization has grown rapidly in popularity due to its ease of viewing and understanding complex data in the form of charts and graphs. In addition to providing data in a format that is easier to understand, it highlights trends and outliers. The best visualizations illuminate meaningful information while removing noise from data.
Several Python libraries that can be used for data analysis include:
Hash tables are usually defined as data structures that store data in an associative manner. In this, data is generally stored in array format, which allows each data value to have a unique index value. Using the hash technique, a hash table generates an index into an array of slots from which we can retrieve the desired value.
Hash table collisions are typically caused when two keys have the same index. Collisions, thus, result in a problem because two elements cannot share the same slot in an array. The following methods can be used to avoid such hash collisions:
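Separate chaining is one of those methods: each slot holds a list of key-value pairs, so two keys that hash to the same index can coexist. A minimal sketch (a deliberately tiny table size forces collisions):

```python
# A minimal hash table that resolves collisions by separate chaining.
class ChainedHashTable:
    def __init__(self, size=8):
        self.slots = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.slots)

    def put(self, key, value):
        bucket = self.slots[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # update an existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))       # colliding keys share the bucket

    def get(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

t = ChainedHashTable(size=2)              # 3 keys, 2 slots: collision guaranteed
t.put("a", 1); t.put("b", 2); t.put("c", 3)
print(t.get("b"))                         # 2
```

The other common family of methods is open addressing (linear or quadratic probing, double hashing), where a colliding key probes for the next free slot instead.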
An effective data model must possess the following characteristics in order to be considered good and developed:
The following are some disadvantages of data analysis:
Based on user behavioral data, collaborative filtering (CF) creates a recommendation system. By analyzing data from other users and their interactions with the system, it filters out information. This method assumes that people who agreed in their evaluation of particular items in the past will likely agree again in the future. Collaborative filtering has three major components: users, items, and interests.
Data Science is an interdisciplinary field that combines scientific processes, algorithms, tools, and machine learning techniques to find common patterns and gather meaningful insights from raw input data using statistical and mathematical analysis.
The following figure represents the life cycle of data science.
Data analysis cannot be performed on the whole volume of data at once, especially when it involves larger datasets. It becomes crucial to take data samples that represent the whole population and then perform the analysis on them. While doing this, it is necessary to carefully draw sample data that truly represents the entire dataset.
There are two major categories of sampling techniques, based on the use of statistics:
Differentiate between the long and wide format data.
| Long-Format Data | Wide-Format Data |
| --- | --- |
| Each row of the data represents one observation of a subject, so each subject's data spans multiple rows. | The repeated responses of a subject are stored in separate columns. |
| The data can be recognized by considering rows as groups. | The data can be recognized by considering columns as groups. |
| This data format is most commonly used in R analyses and for writing to log files after each trial. | This data format is rarely used in R analyses and is most common in stats packages for repeated-measures ANOVAs. |
A p-value is the probability of obtaining results at least as extreme as those observed, assuming that the null hypothesis is correct. It represents the probability that the observed difference occurred randomly by chance.
Resampling is a methodology used to sample data for improving accuracy and quantify the uncertainty of population parameters. It is done to ensure the model is good enough by training the model on different patterns of a dataset to ensure variations are handled. It is also done in the cases where models need to be validated using random subsets or when substituting labels on data points while performing tests.
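One standard resampling technique is the bootstrap: draw many samples with replacement from the data and look at the spread of the resampled statistic, which quantifies the uncertainty of the estimate. A minimal sketch (sample values and resample count are arbitrary):

```python
import random

# Bootstrap: resample with replacement and collect the resampled means.
def bootstrap_means(data, n_resamples=1000, seed=42):
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        sample = [rng.choice(data) for _ in data]   # same size, with replacement
        means.append(sum(sample) / len(sample))
    return means

data = [4, 8, 15, 16, 23, 42]
means = bootstrap_means(data)
print(min(means), max(means))   # spread of the bootstrap distribution
```

The standard deviation of `means` is the bootstrap estimate of the standard error of the mean.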
Data is said to be highly imbalanced if it is distributed unequally across different categories. Such datasets degrade model performance and lead to inaccurate results.
There are not many differences between these two, but it is to be noted that these are used in different contexts. The mean value generally refers to the probability distribution whereas the expected value is referred to in the contexts involving random variables.
This bias refers to the logical error while focusing on aspects that survived some process and overlooking those that did not work due to lack of prominence. This bias can lead to deriving wrong conclusions.
Confounding variables, also known as confounders, are a type of extraneous variable that influences both the independent and dependent variables, causing spurious associations between variables that are correlated but not causally related to each other.
Selection bias occurs when the researcher decides which participants to study and that selection is not random. It is also called the selection effect, and it results from the method of sample collection.
Four types of selection bias are explained below:
Let us first understand the meaning of bias and variance in detail:
Bias: It is a kind of error in a machine learning model when an ML Algorithm is oversimplified. When a model is trained, at that time it makes simplified assumptions so that it can easily understand the target function. Some algorithms that have low bias are Decision Trees, SVM, etc. On the other hand, logistic and linear regression algorithms are the ones with a high bias.
Variance: Variance is also a kind of error. It is introduced into an ML model when the algorithm is made highly complex. Such a model also learns noise from the training data set and therefore performs badly on the test data set. This can lead to overfitting as well as high sensitivity.
When the complexity of a model is increased, a reduction in error is seen, caused by the lower bias in the model. But this only continues until we reach a particular point, called the optimal point. Beyond this point, if we keep increasing the complexity of the model, it will be overfitted and will suffer from the problem of high variance.
Trade-off Of Bias And Variance: Since bias and variance are both errors in machine learning models, it is essential that any machine learning model has both low variance and low bias so that it can achieve good performance.
Let us see some examples. The K-Nearest Neighbor Algorithm is a good example of an algorithm with low bias and high variance. This trade-off can easily be reversed by increasing the k value which in turn results in increasing the number of neighbours. This, in turn, results in increasing the bias and reducing the variance.
Another example is the support vector machine. This algorithm also has high variance and low bias, and we can reverse the trade-off by adjusting the regularization parameter C: stronger regularization (a smaller C) increases the bias and decreases the variance.
So, the trade-off is simple. If we increase the bias, the variance will decrease and vice versa.
It is a matrix with 2 rows and 2 columns that summarizes the four possible outcomes of a binary classifier. It is used to derive measures such as specificity, error rate, accuracy, precision, sensitivity, and recall.
The test data set should contain both the observed (true) labels and the predicted labels. If the binary classifier performed perfectly, the predicted labels would match the observed labels exactly; in real-world scenarios they match only partially. The four outcomes in the confusion matrix mean the following:
The formulas for calculating the basic measures that come from the confusion matrix are:
In these formulas:
FP = false positive
FN = false negative
TP = true positive
TN = true negative
Also,
Sensitivity is the measure of the True Positive Rate. It is also called recall.
Specificity is the measure of the true negative rate.
Precision is the measure of a positive predicted value.
F-score is the harmonic mean of precision and recall.
Logistic Regression is also known as the logit model. It is a technique to predict the binary outcome from a linear combination of variables (called the predictor variables).
For example, let us say that we want to predict the outcome of elections for a particular political leader. So, we want to find out whether this leader is going to win the election or not. So, the result is binary i.e. win (1) or loss (0). However, the input is a combination of linear variables like the money spent on advertising, the past work done by the leader and the party, etc.
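The election example can be sketched as a hand-rolled logit model. The coefficients below are made up purely for illustration; in practice they are fitted from data (for example with scikit-learn's `LogisticRegression`):

```python
import math

# The logistic (sigmoid) function maps any real number to (0, 1).
def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Hypothetical fitted model: win probability from ad spend and past wins.
def win_probability(ad_spend_lakhs, past_terms_won):
    z = -3.0 + 0.04 * ad_spend_lakhs + 1.5 * past_terms_won  # made-up weights
    return sigmoid(z)

p = win_probability(ad_spend_lakhs=50, past_terms_won=2)
print(round(p, 3))                  # probability of the "win" outcome
print(1 if p >= 0.5 else 0)         # binary prediction: 1 = win, 0 = loss
```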
Linear regression is a technique in which the score of a variable Y is predicted using the score of a predictor variable X. Y is called the criterion variable. Some of the drawbacks of Linear Regression are as follows:
Classification is very important in machine learning, since we often need to know to which class an observation belongs. Hence, we have various classification algorithms, such as logistic regression, support vector machines, decision trees, and the Naive Bayes classifier. One classification technique near the top of the classification hierarchy is the random forest classifier.
To understand the random forest classifier and how it works, we first need to understand a decision tree. Let us say that we have a string as given below:
So, we have the string with 5 ones and 4 zeroes and we want to classify the characters of this string using their features. These features are colour (red or green in this case) and whether the observation (i.e. character) is underlined or not. Now, let us say that we are only interested in red and underlined observations. So, the decision tree would look something like this:
So, we started with the colour first as we are only interested in the red observations and we separated the red and the green-coloured characters. After that, the “No” branch i.e. the branch that had all the green coloured characters was not expanded further as we want only red-underlined characters. So, we expanded the “Yes” branch and we again got a “Yes” and a “No” branch based on the fact whether the characters were underlined or not.
So, this is how we draw a typical decision tree. However, the data in real life is not this clean but this was just to give an idea about the working of the decision trees. Let us now move to the random forest.
Random Forest
It consists of a large number of decision trees that operate as an ensemble. Basically, each tree in the forest gives a class prediction and the one with the maximum number of votes becomes the prediction of our model. For instance, in the example shown below, 4 decision trees predict 1, and 2 predict 0. Hence, prediction 1 will be considered.
The underlying principle of a random forest is that several weak learners combine to form a strong learner. The steps to build a random forest are as follows:
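The voting principle described earlier can be sketched with toy "trees": here each tree is just a hand-written rule standing in for a tree learned from a bootstrap sample, and the class with the most votes wins.

```python
from collections import Counter

# Three toy "trees" classifying the red/underlined example from above.
def tree1(x): return 1 if x["red"] else 0
def tree2(x): return 1 if x["underlined"] else 0
def tree3(x): return 1 if x["red"] and x["underlined"] else 0

def forest_predict(x, trees=(tree1, tree2, tree3)):
    votes = Counter(t(x) for t in trees)      # each tree casts one vote
    return votes.most_common(1)[0][0]         # majority class wins

sample = {"red": True, "underlined": False}
print(forest_predict(sample))   # 0: two of the three trees vote 0
```

In a real random forest (e.g. scikit-learn's `RandomForestClassifier`), each tree is trained on a bootstrap sample with a random subset of features, but the final majority vote works exactly like this.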
Let us say that Prob is the probability that we may see a minimum of one shooting star in 15 minutes.
So, Prob = 0.2
Now, the probability that we may not see any shooting star in the time duration of 15 minutes is = 1 – Prob
1-0.2 = 0.8
The probability that we may not see any shooting star for an hour is:
= (1 − Prob) × (1 − Prob) × (1 − Prob) × (1 − Prob)
= 0.8 × 0.8 × 0.8 × 0.8 = (0.8)⁴
= 0.4096 ≈ 0.41
So, the probability that we will see at least one shooting star in the time interval of an hour is = 1 − 0.41 = 0.59
So, there is approximately a 59% chance that we will see a shooting star in the span of an hour.
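The same calculation in code, assuming the four 15-minute intervals are independent:

```python
# P(at least one shooting star in an hour), given
# P(at least one in any 15-minute window) = 0.2.
p_15min = 0.2
p_none_hour = (1 - p_15min) ** 4   # no star in four independent 15-min blocks
p_hour = 1 - p_none_hour
print(round(p_hour, 4))            # 0.5904
```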
Deep learning is a paradigm of machine learning. In deep learning, multiple layers of processing are involved in order to extract high features from the data. The neural networks are designed in such a way that they try to simulate the human brain.
Deep learning has shown incredible performance in recent years, in part because of its analogy with the human brain.
The difference between machine learning and deep learning is that deep learning is a subfield of machine learning inspired by the structure and function of the human brain, implemented through artificial neural networks.
Gradient: The gradient measures how much the output of a function changes with respect to a small change in its input. In other words, it is a measure of the change in the weights with respect to the change in error. Mathematically, the gradient is the slope of a function.
Gradient Descent: Gradient descent is a minimization algorithm that minimizes the cost (loss) function. It can minimize any differentiable function given to it, but in machine learning it is usually applied to the loss function.
Gradient descent, as the name suggests, means a descent or decrease in something. The analogy often used for gradient descent is a person climbing down a hill or mountain. A single step of gradient descent can be written as:
b = a − γ ∇f(a)
So, if a person is climbing down the hill, "a" is the current position and "b" is the next position the climber moves to. The minus sign denotes minimization (gradient descent is a minimization algorithm). Gamma (γ) is a weighting factor known as the learning rate, and the gradient term ∇f(a) points in the direction of steepest ascent, so subtracting it moves in the direction of steepest descent.
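A minimal gradient-descent sketch: minimizing f(x) = (x − 3)², whose gradient is 2(x − 3), where gamma is the learning rate. The function and starting point are chosen purely for illustration.

```python
# Repeatedly step against the gradient: x <- x - gamma * grad(x).
def gradient_descent(grad, start, gamma=0.1, steps=100):
    x = start
    for _ in range(steps):
        x = x - gamma * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), start=0.0)
print(round(x_min, 6))             # converges to the minimum at x = 3
```

Too large a gamma overshoots the minimum and can diverge; too small a gamma converges very slowly, which is why the learning rate is a key hyperparameter.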
Marketing professionals can use any form of marketing that uses electronic devices or a digital mode to deliver promotional messaging and track its effectiveness throughout the consumer journey. In practice, digital marketing refers to advertising that appears on a computer, phone, tablet, or another type of electronic device. Online video, display ads, SEO, paid social ads, and social media posts are just a few examples. Digital marketing is usually contrasted with traditional marketing strategies, including magazine ads, billboards, and direct mail.
Types of Digital Marketing formats are as follows:
This is the most frequently asked question in a digital marketing interview. In recent years, digital marketing has demonstrated immense power, and here are some of the most compelling reasons:
In Digital Marketing, there are a variety of techniques that can be utilized to achieve a specific aim. Here are a few examples:
Google Analytics is a free analytics platform from Google that lets you track the performance of your website, app, videos, and social media presence. You can also calculate the ROI of your advertisements from here.
Ahrefs.com is an excellent tool for backlinks and SEO analysis.
Mailchimp is an email marketing tool that lets you manage and communicate with clients, consumers, and other interested parties from one central location.
The Google Keyword Planner is a tool that can assist you in determining keywords for your Search Network campaigns. It’s a free app that allows you to locate keywords relevant to your business and see how many monthly searches they generate and how much it costs to target them.
Kissmetrics is a comprehensive online analytics software that provides critical website insights and customer engagement.
Keyword Discovery provides you with access to the world's most comprehensive keyword index, compiled from all search engines. It gives you access to the search keywords consumers use for goods and services, as well as the search terms that direct users to your competitors' websites.
Semrush is a comprehensive toolkit for gaining web visibility and learning about marketing. SEMrush tools and reports will benefit marketers in SEO, PPC, SMM, Keyword Research, Competitive Research, and content marketing services.
Buffer is a small-to-medium-sized business social media management application that allows users to create content, communicate with customers, and track their social media success. Buffer integrates social media networks such as Facebook, Instagram, and Twitter.
AdEspresso is a simple and intuitive Facebook ad management and optimization tool.
Digital marketing can be categorized into Inbound Marketing and Outbound Marketing.
Customer – The person who receives the message.
Content – The message that the customer sees is referred to as content.
Context – The setting in which the customer sees the message.
Conversation – This is when you and your consumer have a conversation.
Dofollow: These links allow search engine crawlers to follow a link to give it a boost from search engine result pages and to pass on the link juice (reputation passed on to another website) to the destination site.
Eg: <a href="http://www.XYZ.com/">XYZ</a>
Nofollow: These links don't allow search engine crawlers to follow the link. They also don't pass on any link juice to the destination domain.
Eg: <a href="http://www.XYZ.com/" rel="nofollow">XYZ</a>
The 301 redirect tells the user and search engine bots that the original web page has permanently moved to another location. 302, on the other hand, only serves as a temporary redirect and does not pass on the link juice to the new location.
Backlinks are external links to your website that can help with:
Google uses content that’s mobile-friendly for indexing and ranking websites. If your website has a responsive design, Google will rank it higher up on search engine results. It bases this ranking on how well your website performs on mobile devices.
Some of the ways to avoid this are:
Short tail keywords work best when the objective is to drive many visitors to your website. On the other hand, long-tail keywords are usually used for targeted pages like product pages and articles.
Next up in this digital marketing interview questions article, let’s cover questions related to Search Engine Marketing.
Some of the most effective and proven ways to increase traffic to your website include:
From a bird's-eye view, on-page SEO refers to the factors and techniques specifically focused on optimizing aspects of your site that are under your control. On the other hand, off-page SEO refers to the optimization factors and strategies focused on promoting your site or brand.
AMP, i.e., Accelerated Mobile Pages, allows you to create mobile pages that load almost instantly, helping visitors engage with the page instead of bouncing off.
.NET is one of the platforms provided by Microsoft, which is used to build a variety of applications using Windows.
The key uses of this framework are the ability to develop classes, libraries, and APIs, and to run, debug, and deploy code onto web services and applications. It supports many languages; you can work with everything from C# to VB all the way to Perl, and more.
The .NET framework is built around the object-oriented programming model.
There are a lot of components that make up the .NET framework, and some of them are as follows:
JIT is the abbreviation of Just In Time. It is a compiler that converts intermediate code into native code.
In .NET, source code is first compiled into intermediate byte code. During execution, the JIT compiler converts this byte code into native machine code, which is then processed by the CPU.
MSIL is the abbreviation for Microsoft Intermediate Language. It is used to provide the instructions required for operations such as memory handling, exception handling, and more. It can also provide instructions to initialize and store values and methods easily.
The next .NET interview question involves an important concept.
The acronym CTS stands for Common Type System, a predefined set of rules dictating how data types must be defined for the values a user provides. Its purpose is to describe all of the data types used within a user's application.
Users may also construct custom types following the rules established by the CTS. This feature exists primarily to accommodate the diverse range of programming languages that can be used within the .NET framework.
CLR stands for Common Language Runtime. It is the most vital component of .NET, as it provides the foundation on which applications run.
If a user writes an application in C#, it gets compiled and converted to intermediate code. After this, CLR takes up the code and works on it with respect to the following aspects:
| Managed Code | Unmanaged Code |
| --- | --- |
| Managed by the CLR | Not managed by any entity |
| Garbage collection is used to manage memory | The runtime environment takes care of memory management |
| The .NET framework is necessary for execution | Not dependent on the .NET framework to run |
There are four main steps involved in the execution of managed code. They are as follows:
State management, as the name suggests, is used to constantly monitor and maintain the state of objects in the runtime. A web page or a controller is considered to be an object.
There are two types of state management:
| Object | Class |
| --- | --- |
| An instance of a class | The template for creating an object |
| A class becomes an object after instantiation | The basic scaffolding of an object |
| Used to access properties from a class | The description of methods and properties |
| System.StringBuilder | System.String |
| --- | --- |
| Mutable | Immutable |
| Supports appending via the Append method | Cannot be appended in place; every modification creates a new string |
LINQ is the abbreviated form of Language Integrated Query. It was first brought out in 2008, and it provides users with a lot of extra features when working with the .NET framework. One highlight is that it allows the users to manipulate data without any dependency on its source.
An assembly is simply a collection of logical units: the entities required to build an application and later deploy it using the .NET framework. It can be considered a collection of executables and DLL files.
There are four main components of an assembly. They are as follows:
Caching means temporarily storing data in memory so that an application can access it quickly rather than looking it up on a hard drive. This speeds up execution considerably and helps massively in terms of performance.
There are three types of caching:
The next .NET interview question we will check involves an important concept.
| Function | Stored Procedure |
| --- | --- |
| Can return only one value | Can return any number of values |
| No support for exception handling using try-catch blocks | Supports try-catch blocks for exception handling |
| Accepts only input parameters | Accepts both input and output parameters |
| A function can be called from a stored procedure | A stored procedure cannot be called from a function |
There are five main types of constructor classes in C# as listed below:
There are numerous advantages of making use of a session, as mentioned below:
Manifest is mainly used to store the metadata of the assembly. It contains a variety of metadata which is required for many things as given below:
Memory-mapped files in .NET are used to instantiate the contents of a logical file into the address of the application. This helps a user run processes simultaneously on a single machine and have the data shared between processes.
The MemoryMappedFile.CreateFromFile() method is used to obtain a memory-mapped file object easily.
CAS stands for Code Access Security. It is vital in preventing unauthorized access to programs and resources at runtime. It can grant the code limited access to perform only certain operations, rather than granting everything at a given point in time. CAS is part of the native .NET security architecture.
In simple terms, web design refers to the process of creating the visual layout and look of a website so that they resonate with a company’s brand, convey information, and support user-friendliness. Designing a website comprises many components that work together to create the overall user experience including graphic design, UX/UI design, SEO (Search Engine Optimization), and content creation. Whether you’re building a website, mobile application, or maintaining content on a website, appearance and design are crucial elements. Since the mid-2010s, web design has increasingly been geared towards designing for mobile devices and tablet browsers.
The following are some common languages used in web design. Among them, the most fundamental is HTML, which provides a solid foundation for designing websites and web applications. In fact, it is probably the first language taught to a web designer, making it an essential tool for all designers to have.
The responsive web design process involves creating a web page that “responds” to or resizes itself to fit whatever screen size the viewer is using. This involves the use of HTML and CSS to resize, shrink, hide, and enlarge a website so that it appears correctly on all device resolutions (desktops, laptops, tablets, and phones). With responsive design, you can have one website that adapts to different screens and devices according to their sizes.
Background images provide a visually appealing and interactive background to a website. These images can be applied in many ways.
Syntax
<body background="Image URL or path">
Website Body
</body>
Example
<!DOCTYPE html>
<html>
<head>
<title>Scaler Academy</title>
</head>
<body background="scaler.png">
<h1>Welcome to Scaler Family</h1>
<p><a href="https://www.scaler.com">Scaler.com</a></p>
</body>
</html>
<style>
body {
background-image: url("URL of the image");
}
</style>
Example
<!DOCTYPE html>
<html>
<head>
<style>
body {
background-image: url("scaler1.jpg");
}
</style>
</head>
<body>
<h1>Scaler Academy</h1>
</body>
</html>
By using a color like Red, you can make the “Delete” button more visible, especially if two buttons have to be displayed simultaneously. Almost always, red is used as a symbol of caution, so that is a great way to get the user’s attention.
In graphic design, grid systems provide a two-dimensional framework for aligning and laying out elements. This is a set of measurements that the designer can use to size and align objects within a certain format. It is composed of horizontal and vertical lines that intersect, allowing the contents to be arranged on the page. A grid can facilitate the positioning of multiple design elements on a website in a way that is visually appealing, facilitates a user’s flow, and improves accessibility and the attractiveness of information and visuals. A grid can also be broken down into several subtypes, each with its own specific application in web design.
Although an error message may seem like a small detail, it has a big impact on the user's experience. Poorly written error messages frustrate users, while well-written ones increase users' subjective satisfaction and speed. One should consider the following factors when writing an error message:
A few of the most common website design problems include:
Information architecture (IA) refers to the process of planning, organizing, structuring, and labeling content in a comprehensive, logical, and sustainable manner. It serves as a means of structuring and classifying content in a way that is clear and understandable, so users can find what they are looking for with little effort. IA may also be used to redesign an existing product, rather than being limited to new products. Essentially, it determines how users will access your website information as well as how well their user experience will be.
The World Wide Web Consortium (W3C) is an international organization that promotes web development. Its member organizations, full-time staff, invited experts from around the world, and the public collaborate to create Web Standards. W3C develops standards for the World Wide Web (WWW) to facilitate interoperability and collaboration among all web stakeholders. Since its inception in 1994, W3C has endeavored to lead the web towards its full potential.
White space, also referred to as empty space or negative space in web design, refers to the unused space that surrounds the content and functional elements of a web page. With white space, you make your design breathe by minimizing the amount of text and functionality users see at once.
When you embed a video on a website using HTML5 video (instead of using YouTube or another video-hosting service), the website must ensure that the video is served in a format the browser can play. The MP4 video format (using MPEG-4 or H.264 compression) is currently supported by all major browsers, operating systems, and devices. For older Firefox clients and some Android devices that cannot play MP4 videos, it is helpful to provide copies of the video in the OGV and WebM formats as well.
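For instance, a video element can list several sources and let the browser pick the first format it can play; the file names here are placeholders:

```html
<!-- The browser tries each source in order and plays the first one it supports -->
<video controls width="640">
  <source src="lecture.mp4" type="video/mp4">
  <source src="lecture.webm" type="video/webm">
  <source src="lecture.ogv" type="video/ogg">
  Your browser does not support HTML5 video.
</video>
```

The plain-text fallback at the end is shown only by browsers that do not recognize the video element at all.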
A website’s H1 (title) tag is of great importance to search engines and other machines that read the code of a web page and interpret its contents. H1 is considered the main heading or title of an article, document, or section. Although HTML imposes no upper or lower limit on the number of H1 tags a page may contain, it is generally recommended that web pages contain only one. Multiple H1 tags can be used, provided they are not overused and are contextually relevant to the structure of the page; improper use of H1 elements may be detrimental to the website’s SEO performance.
While these tags each produce a distinctive visual effect (STRONG makes the text bold, EM italicizes it, and SMALL shrinks it), that is not their primary purpose, and they should not be used simply to style a piece of content in a specific way. They are semantic tags: STRONG marks text of strong importance, EM marks stressed emphasis, and SMALL marks side comments such as fine print. Purely visual styling should be done with CSS instead.
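A short sketch of the intended, semantic use of these tags, contrasted with purely visual styling:

```html
<!-- Semantic: the markup conveys meaning, and the visual effect follows -->
<p><strong>Warning:</strong> unsaved changes will be lost.</p>
<p>She <em>really</em> meant it.</p>
<p><small>Terms and conditions apply.</small></p>

<!-- Purely decorative bolding carries no meaning, so use CSS instead -->
<p><span style="font-weight: bold;">Decoratively bold text</span></p>
```

Screen readers and search engines can use the semantic tags to interpret the content; the CSS-styled span changes only the appearance.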
UX case studies are an essential component of a stellar UX portfolio. Yet, writing UX case studies can often feel daunting. Developing an effective UX case study entails telling the story of a project and communicating not only what you accomplished but also the reasons for it. The following is an outline of what to include in an UX case study:
HTML elements and HTML tags differ in the following ways:
HTML Tags | HTML Elements |
Tags indicate the start and end of an HTML element, i.e., the container that holds the content (every element except <!DOCTYPE>). | All HTML elements must be enclosed between a start tag and a closing tag. |
Each tag begins with a ‘<’ and ends with a ‘>’ symbol. | The basic structure of an HTML element consists of the start tag (<tagname>), the close tag (</tagname>), and the content inserted between these tags. |
Any text inside ‘<’ and ‘>’ is called a tag. | An HTML element is anything written in between HTML tags. |
Example: <p> </p> | Example: <p>Scaler Academy</p> |
jQuery is a lightweight JavaScript library built around the idea of doing more with less code. It simplifies the process of implementing JavaScript on a website: most common tasks that would require many lines of JavaScript are wrapped into methods that can be invoked with a single line of code. Additionally, jQuery simplifies several complex aspects of JavaScript, such as DOM manipulation and AJAX calls. Following are some features of the jQuery library:
Here are a few popular jQuery functions used in web design:
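As a minimal sketch of how these calls look in practice (the element IDs and the AJAX URL are hypothetical):

```html
<script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>
<script>
  // Run once the DOM is ready
  $(function () {
    $("#banner").hide();                 // hide an element
    $(".menu-item").addClass("active");  // manipulate classes
    $("#banner").fadeIn(400);            // a simple built-in animation
    // AJAX call; "/api/posts" is a placeholder endpoint
    $.get("/api/posts", function (data) {
      $("#posts").text(data.length + " posts loaded");
    });
  });
</script>
```

Each of these one-liners would take noticeably more plain JavaScript to write by hand, which is the core appeal of the library.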
Both visibility and display are significant properties in CSS.
In a web document, this property specifies whether an element is visible or not. An element hidden with visibility still takes up its space in the layout.
Syntax:
visibility: visible | hidden | collapse | inherit | initial;
Property Values:
The display property specifies how elements (such as hyperlinks, divs, headings, etc.) will be displayed on the web page. An element with display: none is removed from the layout entirely and consumes no space.
Syntax:
display: none | block | inline | inline-block;
Property Values:
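The practical difference between the two properties can be seen side by side; with visibility: hidden the element’s box still occupies space, while display: none removes it from the layout:

```html
<p>Before</p>
<p style="visibility: hidden;">Invisible, but still takes up a line</p>
<p>After (a blank line appears above)</p>

<p>Before</p>
<p style="display: none;">Removed from the layout entirely</p>
<p>After (no gap appears above)</p>
```

In the first group the hidden paragraph leaves an empty line between “Before” and “After”; in the second group the two visible paragraphs sit directly next to each other.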
CSS, or Cascading Style Sheets, is a style sheet language. Essentially, it controls how elements are portrayed on screen, on paper, in speech, or in any other media. CSS allows you to control the text color, font style, spacing between paragraphs, and the size and arrangement of columns. This allows you to quickly change the appearance of several web pages at once.
Advantages:
There are three ways to add CSS to HTML documents:
Example:
<h1 style="color:blue;">Scaler Academy</h1>
<p style="color:black;">Welcome to Scaler Family!</p>
Example:
<!DOCTYPE html>
<html>
<head>
<style>
body {background-color: powderblue;}
h1 {
color: blue;
font-size: 200%;
}
p {
color: red;
font-size: 140%;
border: 2px solid black;
margin: 20px;
}
</style>
</head>
<body>
<h1>Scaler Academy</h1>
<p>Welcome to Scaler Family!</p>
</body>
</html>
Example:
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="styles.css">
</head>
<body>
<h1>Scaler Academy</h1>
<p>Welcome to Scaler Family!</p>
</body>
</html>
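For completeness, the styles.css file referenced above is not shown in the example; hypothetical contents matching the internal-CSS example might look like this:

```css
/* styles.css (illustrative contents, assumed to match the earlier example) */
body { background-color: powderblue; }
h1   { color: blue; }
p    { color: red; }
```

Because the rules live in a separate file, the same stylesheet can be linked from every page of the site, which is the main advantage of external CSS.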
In cases where an image contains text, it is recommended that the image be saved as a GIF instead of a JPG. JPG uses lossy compression, which introduces artifacts around sharp edges and can make the text blurry or unreadable, whereas GIF’s lossless compression keeps the lettering crisp.
Different image compression formats use different compression techniques for different purposes.