Wednesday, 20 December 2023

Artificial Intelligence BCA Notes

Artificial Intelligence

Defining AI :

Artificial Intelligence (AI) involves creating algorithms that enable computers to perform tasks that typically require human intelligence. These tasks include:

Learning: Acquiring knowledge and skills through experience.

Reasoning: Solving problems through logical deduction.

Problem-Solving: Finding optimal solutions to complex problems.

Perception: Interpreting and understanding sensory input like images or speech.

Natural Language Understanding: Comprehending and generating human language.


Types of AI on the basis of capability:

Narrow AI (Weak AI):

  • Capability: Limited to specific tasks.
  • Functionality: Performs well-defined functions.
  • Example: Virtual assistants, image recognition.

General AI (Strong AI):

  • Capability: Possesses human-like cognitive abilities.
  • Functionality: Understands, learns, and applies knowledge broadly.
  • Example: Theoretical concept, no practical example.

Super AI:

  • Capability: Possesses intelligence exceeding the most brilliant human minds.
  • Functionality: Demonstrates general intelligence across all domains.
  • Characteristics: Superior learning, problem-solving, adaptability, creativity, and self-improvement.


Types of AI on the basis of functionality:

  • Reactive Machines: Respond only to the current input using predefined rules, with no memory of past events.
  • Limited Memory AI: Learns from historical data and makes decisions based on current and past information.
  • Theory of Mind AI: Understands human emotions, beliefs, and intentions for social interaction.
  • Self-aware AI: Has consciousness and awareness of its own existence and emotions.


Comparison - AI, ML, and Deep Learning:

AI (Artificial Intelligence):

  • The study/process that enables machines to mimic human behaviour through particular algorithms.
  • The broadest family, with ML and DL as its components.
  • A computer algorithm that exhibits intelligence through decision making.
  • Systems can be rule-based, knowledge-based, or data-driven.
  • The aim is broadly to increase the chances of success, not accuracy alone.
  • The efficiency of AI is essentially the efficiency provided by its ML and DL components.

ML (Machine Learning):

  • The study that uses statistical methods to enable machines to improve with experience.
  • A subset of AI.
  • An AI technique that allows a system to learn from data.
  • The aim is to increase accuracy, without caring much about the success ratio.
  • Less efficient than DL, as it cannot easily handle very high-dimensional or very large datasets.
  • In reinforcement learning, the algorithm learns by trial and error, receiving feedback in the form of rewards or punishments.

DL (Deep Learning):

  • The study that uses neural networks (similar to the neurons in the human brain) to imitate the functionality of the human brain.
  • A subset of ML.
  • An ML technique that uses deep (more than one layer) neural networks to analyse data and produce output accordingly.
  • Attains the highest accuracy when trained with large amounts of data.
  • More powerful than ML, as it can easily work with larger datasets.
  • Networks consist of multiple layers of interconnected neurons that process data hierarchically, learning increasingly complex representations of the data.

 


Artificial Intelligence and its applications:

AI finds applications in various domains:

  • Healthcare: Diagnosis, personalized medicine.
  • Finance: Fraud detection, algorithmic trading.
  • Robotics: Automation, autonomous systems.
  • Natural Language Processing: Chatbots, language translation.
  • Gaming: Intelligent opponents, procedural content generation.
  • Autonomous Vehicles: Self-driving cars and drones.

AI Techniques:

Various techniques are employed in AI development:

  • Rule-Based Systems: Decision-making based on predefined rules.
  • Expert Systems: Mimicking human expertise in a specific domain.
  • Machine Learning: Algorithms learning patterns from data.
  • Neural Networks: Mimicking the human brain's structure for learning and decision-making.
  • Natural Language Processing (NLP): Understanding and generating human language.


Level of Models: AI models vary in complexity:

  • Simple Rule-Based Models: Basic decision-making using predefined rules.
  • Machine Learning Models: Algorithms that learn patterns from data.
  • Deep Learning Models: Neural networks with multiple layers for complex tasks like image and speech recognition.


Criteria of Success: AI success is measured by:

  • Accuracy: How well the system performs tasks.
  • Efficiency: How quickly tasks are executed.
  • Adaptability: The ability to learn and adapt to new information or environments.


Intelligent Agents: Intelligent Agents are entities that:

  • Perceive: Collect data from their environment.
  • Reason: Make decisions based on collected data.
  • Act: Take actions to achieve goals.


Nature of Agents:

  • Simple Reactive Agents: Act based on current perceptions.
  • Agents with Memory: Maintain an internal state for decision-making.
  • Learning Agents: Improve performance over time through learning.


Learning Agents:

  • Learning Agents adapt and improve based on experience, data, or feedback.
  • Learning methods include supervised learning, unsupervised learning, and reinforcement learning.


Advantages and Limitations of AI:

Advantages:

  • Automation of repetitive tasks.
  • Increased efficiency.
  • 24/7 operation.

Limitations:

  • Lack of common sense.
  • Ethical concerns (bias in algorithms).
  • Potential job displacement.


Impact and Examples of AI:

Impact:

  • Revolutionizing industries (healthcare, finance, manufacturing).
  • Enhancing efficiency and decision-making.

Examples:

  • Virtual Assistants (Siri, Alexa).
  • Autonomous Vehicles.
  • Facial Recognition Systems.


Application Domains of AI:

Healthcare:

  • Diagnosis and treatment recommendation systems.
  • Drug discovery and development.

Finance:

  • Fraud detection and prevention.
  • Algorithmic trading.

Robotics:

  • Industrial automation.
  • Autonomous robots for various tasks.

Natural Language Processing:

  • Chatbots for customer support.
  • Language translation services.


State Space Search:
State Space Search is a fundamental concept in AI problem-solving. It involves exploring the possible states of a problem to find a solution. A state represents a configuration, and the search algorithms traverse the state space to reach a goal state.
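As a sketch of this idea, the classic two-jug puzzle can be solved by breadth-first search over its state space; the jug capacities and goal amount below are illustrative choices, not fixed by the notes:

```python
from collections import deque

def water_jug_bfs(cap_a=4, cap_b=3, goal=2):
    """Breadth-first search over the state space of the two-jug puzzle.
    A state is (a, b): litres currently held in each jug."""
    start = (0, 0)
    parent = {start: None}   # also serves as the visited set
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == goal or b == goal:
            # Goal state reached: reconstruct the path back to the start.
            path, s = [], (a, b)
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        # Successor states: fill a jug, empty a jug, or pour one into the other.
        pour_ab = min(a, cap_b - b)
        pour_ba = min(b, cap_a - a)
        for nxt in [(cap_a, b), (a, cap_b), (0, b), (a, 0),
                    (a - pour_ab, b + pour_ab), (a + pour_ba, b - pour_ba)]:
            if nxt not in parent:
                parent[nxt] = (a, b)
                queue.append(nxt)
    return None

print(water_jug_bfs())  # sequence of states from (0, 0) to a jug holding 2
```

Each tuple is one configuration of the problem, and the search traverses configurations until a goal state appears, exactly as described above.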

Control Strategies:
  • Generate and Test: Systematically generates potential solutions and tests each one.
  • Hill Climbing: Iteratively moves towards a better solution in the neighborhood.
  • Best First Search: Expands nodes with the lowest heuristic cost.
  • A* Search: Evaluates nodes based on the sum of the path cost and the heuristic function.

Heuristic Search:
Heuristic Search involves using heuristics (rules of thumb) to guide the search algorithms. It helps in selecting the most promising paths in the state space.

Problem Characteristics:
Understanding Problem Characteristics is crucial for selecting appropriate problem-solving techniques. Characteristics include the size of the state space, the complexity of transitions, and the nature of the goal.

Production System Characteristics:
Production Systems are rule-based systems with a set of production rules. Characteristics include the use of condition-action pairs, working memory, and the cycle of matching and executing rules.

Generate and Test:
Generate and Test is a problem-solving approach that systematically generates potential solutions and tests each one until a satisfactory solution is found.
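A minimal sketch of this loop in Python (the dice example is purely illustrative):

```python
from itertools import product

def generate_and_test(generator, test):
    """Return the first generated candidate that passes the test."""
    for candidate in generator:
        if test(candidate):
            return candidate
    return None   # generator exhausted without a satisfactory solution

# Find two dice values that sum to 7, with the first strictly larger:
solution = generate_and_test(
    product(range(1, 7), repeat=2),
    lambda pair: pair[0] + pair[1] == 7 and pair[0] > pair[1],
)
print(solution)  # (4, 3), the first match in generation order
```

The approach is complete but can be slow, since it may enumerate many candidates before the test succeeds.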

Hill Climbing:
Hill Climbing is a local search algorithm that iteratively moves towards the direction of increasing elevation in the solution space, aiming to reach the peak (optimal solution).
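A small sketch of hill climbing on a one-dimensional toy objective (the function and step size are illustrative assumptions):

```python
def hill_climb(f, start, neighbours, max_steps=1000):
    """Greedy local search: move to the best neighbour while f improves."""
    current = start
    for _ in range(max_steps):
        best = max(neighbours(current), key=f)
        if f(best) <= f(current):   # no uphill move left: a (local) maximum
            return current
        current = best
    return current

# Maximise f(x) = -(x - 3)**2 over the integers, stepping by +/-1.
peak = hill_climb(lambda x: -(x - 3) ** 2, start=-10,
                  neighbours=lambda x: [x - 1, x + 1])
print(peak)  # 3
```

Note that on a multi-peaked objective the same loop can stop at a local maximum, which is the algorithm's well-known limitation.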

Best First Search:
Best First Search is a search algorithm that selects nodes for expansion based on a heuristic function, prioritizing nodes with the lowest estimated cost.

A* Search:
A* Search is an informed search algorithm that uses both the cost to reach a node and a heuristic function to estimate the cost from the current node to the goal. It ensures optimality and efficiency.
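A compact sketch of A* on a 4-connected grid, using the Manhattan distance as an admissible heuristic (the grid and blocked cell are illustrative):

```python
import heapq

def a_star(start, goal, grid):
    """A* over a set of free grid cells; f(n) = g(n) + h(n)."""
    def h(p):  # Manhattan distance: admissible for unit-cost grid moves
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]
    g_best = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (node[0] + dx, node[1] + dy)
            if nxt in grid and g + 1 < g_best.get(nxt, float("inf")):
                g_best[nxt] = g + 1
                heapq.heappush(open_heap,
                               (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# 3x3 open grid with the centre cell blocked.
cells = {(x, y) for x in range(3) for y in range(3)} - {(1, 1)}
path = a_star((0, 0), (2, 2), cells)
print(len(path) - 1)  # 4 moves: the optimal detour around the obstacle
```

Because the heuristic never overestimates the remaining cost, the first time the goal is popped from the queue the path is optimal.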

Constraint Satisfaction Problem:
Constraint Satisfaction Problem (CSP) involves finding a solution that satisfies a set of constraints. It is widely used in scheduling, planning, and optimization problems.
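A minimal backtracking-search sketch for a CSP, here map colouring of three mutually adjacent regions (the regions, colours, and helper names are illustrative):

```python
def solve_csp(variables, domains, constraints, assignment=None):
    """Backtracking search: assign each variable a value consistent with
    every constraint against the values assigned so far."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(constraints(var, value, other, assignment[other])
               for other in assignment):
            result = solve_csp(variables, domains, constraints,
                               {**assignment, var: value})
            if result:
                return result
    return None   # dead end: backtrack

# Colour a triangle of regions so that neighbouring regions differ.
neighbours = {("A", "B"), ("B", "C"), ("A", "C")}
def different_if_adjacent(v1, c1, v2, c2):
    adjacent = (v1, v2) in neighbours or (v2, v1) in neighbours
    return not adjacent or c1 != c2

colouring = solve_csp(["A", "B", "C"],
                      {v: ["red", "green", "blue"] for v in "ABC"},
                      different_if_adjacent)
print(colouring)  # e.g. {'A': 'red', 'B': 'green', 'C': 'blue'}
```

Real CSP solvers add heuristics such as minimum-remaining-values ordering and constraint propagation on top of this basic loop.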

Means-Ends Analysis:
Means-Ends Analysis is a problem-solving technique that repeatedly compares the current state with the goal state and applies an operator that reduces the difference between them; when an operator cannot be applied directly, the solver sets up a subgoal to make it applicable, and so works towards the ultimate solution.

Min-Max Search:
Min-Max Search is a decision-making algorithm commonly used in game theory. It minimizes the possible loss for a worst-case scenario while maximizing potential gain.
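A small sketch of minimax on a hand-built two-ply game tree (the tree and its payoffs are illustrative; leaves hold the payoff for the maximizing player):

```python
def minimax(node, maximizing, tree):
    """Return the minimax value of `node` in a game tree given as a dict
    mapping internal nodes to child lists; leaves map directly to scores."""
    children = tree[node]
    if isinstance(children, int):        # leaf: payoff for the maximizer
        return children
    values = [minimax(c, not maximizing, tree) for c in children]
    return max(values) if maximizing else min(values)

# A tiny two-ply game: MAX moves first, then MIN picks a leaf.
tree = {
    "root": ["L", "R"],
    "L": ["L1", "L2"], "R": ["R1", "R2"],
    "L1": 3, "L2": 5,   # MIN at L chooses 3
    "R1": 2, "R2": 9,   # MIN at R chooses 2
}
print(minimax("root", True, tree))  # 3: MAX prefers L, where MIN leaves 3
```

MAX assumes the worst case (an optimally playing MIN), so it chooses the branch whose guaranteed value is highest.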

Alpha-Beta Pruning:
Alpha-Beta Pruning is an optimization technique for the Min-Max algorithm, reducing the number of nodes evaluated by eliminating branches that cannot influence the final decision.
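The same game tree as in a plain minimax search can illustrate the saving; in this sketch a `visited` list (an illustrative addition) records which leaves are actually evaluated:

```python
def alphabeta(node, maximizing, tree,
              alpha=float("-inf"), beta=float("inf"), visited=None):
    """Minimax with alpha-beta pruning; `visited` logs evaluated leaves."""
    children = tree[node]
    if isinstance(children, int):        # leaf node
        if visited is not None:
            visited.append(node)
        return children
    value = float("-inf") if maximizing else float("inf")
    for c in children:
        v = alphabeta(c, not maximizing, tree, alpha, beta, visited)
        if maximizing:
            value = max(value, v)
            alpha = max(alpha, value)
        else:
            value = min(value, v)
            beta = min(beta, value)
        if beta <= alpha:    # remaining siblings cannot change the result
            break
    return value

tree = {"root": ["L", "R"], "L": ["L1", "L2"], "R": ["R1", "R2"],
        "L1": 3, "L2": 5, "R1": 2, "R2": 9}
seen = []
print(alphabeta("root", True, tree, visited=seen))  # 3, same as minimax
print(seen)  # ['L1', 'L2', 'R1'] -- R2 is pruned
```

After MIN at R finds the leaf worth 2, the bound beta = 2 falls below alpha = 3, so R2 is never examined; the returned value is identical to plain minimax.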

Propositional Logic:
Propositional Logic deals with propositions or statements that are either true or false. It uses logical operators (AND, OR, NOT) to express relationships between propositions.

Predicate Logic:
Predicate Logic extends propositional logic by introducing variables, predicates, and quantifiers (exists, for all), allowing more complex and expressive statements.

Resolution:
Resolution is a proof technique in logic and theorem proving. It involves deriving a conclusion from premises by resolving (combining) clauses.

Resolution in Propositional Logic and Predicate Logic:
Propositional Resolution: Involves resolving propositional clauses to derive new clauses.
Predicate Resolution: Extends resolution to handle predicate logic, including variable substitution.
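A minimal sketch of propositional resolution refutation in Python, representing clauses as frozensets of literal strings with `~` marking negation (the knowledge base below is an illustrative example):

```python
def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literals)."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

def entails(kb, query):
    """Resolution refutation: KB entails query iff KB plus the negated
    query derives the empty clause."""
    neg = frozenset({query[1:] if query.startswith("~") else "~" + query})
    clauses = set(kb) | {neg}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolve(a, b):
                        if not r:
                            return True   # empty clause: contradiction found
                        new.add(r)
        if new <= clauses:
            return False                  # nothing new derivable: not entailed
        clauses |= new

# KB in clause form: (P -> Q) becomes {~P, Q}, plus the fact P; query Q.
kb = [frozenset({"~P", "Q"}), frozenset({"P"})]
print(entails(kb, "Q"))  # True
```

Predicate resolution works the same way but must first unify literals (substituting for variables) before clauses can be resolved.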

Clause Form:
Clause Form represents a logical formula as a set of clauses. It is a standard format for applying resolution and simplifying logical expressions.

Unification Algorithm:
Unification Algorithm finds a substitution that makes two logical expressions identical. It is crucial for resolving predicate logic statements.
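A simplified Robinson-style unifier in Python; for brevity this sketch omits the occurs-check, and its term encoding (lowercase strings as variables, uppercase as constants, tuples as compound terms) is an illustrative convention:

```python
def unify(x, y, subst=None):
    """Return a substitution dict making x and y identical, or None."""
    if subst is None:
        subst = {}
    if x == y:
        return subst
    if isinstance(x, str) and x[0].islower():      # x is a variable
        return unify_var(x, y, subst)
    if isinstance(y, str) and y[0].islower():      # y is a variable
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):                   # unify argument-wise
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None                                    # clash: cannot unify

def unify_var(var, term, subst):
    if var in subst:
        return unify(subst[var], term, subst)
    if term in subst:
        return unify(var, subst[term], subst)
    return {**subst, var: term}

# Unify Knows(John, x) with Knows(y, Mother(y)):
print(unify(("Knows", "John", "x"), ("Knows", "y", ("Mother", "y"))))
# {'y': 'John', 'x': ('Mother', 'y')}
```

Applying the substitution to either literal yields Knows(John, Mother(John)), which is exactly what predicate resolution needs before two clauses can be resolved.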


Image Classification:

Image Classification is a computer vision task where the goal is to assign a label or category to an input image. This is achieved by training a model, often a Convolutional Neural Network (CNN), to learn hierarchical features in the images. Image classification has a wide range of applications, including facial recognition, object detection, and medical image analysis.

Text Classification:

Text Classification involves assigning predefined categories or labels to text data. This is a common Natural Language Processing (NLP) task with applications such as spam detection, sentiment analysis, and topic categorization. Techniques like recurrent neural networks (RNNs) or transformers are commonly used for text classification.

Image Classification and Hyper-parameter Tuning:
Hyper-parameter Tuning: Involves optimizing the hyper-parameters of a machine learning model to improve its performance.
Grid Search and Random Search: Techniques for systematically exploring different combinations of hyper-parameters.
Cross-validation: Assessing model performance by dividing the dataset into subsets.
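A bare-bones sketch of grid search in pure Python; `fake_score` below is a stand-in for "train the model and return its validation score", and the hyper-parameter names and values are illustrative:

```python
from itertools import product

def grid_search(train_and_score, param_grid):
    """Try every combination in `param_grid`; keep the best-scoring one
    (higher is better)."""
    best_params, best_score = None, float("-inf")
    keys = list(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Stand-in scorer: peaks at lr=0.01 and batch_size=32.
def fake_score(p):
    return -abs(p["lr"] - 0.01) - 0.1 * abs(p["batch_size"] - 32) / 32

best, score = grid_search(fake_score,
                          {"lr": [0.001, 0.01, 0.1],
                           "batch_size": [16, 32, 64]})
print(best)  # {'lr': 0.01, 'batch_size': 32}
```

Random search samples combinations instead of enumerating them all, which often finds good settings faster when the grid is large; cross-validation would replace `fake_score` with an average score over the dataset folds.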

Emerging Neural Network Architectures: ResNet, AlexNet - Applications:
  • ResNet (Residual Networks): Introduced residual connections to address the vanishing gradient problem in deep neural networks. Applications include image recognition, object detection, and image generation. ResNet architectures allow training very deep networks effectively.

  • AlexNet: A pioneering deep convolutional neural network architecture that won the ImageNet Large Scale Visual Recognition Challenge in 2012. Applications include image classification and feature extraction. It played a key role in popularizing deep learning in computer vision.

Applications of ResNet and AlexNet:

ResNet Applications:
  • Image Recognition: State-of-the-art performance in image classification tasks.
  • Object Detection: Used as a backbone architecture for object detection models.
  • Image Generation: Employed in generative models for creating realistic images.
AlexNet Applications:
  • Image Classification: Recognizing objects in images with high accuracy.
  • Feature Extraction: Extracting meaningful features from images for downstream tasks.
  • Transfer Learning: Pre-trained AlexNet models used as a starting point for various computer vision tasks.


Recurrent Neural Networks (RNN):

Recurrent Neural Networks (RNNs) are a class of neural networks designed for processing sequential data. They have loops to allow information persistence, making them suitable for tasks like natural language processing, speech recognition, and time series analysis.

Building Recurrent NN:
  • Architecture: RNNs have hidden states that capture information from previous time steps.
  • Training: Backpropagation Through Time (BPTT) is used for training, allowing the network to learn temporal dependencies.
  • Challenges: Vanishing and exploding gradients can hinder training.
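The recurrence behind these points can be sketched in a deliberately tiny scalar form, h_t = tanh(w_x * x_t + w_h * h_{t-1} + b); the weight values below are illustrative, not trained:

```python
import math

def rnn_forward(xs, w_x=0.5, w_h=0.8, b=0.0):
    """Scalar RNN forward pass: the hidden state h_t carries information
    from earlier time steps through the recurrent weight w_h."""
    h = 0.0                       # initial hidden state
    states = []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h + b)
        states.append(h)
    return states

states = rnn_forward([1.0, 0.0, 0.0])
# The first input keeps echoing through later hidden states, decaying
# with each step because |w_h| < 1:
print([round(h, 3) for h in states])
```

With |w_h| < 1 the signal fades over time (the vanishing-gradient problem in miniature); with |w_h| > 1 it can blow up instead, which is exactly what gated architectures such as LSTM were designed to control.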

Long Short-Term Memory (LSTM):

LSTM is a type of RNN designed to address the vanishing gradient problem. It introduces memory cells and gates to selectively retain or discard information. LSTMs are effective for learning long-term dependencies.

Time Series Forecasting:

Time Series Forecasting involves predicting future values based on past observations. RNNs, especially LSTMs, are well-suited for this task as they can capture patterns and dependencies in sequential data.

Bidirectional RNNs:

Bidirectional RNNs process the input sequence in both forward and backward directions. This allows the network to capture information from both past and future time steps, enhancing its ability to understand context.

Encoder-Decoder Sequence to Sequence Architectures:
  • Encoder-Decoder: Consists of two parts – an encoder to process the input sequence and a decoder to generate the output sequence.
  • Applications: Widely used in machine translation, summarization, and sequence generation tasks.

BPTT for Training RNN:

Backpropagation Through Time (BPTT) is an extension of backpropagation used to train recurrent neural networks. It unfolds the network through time and applies backpropagation to update weights and biases.

Computer Vision - Speech Recognition - Natural Language Processing:
  • Computer Vision: RNNs can be applied to sequential image data, e.g., video analysis.
  • Speech Recognition: Processing sequential audio data for speech-to-text applications.
  • Natural Language Processing (NLP): Analyzing and generating human language, including tasks like sentiment analysis and text generation.

Case Studies in Classification, Regression, and Deep Networks:
  • Classification: RNNs are used for sequential data classification, e.g., sentiment analysis on text sequences.
  • Regression: Predicting a continuous value over time, such as stock prices.
  • Deep Networks: Stacking multiple layers of RNNs or combining them with other types of neural networks for more complex tasks.




NOTE: Dear readers, please revisit your mid-semester notes (Units 1 and 2). Some key topics from Units 1 and 2 of the AI syllabus have not been fully covered here, so I encourage you to review your older mid-semester notes. This post specifically addresses the topics outlined in Units 3 and 4.
