Discussion, Chapter 2 Q&A, Chapter 3 Q&A
Discussion:
What’s simple random sampling? Is it possible to sample data instances using a distribution different from the uniform distribution? If so, give an example of a probability distribution of the data instances that is different from uniform (i.e., equal probability).
Chapter 2 Q&A: See the attachments
Chapter 2 video: https://www.screencast.com/t/swJSFMPlDU (Part 1)
https://www.screencast.com/t/TZZrWRgXu6f (Part 2)
Chapter 3 Q&A: See the attachments
Textbook: Read Tan, Steinbach, & Kumar – Chapter 3 – Exploring Data
Intro to Data Mining
Chapter 2 Assignment
1. What’s an attribute? What’s a data instance?
2. What’s noise? How can noise be reduced in a dataset?
3. Define outlier. Describe 2 different approaches to detect outliers in a dataset.
4. Describe 3 different techniques to deal with missing values in a dataset. Explain when each of these techniques would be most appropriate.
5. Given a sample dataset with missing values, apply an appropriate technique to deal with them.
6. Give 2 examples in which aggregation is useful.
7. Given a sample dataset, apply aggregation of data values.
8. What’s sampling?
9. What’s simple random sampling? Is it possible to sample data instances using a distribution different from the uniform distribution? If so, give an example of a probability distribution of the data instances that is different from uniform (i.e., equal probability).
10. What’s stratified sampling?
11. What’s “the curse of dimensionality”?
12. Provide a brief description of what Principal Components Analysis (PCA) does. [Hint: See Appendix A and your lecture notes.] State what’s the input and what the output of PCA is.
13. What’s the difference between dimensionality reduction and feature selection?
14. Describe in detail 2 different techniques for feature selection.
15. Given a sample dataset (represented by a set of attributes, a correlation matrix, a co-variance matrix, …), apply feature selection techniques to select the best attributes to keep (or equivalently, the best attributes to remove).
16. What’s the difference between feature selection and feature extraction?
17. Give two examples of data in which feature extraction would be useful.
18. Given a sample dataset, apply feature extraction.
19. What’s data discretization and when is it needed?
20. What’s the difference between supervised and unsupervised discretization?
21. Given a sample dataset, apply unsupervised (e.g., equal width, equal frequency) discretization, or supervised discretization (e.g., using entropy).
22. Describe 2 approaches to handle nominal attributes with too many values.
23. Given a dataset, apply variable transformation: Either a simple given function, normalization, or standardization.
24. Definition of Correlation and Covariance, and how to use them in data pre-processing (see pp. 76-78).
Intro to Data Mining
Chapter 3 Assignment
[Your Name Here]
1. Obtain one of the data sets available at the UCI Machine Learning Repository and apply as many of the different visualization techniques described in the chapter as possible. The bibliographic notes and book Web site provide pointers to visualization software.
2. Identify at least two advantages and two disadvantages of using color to visually represent information.
3. What are the arrangement issues that arise with respect to three-dimensional plots?
4. Discuss the advantages and disadvantages of using sampling to reduce the number of data objects that need to be displayed. Would simple random sampling (without replacement) be a good approach to sampling? Why or why not?
5. Describe how you would create visualizations to display information that describes the following types of systems.
a) Computer networks. Be sure to include both the static aspects of the network, such as connectivity, and the dynamic aspects, such as traffic.
b) The distribution of specific plant and animal species around the world for a specific moment in time.
c) The use of computer resources, such as processor time, main memory, and disk, for a set of benchmark database programs.
d) The change in occupation of workers in a particular country over the last thirty years. Assume that you have yearly information about each person that also includes gender and level of education.
Be sure to address the following issues:
Representation. How will you map objects, attributes, and relationships to visual elements?
Arrangement. Are there any special considerations that need to be taken into account with respect to how visual elements are displayed? Specific examples might be the choice of viewpoint, the use of transparency, or the separation of certain groups of objects.
Selection. How will you handle a large number of attributes and data objects?
6. Describe one advantage and one disadvantage of a stem and leaf plot with respect to a standard histogram.
7. How might you address the problem that a histogram depends on the number and location of the bins?
8. Describe how a box plot can give information about whether the value of an attribute is symmetrically distributed. What can you say about the symmetry of the distributions of the attributes shown in Figure 3.11?
9. Compare sepal length, sepal width, petal length, and petal width, using Figure 3.12.
10. Comment on the use of a box plot to explore a data set with four attributes: age, weight, height, and income.
11. Give a possible explanation as to why most of the values of petal length and width fall in the buckets along the diagonal in Figure 3.9.
12. Use Figures 3.14 and 3.15 to identify a characteristic shared by the petal width and petal length attributes.
13. Simple line plots, such as that displayed in Figure 2.12 on page 56, which shows two time series, can be used to effectively display high-dimensional data. For example, in Figure 2.12 it is easy to tell that the frequencies of the two time series are different. What characteristic of time series allows the effective visualization of high-dimensional data?
14. Describe the types of situations that produce sparse or dense data cubes. Illustrate with examples other than those used in the book.
15. How might you extend the notion of multidimensional data analysis so that the target variable is a qualitative variable? In other words, what sorts of summary statistics or data visualizations would be of interest?
16. Construct a data cube from Table 3.14. Is this a dense or sparse data cube? If it is sparse, identify the cells that are empty.
17. Discuss the differences between dimensionality reduction based on aggregation and dimensionality reduction based on techniques such as PCA and SVD.
Data Mining: Data
Lecture Notes for Chapter 2
Introduction to Data Mining
by
Tan, Steinbach, Kumar
What is Data?
Collection of data objects and their attributes
An attribute is a property or characteristic of an object
Examples: eye color of a person, temperature, etc.
Attribute is also known as variable, field, characteristic, or feature
A collection of attributes describe an object
Object is also known as record, point, case, sample, entity, or instance
[Figure: a data set shown as a table; rows are Objects, columns are Attributes]
Attribute Values
Attribute values are numbers or symbols assigned to an attribute
Distinction between attributes and attribute values
Same attribute can be mapped to different attribute values
Example: height can be measured in feet or meters
Different attributes can be mapped to the same set of values
Example: Attribute values for ID and age are integers
But properties of attribute values can be different
ID has no limit but age has a maximum and minimum value
Measurement of Length
The way you measure an attribute is somewhat arbitrary and may not match the attribute's properties.
Types of Attributes
There are different types of attributes
Nominal
Examples: ID numbers, eye color, zip codes
Ordinal
Examples: rankings (e.g., taste of potato chips on a scale from 1-10), grades, height in {tall, medium, short}
Interval
Examples: calendar dates, temperatures in Celsius or Fahrenheit.
Ratio
Examples: temperature in Kelvin, length, time, counts
Properties of Attribute Values
The type of an attribute depends on which of the following properties it possesses:
Distinctness: =, ≠
Order: <, >
Addition: +, −
Multiplication: *, /
Nominal attribute: distinctness
Ordinal attribute: distinctness & order
Interval attribute: distinctness, order & addition
Ratio attribute: all 4 properties
Attribute Type | Description | Examples | Operations
Nominal | The values of a nominal attribute are just different names, i.e., nominal attributes provide only enough information to distinguish one object from another. (=, ≠) | zip codes, employee ID numbers, eye color, sex: {male, female} | mode, entropy, contingency correlation, χ² test
Ordinal | The values of an ordinal attribute provide enough information to order objects. (<, >) | hardness of minerals, {good, better, best}, grades, street numbers | median, percentiles, rank correlation, run tests, sign tests
Interval | For interval attributes, the differences between values are meaningful, i.e., a unit of measurement exists. (+, −) | calendar dates, temperature in Celsius or Fahrenheit | mean, standard deviation, Pearson's correlation, t and F tests
Ratio | For ratio variables, both differences and ratios are meaningful. (*, /) | temperature in Kelvin, monetary quantities, counts, age, mass, length, electrical current | geometric mean, harmonic mean, percent variation
Attribute Level | Transformation | Comments
Nominal | Any permutation of values | If all employee ID numbers were reassigned, would it make any difference?
Ordinal | An order-preserving change of values, i.e., new_value = f(old_value), where f is a monotonic function. | An attribute encompassing the notion of good, better, best can be represented equally well by the values {1, 2, 3} or by {0.5, 1, 10}.
Interval | new_value = a * old_value + b, where a and b are constants | Thus, the Fahrenheit and Celsius temperature scales differ in terms of where their zero value is and the size of a unit (degree).
Ratio | new_value = a * old_value | Length can be measured in meters or feet.
Discrete and Continuous Attributes
Discrete Attribute
Has only a finite or countably infinite set of values
Examples: zip codes, counts, or the set of words in a collection of documents
Often represented as integer variables.
Note: binary attributes are a special case of discrete attributes
Continuous Attribute
Has real numbers as attribute values
Examples: temperature, height, or weight.
Practically, real values can only be measured and represented using a finite number of digits.
Continuous attributes are typically represented as floating-point variables.
Types of data sets
Record
Data Matrix
Document Data
Transaction Data
Graph
World Wide Web
Molecular Structures
Ordered
Spatial Data
Temporal Data
Sequential Data
Genetic Sequence Data
Important Characteristics of Structured Data
Dimensionality
Curse of Dimensionality
Sparsity
Only presence counts
Resolution
Patterns depend on the scale
Record Data
Data that consists of a collection of records, each of which consists of a fixed set of attributes
Tid | Refund | Marital Status | Taxable Income | Cheat
1 | Yes | Single | 125K | No
2 | No | Married | 100K | No
3 | No | Single | 70K | No
4 | Yes | Married | 120K | No
5 | No | Divorced | 95K | Yes
6 | No | Married | 60K | No
7 | Yes | Divorced | 220K | No
8 | No | Single | 85K | Yes
9 | No | Married | 75K | No
10 | No | Single | 90K | Yes
Data Matrix
If data objects have the same fixed set of numeric attributes, then the data objects can be thought of as points in a multi-dimensional space, where each dimension represents a distinct attribute
Such a data set can be represented by an m-by-n matrix, where there are m rows, one for each object, and n columns, one for each attribute
Document Data
Each document becomes a 'term' vector,
each term is a component (attribute) of the vector,
the value of each component is the number of times the corresponding term occurs in the document.
 | team | coach | play | ball | score | game | win | lost | timeout | season
Document 1 | 3 | 0 | 5 | 0 | 2 | 6 | 0 | 2 | 0 | 2
Document 2 | 0 | 7 | 0 | 2 | 1 | 0 | 0 | 3 | 0 | 0
Document 3 | 0 | 1 | 0 | 0 | 1 | 2 | 2 | 0 | 3 | 0
Transaction Data
A special type of record data, where
each record (transaction) involves a set of items.
For example, consider a grocery store. The set of products purchased by a customer during one shopping trip constitutes a transaction, while the individual products that were purchased are the items.
TID | Items
1 | Bread, Coke, Milk
2 | Beer, Bread
3 | Beer, Coke, Diaper, Milk
4 | Beer, Bread, Diaper, Milk
5 | Coke, Diaper, Milk
Graph Data
Examples: Generic graph and HTML Links
Chemical Data
Benzene Molecule: C6H6
Ordered Data
Sequences of transactions
[Figure: a sequence of transactions; each element of the sequence is a set of items/events]
Ordered Data
Genomic sequence data
Ordered Data
Spatio-Temporal Data
Average Monthly Temperature of land and ocean
Data Quality
What kinds of data quality problems?
How can we detect problems with the data?
What can we do about these problems?
Examples of data quality problems:
Noise and outliers
missing values
duplicate data
Noise
Noise refers to modification of original values
Examples: distortion of a person's voice when talking on a poor phone and "snow" on a television screen
Two Sine Waves
Two Sine Waves + Noise
Outliers
Outliers are data objects with characteristics that are considerably different than most of the other data objects in the data set
Missing Values
Reasons for missing values
Information is not collected
(e.g., people decline to give their age and weight)
Attributes may not be applicable to all cases
(e.g., annual income is not applicable to children)
Handling missing values
Eliminate Data Objects
Estimate Missing Values
Ignore the Missing Value During Analysis
Replace with all possible values (weighted by their probabilities)
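A minimal pandas sketch of the first three strategies (the DataFrame and its columns are hypothetical):

import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 47, 33],
                   "income": [50.0, 62.0, np.nan, 41.0]})

dropped = df.dropna()                             # eliminate data objects with any missing value
imputed = df.fillna(df.mean(numeric_only=True))   # estimate missing values with the column mean
# many analyses (e.g., pandas aggregations) simply ignore NaN during computation:
mean_age = df["age"].mean()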
Duplicate Data
Data set may include data objects that are duplicates, or almost duplicates of one another
Major issue when merging data from heterogeneous sources
Examples:
Same person with multiple email addresses
Data cleaning
Process of dealing with duplicate data issues
Data Preprocessing
Aggregation
Sampling
Dimensionality Reduction
Feature subset selection
Feature creation
Discretization and Binarization
Attribute Transformation
Aggregation
Combining two or more attributes (or objects) into a single attribute (or object)
Purpose
Data reduction
Reduce the number of attributes or objects
Change of scale
Cities aggregated into regions, states, countries, etc.
More stable data
Aggregated data tends to have less variability
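A short pandas sketch of aggregation as a change of scale (the table and column names are hypothetical):

import pandas as pd

sales = pd.DataFrame({"state": ["MN", "MN", "CA", "CA"],
                      "city": ["Minneapolis", "St. Paul", "Los Angeles", "San Francisco"],
                      "revenue": [10.0, 7.0, 25.0, 18.0]})

# cities aggregated into states: 4 records become 2 (data reduction + change of scale)
by_state = sales.groupby("state")["revenue"].sum()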
Aggregation
[Figures: Variation of Precipitation in Australia: standard deviation of average monthly precipitation vs. standard deviation of average yearly precipitation]
Sampling
Sampling is the main technique employed for data selection.
It is often used for both the preliminary investigation of the data and the final data analysis.
Statisticians sample because obtaining the entire set of data of interest is too expensive or time consuming.
Sampling is used in data mining because processing the entire set of data of interest is too expensive or time consuming.
Sampling
The key principle for effective sampling is the following:
using a sample will work almost as well as using the entire data set, if the sample is representative
A sample is representative if it has approximately the same property (of interest) as the original set of data
Types of Sampling
Simple Random Sampling
There is an equal probability of selecting any particular item
Sampling without replacement
As each item is selected, it is removed from the population
Sampling with replacement
Objects are not removed from the population as they are selected for the sample.
In sampling with replacement, the same object can be picked more than once
Stratified sampling
Split the data into several partitions; then draw random samples from each partition
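A minimal pandas sketch of these sampling types, assuming a DataFrame df whose column "group" defines the strata:

import pandas as pd

df = pd.DataFrame({"group": ["a"] * 90 + ["b"] * 10,
                   "value": range(100)})

no_repl = df.sample(n=20, replace=False, random_state=0)   # simple random sampling without replacement
with_repl = df.sample(n=20, replace=True, random_state=0)  # the same object can be picked more than once
# stratified sampling: draw 10% from each partition
strat = df.groupby("group", group_keys=False).apply(lambda g: g.sample(frac=0.1, random_state=0))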
Sample Size
[Figure: the same data set sampled at 8000, 2000, and 500 points]
Sample Size
What sample size is necessary to get at least one object from each of 10 groups?
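One way to answer this is by simulation. A minimal sketch, assuming the 10 groups are equally likely:

import random

def prob_all_groups(sample_size, n_groups=10, trials=10_000):
    # estimate the probability that a random sample hits every group at least once
    hits = sum(
        len({random.randrange(n_groups) for _ in range(sample_size)}) == n_groups
        for _ in range(trials)
    )
    return hits / trials

for s in (10, 20, 40, 60):
    print(s, prob_all_groups(s))   # the probability climbs toward 1 as the sample grows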
Curse of Dimensionality
When dimensionality increases, data becomes increasingly sparse in the space that it occupies
Definitions of density and distance between points, which are critical for clustering and outlier detection, become less meaningful
Randomly generate 500 points
Compute difference between max and min distance between any pair of points
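A numpy sketch of this experiment (500 random points, relative contrast between the farthest and closest pair):

import numpy as np

rng = np.random.default_rng(0)
for dim in (2, 10, 100, 1000):
    pts = rng.random((500, dim))
    sq = (pts ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * pts @ pts.T   # squared pairwise distances
    d = np.sqrt(np.maximum(d2, 0.0))[np.triu_indices(500, k=1)]
    print(dim, (d.max() - d.min()) / d.min())            # shrinks as dimensionality grows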
Dimensionality Reduction
Purpose:
Avoid curse of dimensionality
Reduce amount of time and memory required by data mining algorithms
Allow data to be more easily visualized
May help to eliminate irrelevant features or reduce noise
Techniques
Principal Component Analysis
Singular Value Decomposition
Others: supervised and non-linear techniques
Dimensionality Reduction: PCA
Goal is to find a projection that captures the largest amount of variation in data
[Figure: data in the x1-x2 plane with eigenvector e along the direction of largest variation]
Dimensionality Reduction: PCA
Find the eigenvectors of the covariance matrix
The eigenvectors define the new space
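A minimal numpy sketch of the procedure (center, eigendecompose the covariance matrix, project; the toy data are hypothetical):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.0], [1.0, 0.5]])  # correlated toy data

Xc = X - X.mean(axis=0)               # center the data
cov = np.cov(Xc, rowvar=False)        # covariance matrix
vals, vecs = np.linalg.eigh(cov)      # eigenvectors define the new space
order = np.argsort(vals)[::-1]        # sort by variance captured
projected = Xc @ vecs[:, order[:1]]   # keep the top principal component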
Dimensionality Reduction: ISOMAP
Construct a neighbourhood graph
For each pair of points in the graph, compute the shortest-path distances (geodesic distances)
By: Tenenbaum, de Silva, Langford (2000)
Feature Subset Selection
Another way to reduce dimensionality of data
Redundant features
duplicate much or all of the information contained in one or more other attributes
Example: purchase price of a product and the amount of sales tax paid
Irrelevant features
contain no information that is useful for the data mining task at hand
Example: students’ ID is often irrelevant to the task of predicting students’ GPA
Feature Subset Selection
Techniques:
Brute-force approach:
Try all possible feature subsets as input to data mining algorithm
Embedded approaches:
Feature selection occurs naturally as part of the data mining algorithm
Filter approaches:
Features are selected before data mining algorithm is run
Wrapper approaches:
Use the data mining algorithm as a black box to find best subset of attributes
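A small numpy sketch contrasting the filter and brute-force approaches (data, threshold, and penalty are hypothetical):

import itertools
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 4))
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=100)

# filter: keep features whose |correlation| with the target exceeds a threshold
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
kept = np.where(corr > 0.3)[0]

# brute force: score every subset with a least-squares fit, keep the best
def score(cols):
    cols = list(cols)
    beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    sse = float(((y - X[:, cols] @ beta) ** 2).sum())
    return sse + 0.1 * len(cols)   # small complexity penalty so extra features must earn their keep

subsets = (c for r in range(1, 5) for c in itertools.combinations(range(4), r))
best = min(subsets, key=score)     # typically recovers features 0 and 2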
Feature Creation
Create new attributes that can capture the important information in a data set much more efficiently than the original attributes
Three general methodologies:
Feature Extraction
domain-specific
Mapping Data to New Space
Feature Construction
combining features
Mapping Data to a New Space
Fourier transform
Wavelet transform
[Figures: two sine waves; two sine waves + noise; the frequency spectrum after the Fourier transform]
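A small numpy sketch of mapping a noisy two-frequency signal into frequency space, where its structure becomes obvious (the signal is hypothetical):

import numpy as np

t = np.linspace(0.0, 1.0, 512, endpoint=False)
signal = np.sin(2 * np.pi * 7 * t) + np.sin(2 * np.pi * 17 * t)           # two sine waves
noisy = signal + np.random.default_rng(0).normal(scale=0.5, size=t.size)  # + noise

spectrum = np.abs(np.fft.rfft(noisy))             # Fourier transform: time -> frequency
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print(sorted(freqs[np.argsort(spectrum)[-2:]]))   # the two dominant frequencies: [7.0, 17.0]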
Discretization Using Class Labels
Entropy-based approach
[Figures: discretization into 3 categories for both x and y; into 5 categories for both x and y]
Discretization Without Using Class Labels
[Figures: original data; equal interval width; equal frequency; K-means]
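A short pandas sketch of the unsupervised schemes (the data are hypothetical):

import numpy as np
import pandas as pd

x = pd.Series(np.random.default_rng(0).normal(size=100))

equal_width = pd.cut(x, bins=4)   # equal interval width: bins of equal length
equal_freq = pd.qcut(x, q=4)      # equal frequency: ~25 values per bin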
Attribute Transformation
A function that maps the entire set of values of a given attribute to a new set of replacement values such that each old value can be identified with one of the new values
Simple functions: x^k, log(x), e^x, |x|
Standardization and Normalization
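A minimal numpy sketch of these transformations (the values are hypothetical):

import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

log_x = np.log(x)                                  # a simple function, log(x)
standardized = (x - x.mean()) / x.std()            # standardization: zero mean, unit variance
normalized = (x - x.min()) / (x.max() - x.min())   # normalization to the range [0, 1]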
Similarity and Dissimilarity
Similarity
Numerical measure of how alike two data objects are.
Is higher when objects are more alike.
Often falls in the range [0,1]
Dissimilarity
Numerical measure of how different two data objects are
Lower when objects are more alike
Minimum dissimilarity is often 0
Upper limit varies
Proximity refers to a similarity or dissimilarity
Similarity/Dissimilarity for Simple Attributes
p and q are the attribute values for two data objects.
Euclidean Distance
dist(p, q) = sqrt( (p_1 − q_1)² + (p_2 − q_2)² + … + (p_n − q_n)² )
where n is the number of dimensions (attributes) and p_k and q_k are, respectively, the kth attributes (components) of data objects p and q.
Standardization is necessary, if scales differ.
Euclidean Distance
Distance Matrix
point | x | y
p1 | 0 | 2
p2 | 2 | 0
p3 | 3 | 1
p4 | 5 | 1

 | p1 | p2 | p3 | p4
p1 | 0 | 2.828 | 3.162 | 5.099
p2 | 2.828 | 0 | 1.414 | 3.162
p3 | 3.162 | 1.414 | 0 | 2
p4 | 5.099 | 3.162 | 2 | 0
Minkowski Distance
Minkowski Distance is a generalization of Euclidean Distance:
dist(p, q) = ( |p_1 − q_1|^r + |p_2 − q_2|^r + … + |p_n − q_n|^r )^(1/r)
where r is a parameter, n is the number of dimensions (attributes), and p_k and q_k are, respectively, the kth attributes (components) of data objects p and q.
Minkowski Distance: Examples
r = 1. City block (Manhattan, taxicab, L1 norm) distance.
A common example of this is the Hamming distance, which is just the number of bits that are different between two binary vectors
r = 2. Euclidean distance
r → ∞. Supremum (L_max norm, L_∞ norm) distance.
This is the maximum difference between any component of the vectors
Do not confuse r with n, i.e., all these distances are defined for all numbers of dimensions.
Minkowski Distance
Distance Matrix
point | x | y
p1 | 0 | 2
p2 | 2 | 0
p3 | 3 | 1
p4 | 5 | 1

L1 | p1 | p2 | p3 | p4
p1 | 0 | 4 | 4 | 6
p2 | 4 | 0 | 2 | 4
p3 | 4 | 2 | 0 | 2
p4 | 6 | 4 | 2 | 0

L2 | p1 | p2 | p3 | p4
p1 | 0 | 2.828 | 3.162 | 5.099
p2 | 2.828 | 0 | 1.414 | 3.162
p3 | 3.162 | 1.414 | 0 | 2
p4 | 5.099 | 3.162 | 2 | 0

L∞ | p1 | p2 | p3 | p4
p1 | 0 | 2 | 3 | 5
p2 | 2 | 0 | 1 | 3
p3 | 3 | 1 | 0 | 2
p4 | 5 | 3 | 2 | 0
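A numpy sketch that reproduces the three matrices above from the four points:

import numpy as np

pts = np.array([[0, 2], [2, 0], [3, 1], [5, 1]], dtype=float)  # p1..p4

def minkowski_matrix(p, r):
    diff = np.abs(p[:, None, :] - p[None, :, :])
    if np.isinf(r):
        return diff.max(axis=-1)                 # supremum (L_inf) distance
    return (diff ** r).sum(axis=-1) ** (1.0 / r)

print(minkowski_matrix(pts, 1))                  # L1 (city block)
print(np.round(minkowski_matrix(pts, 2), 3))     # L2 (Euclidean)
print(minkowski_matrix(pts, np.inf))             # L_inf (supremum)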
Mahalanobis Distance
For the red points in the figure, the Euclidean distance is 14.7 and the Mahalanobis distance is 6.
mahalanobis(p, q) = (p − q) Σ⁻¹ (p − q)ᵀ, where Σ is the covariance matrix of the input data X.
Mahalanobis Distance
Covariance Matrix:
Σ = [ 0.3  0.2
      0.2  0.3 ]
A: (0.5, 0.5)
B: (0, 1)
C: (1.5, 1.5)
Mahal(A,B) = 5
Mahal(A,C) = 4
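A numpy check of this example, using the squared form (p − q) Σ⁻¹ (p − q)ᵀ shown above:

import numpy as np

cov = np.array([[0.3, 0.2],
                [0.2, 0.3]])
inv = np.linalg.inv(cov)

def mahal(p, q):
    d = np.asarray(p) - np.asarray(q)
    return float(d @ inv @ d)

A, B, C = (0.5, 0.5), (0.0, 1.0), (1.5, 1.5)
print(mahal(A, B))   # 5.0
print(mahal(A, C))   # 4.0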
Common Properties of a Distance
Distances, such as the Euclidean distance, have some well known properties.
d(p, q) ≥ 0 for all p and q, and d(p, q) = 0 only if p = q. (Positive definiteness)
d(p, q) = d(q, p) for all p and q. (Symmetry)
d(p, r) ≤ d(p, q) + d(q, r) for all points p, q, and r. (Triangle Inequality)
where d(p, q) is the distance (dissimilarity) between points (data objects), p and q.
A distance that satisfies these properties is a metric
Common Properties of a Similarity
Similarities also have some well-known properties.
s(p, q) = 1 (or maximum similarity) only if p = q.
s(p, q) = s(q, p) for all p and q. (Symmetry)
where s(p, q) is the similarity between points (data objects), p and q.
Similarity Between Binary Vectors
Common situation is that objects, p and q, have only binary attributes
Compute similarities using the following quantities
M01 = the number of attributes where p was 0 and q was 1
M10 = the number of attributes where p was 1 and q was 0
M00 = the number of attributes where p was 0 and q was 0
M11 = the number of attributes where p was 1 and q was 1
Simple Matching and Jaccard Coefficients
SMC = number of matches / number of attributes
= (M11 + M00) / (M01 + M10 + M11 + M00)
J = number of 11 matches / number of not-both-zero attribute values
= (M11) / (M01 + M10 + M11)
SMC versus Jaccard: Example
p = 1 0 0 0 0 0 0 0 0 0
q = 0 0 0 0 0 0 1 0 0 1
M01 = 2 (the number of attributes where p was 0 and q was 1)
M10 = 1 (the number of attributes where p was 1 and q was 0)
M00 = 7 (the number of attributes where p was 0 and q was 0)
M11 = 0 (the number of attributes where p was 1 and q was 1)
SMC = (M11 + M00)/(M01 + M10 + M11 + M00) = (0+7) / (2+1+0+7) = 0.7
J = (M11) / (M01 + M10 + M11) = 0 / (2 + 1 + 0) = 0
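A numpy sketch reproducing these numbers:

import numpy as np

p = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
q = np.array([0, 0, 0, 0, 0, 0, 1, 0, 0, 1])

m01 = int(((p == 0) & (q == 1)).sum())          # 2
m10 = int(((p == 1) & (q == 0)).sum())          # 1
m00 = int(((p == 0) & (q == 0)).sum())          # 7
m11 = int(((p == 1) & (q == 1)).sum())          # 0

smc = (m11 + m00) / (m01 + m10 + m11 + m00)     # 0.7
jaccard = m11 / (m01 + m10 + m11)               # 0.0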
Cosine Similarity
If d1 and d2 are two document vectors, then
cos(d1, d2) = (d1 • d2) / (||d1|| ||d2||),
where • indicates the vector dot product and ||d|| is the length of vector d.
Example:
d1 = 3 2 0 5 0 0 0 2 0 0
d2 = 1 0 0 0 0 0 0 1 0 2
d1 • d2 = 3*1 + 2*0 + 0*0 + 5*0 + 0*0 + 0*0 + 0*0 + 2*1 + 0*0 + 0*2 = 5
||d1|| = (3*3 + 2*2 + 0*0 + 5*5 + 0*0 + 0*0 + 0*0 + 2*2 + 0*0 + 0*0)^0.5 = (42)^0.5 ≈ 6.481
||d2|| = (1*1 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 1*1 + 0*0 + 2*2)^0.5 = (6)^0.5 ≈ 2.449
cos(d1, d2) = 5 / (6.481 * 2.449) ≈ 0.3150
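The same computation in numpy:

import numpy as np

d1 = np.array([3, 2, 0, 5, 0, 0, 0, 2, 0, 0], dtype=float)
d2 = np.array([1, 0, 0, 0, 0, 0, 0, 1, 0, 2], dtype=float)

cos = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
print(round(cos, 4))   # 0.315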
Extended Jaccard Coefficient (Tanimoto)
Variation of Jaccard for continuous or count attributes:
EJ(p, q) = (p • q) / (||p||² + ||q||² − p • q)
Reduces to Jaccard for binary attributes
Correlation
Correlation measures the linear relationship between objects
To compute correlation, we standardize the data objects p and q and then take their dot product:
correlation(p, q) = covariance(p, q) / (std(p) * std(q))
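A minimal numpy sketch (the two objects are hypothetical; here q is an exact linear function of p, so the correlation is 1):

import numpy as np

p = np.array([3.0, 6.0, 0.0, 3.0, 6.0])
q = np.array([1.0, 2.0, 0.0, 1.0, 2.0])   # q = p / 3

ps = (p - p.mean()) / p.std(ddof=1)       # standardize each object
qs = (q - q.mean()) / q.std(ddof=1)
corr = ps @ qs / (len(p) - 1)             # dot product of standardized objects
print(corr)                               # 1.0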