Thursday, November 3, 2011

11/01/2011

Feature Selection:
1. Removing irrelevant features
2. Selecting a subset of the features from the original feature space
3. Done with respect to a class (see the sketch below)
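A minimal sketch of this idea in Python (my own illustration, not from the class notes): each feature is scored by how well it separates two classes using a simple difference-of-class-means criterion, and only the top-k features from the original space are kept. The data and the scoring rule are illustrative assumptions.

    import numpy as np

    def select_top_k(X, y, k):
        """Return indices of the k features that best separate class 0 and class 1."""
        X0, X1 = X[y == 0], X[y == 1]
        # Filter-style score per feature: |difference of class means| / pooled spread.
        scores = np.abs(X0.mean(axis=0) - X1.mean(axis=0)) / (X0.std(axis=0) + X1.std(axis=0) + 1e-9)
        return np.argsort(scores)[::-1][:k]

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=200)
    X = rng.normal(size=(200, 5))
    X[:, 2] += 3.0 * y        # feature 2 is strongly relevant to the class
    X[:, 4] += 1.5 * y        # feature 4 is weakly relevant

    print(select_top_k(X, y, k=2))   # expected to pick features 2 and 4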


- Archana

Tuesday, November 1, 2011

11/1/2011

Collaborative filtering faces the cold-start problem, i.e., it struggles when there is not yet a large user population to draw on.

--Shaunak Shah

11/1/2011

To improve probability estimates and avoid numerical underflow, add the logarithms of the probabilities instead of multiplying the probabilities directly.
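A tiny sketch (with made-up numbers) of why this matters: the direct product of many small probabilities underflows to zero in floating point, while the sum of their logs stays well-behaved and can still be compared across classes.

    import math

    probs = [1e-5] * 100                 # e.g., per-attribute likelihoods in Naive Bayes

    direct_product = math.prod(probs)    # underflows to 0.0
    log_score = sum(math.log(p) for p in probs)

    print(direct_product)                # 0.0
    print(log_score)                     # about -1151.3; still usable for comparing classes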

- Elias

10/27/2011

There are times we need smoothing, such as when estimating the probability of heads from a small number of coin flips. If we rely on the samples alone, we can get an estimate we know is wrong, because we already know the prior probability.
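A minimal sketch of add-one (Laplace) smoothing for the coin-flip case; the counts are made up. With only a few flips the raw estimate can be badly wrong (three heads in three flips gives P(heads) = 1.0), and the pseudo-counts pull the estimate back toward the uniform prior.

    def smoothed_heads_probability(heads, flips, pseudo=1):
        # pseudo heads and pseudo tails act as "prior" observations
        return (heads + pseudo) / (flips + 2 * pseudo)

    print(3 / 3)                              # raw estimate from 3 heads in 3 flips: 1.0
    print(smoothed_heads_probability(3, 3))   # smoothed estimate: 0.8, closer to the 0.5 prior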

-Ivan Zhou


10/27/2011

In many cases, compression techniques are good learning techniques: effectively, they parametrically summarize the data.
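A rough illustration (my own example) of parametric summarization: 10,000 raw measurements are "compressed" into the two parameters of a fitted Gaussian, from which similar data can be regenerated.

    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.normal(loc=5.0, scale=2.0, size=10_000)   # raw data: 10,000 numbers

    mu, sigma = data.mean(), data.std()                  # the parametric summary: 2 numbers
    reconstructed = rng.normal(loc=mu, scale=sigma, size=10_000)

    print(round(mu, 2), round(sigma, 2))                 # close to the true 5.0 and 2.0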

-James Cotter

10/27/2011

Naive Bayes makes the assumption that all attributes are independent given the class. Without this assumption, the computation becomes much harder: if a node has more than one parent, the probability of that node must be specified for each configuration of its parents.
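A small sketch of why the assumption matters, using made-up attribute counts: without independence, a node must store a conditional probability entry for every configuration of its parents, so the table size grows exponentially with the number of parents.

    def parent_configurations(parent_cardinalities):
        """Number of parent configurations a node must condition on."""
        rows = 1
        for c in parent_cardinalities:
            rows *= c
        return rows

    # Naive Bayes: each attribute has only the (binary) class as its parent,
    # so each attribute's table conditions on just 2 configurations.
    print(parent_configurations([2]))        # 2

    # Without independence: an attribute with 10 binary parents needs 2^10 entries.
    print(parent_configurations([2] * 10))   # 1024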

-Bharath

10/27/2011

When performing classification, we can begin by assuming that each object
has a uniform chance of belonging to each category. As we gather more and
more samples, the estimates should be allowed to move away from this
assumption. We handle this by adding "virtual samples": we pretend we have
already received M samples distributed uniformly across the categories. As
the empirical sample size approaches and then exceeds M, the model begins
to "trust" its empirical samples more than the virtual samples.
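A minimal sketch of this virtual-sample (m-estimate) idea with made-up counts: M virtual samples are spread uniformly over the categories, and as the real counts grow past M the estimate is dominated by the empirical data.

    def m_estimate(count_in_category, total_count, num_categories, M):
        prior = 1.0 / num_categories          # uniform chance per category
        return (count_in_category + M * prior) / (total_count + M)

    M, K = 10, 4                              # 10 virtual samples over 4 categories
    print(m_estimate(2, 2, K, M))             # 2 of 2 observed: 0.375, still pulled toward 1/4
    print(m_estimate(200, 200, K, M))         # 200 of 200 observed: about 0.96, trusts the data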
~Kalin Jonas