calvin mccarter

machine learning, computer science, and more

There are lots of caveats with hierarchical clustering, and it's often used to draw unjustifiable conclusions. I'll save that discussion for later, though, because it's at least occasionally useful. So far, I've mainly used it to reorder the rows and columns of a covariance or correlation matrix. For example, I recently generated synthetic sparse precision matrices, and I wanted to make sure that the corresponding covariance/correlation matrices looked as I expected.

However, linkage/dendrogram in both Matlab and SciPy are really slow. In particular, the linkage algorithms they use are O(n^3). So instead we can use fastcluster, whose linkage implementations run in O(n^2). All I had to do was replace this:

import scipy.cluster.hierarchy as sch

Z = sch.linkage(P, method="average", metric="correlation")

with this:

import fastcluster

Z = fastcluster.linkage(P, method="average", metric="correlation")

The second problem is that dendrogram does lots of unnecessary work when all I want is the reordered indices. Worse, SciPy implements the tree traversal recursively, so on large inputs it fails with "RuntimeError: maximum recursion depth exceeded". So the second change is to replace this:

dendr = sch.dendrogram(Z, no_plot=True, distance_sort=True)
ix = dendr["leaves"]

with this (credit to a good denizen of StackOverflow):

n = len(Z) + 1            # number of original observations
cache = dict()            # maps cluster id -> list of leaf indices it contains
for k in range(len(Z)):
    c1, c2 = int(Z[k][0]), int(Z[k][1])
    # a child id < n is an original leaf; otherwise look up (and discard) its leaves
    c1 = [c1] if c1 < n else cache.pop(c1)
    c2 = [c2] if c2 < n else cache.pop(c2)
    cache[n + k] = c1 + c2    # the cluster formed at merge k gets id n + k
ix = cache[2 * len(Z)]        # the root cluster holds the full leaf ordering
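The iterative traversal above can be checked on a tiny handcrafted example. The linkage matrix `Z` below is made up for illustration; its rows follow SciPy's [child1, child2, distance, size] convention, where ids 0 through n-1 are the original points and merge k creates cluster id n + k:

```python
# Hand-built linkage matrix for 5 points (illustrative values only).
Z = [
    [0, 1, 0.1, 2],  # merge points 0 and 1 -> cluster 5
    [2, 3, 0.2, 2],  # merge points 2 and 3 -> cluster 6
    [5, 6, 0.5, 4],  # merge clusters 5 and 6 -> cluster 7
    [4, 7, 0.9, 5],  # merge point 4 with cluster 7 -> cluster 8 (root)
]

n = len(Z) + 1
cache = dict()
for k in range(len(Z)):
    c1, c2 = int(Z[k][0]), int(Z[k][1])
    c1 = [c1] if c1 < n else cache.pop(c1)
    c2 = [c2] if c2 < n else cache.pop(c2)
    cache[n + k] = c1 + c2
ix = cache[2 * len(Z)]

print(ix)  # [4, 0, 1, 2, 3]
```

Note that leaves merged early (like 0 and 1) end up adjacent in the ordering, which is exactly what we want when reordering a correlation matrix.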

Then it’s as simple as:

from matplotlib import pyplot

Pclust = P[ix, :]
Pclust = Pclust[:, ix]
pyplot.imshow(Pclust, interpolation="nearest")
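The symmetric reindexing step can also be sketched in pure Python (NumPy's fancy indexing does the same thing); the matrix and ordering here are toy values chosen for illustration:

```python
# Toy 3x3 correlation matrix and a leaf ordering (illustrative values only).
P = [[1.0, 0.2, 0.9],
     [0.2, 1.0, 0.1],
     [0.9, 0.1, 1.0]]
ix = [0, 2, 1]

# Apply the same permutation to rows and columns, so that
# strongly correlated variables (0 and 2 here) become adjacent.
Pclust = [[P[i][j] for j in ix] for i in ix]
# Pclust == [[1.0, 0.9, 0.2], [0.9, 1.0, 0.1], [0.2, 0.1, 1.0]]
```

Because the same permutation is applied to both axes, the result stays symmetric.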

This was originally published here: https://calvinmccarter.wordpress.com/2014/02/20/scaling-up-hierarchical-clustering/

This is an example of using a pivot to construct a confidence interval. Suppose

\[X_{1},...,X_{n} \sim \text{Uniform}(0,\theta).\]

1. Find a pivot.

Let \( Q=X_{(n)}/\theta\), where \( X_{(n)}=\max(X_1,\ldots,X_n)\). The distribution of \(Q\) does not depend on \(\theta\), which is what makes it a pivot.

2. Find its distribution:

\[ P(Q \le t)= P(X_{(n)} \le t\theta) = \prod_{i=1}^{n} P(X_i \le t\theta) = t^n, \qquad 0 \le t \le 1.\]

3. Find an expression involving an upper and lower bound on the pivot:

\[ P(a \le Q \le b) = b^n-a^n.\]

Since \( Q \le 1\) always, taking \( b=1\) gives

\[ P(a \le Q \le 1) = 1-a^n. \]

4. Substitute the expression for the pivot from Step 1, and set the RHS to \(1-\alpha \).

\[ P(a \le X_{(n)}/\theta \le 1)=1-a^n \]

\[ P(1/a \ge \theta/X_{(n)} \ge 1) = 1-a^n \]

\[ P( X_{(n)} \le \theta \le \frac{X_{(n)}}{a} ) = 1-a^n \]

Let \( 1-\alpha = 1-a^n\). Then \(a=\alpha^{1/n}\).

\[P(X_{(n)} \le \theta \le \frac{X_{(n)}}{\alpha^{1/n}})=1-\alpha \]

This gives us \( [X_{(n)},\frac{X_{(n)}}{\alpha^{1/n}}]\) as a \( 1-\alpha\) confidence interval for \( \theta\).
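As a sanity check, the coverage of this interval can be verified by simulation. The sketch below uses only the standard library; the particular values of \(\theta\), \(n\), \(\alpha\), and the trial count are arbitrary choices:

```python
import random

random.seed(0)
theta, n, alpha, trials = 3.0, 10, 0.1, 20000
a = alpha ** (1 / n)

covered = 0
for _ in range(trials):
    # Draw a sample of size n from Uniform(0, theta) and take its maximum.
    x_max = max(random.uniform(0, theta) for _ in range(n))
    # The derived interval is [X_(n), X_(n) / alpha^(1/n)].
    if x_max <= theta <= x_max / a:
        covered += 1

coverage = covered / trials  # should be close to 1 - alpha = 0.9
```

The empirical coverage lands near 0.9, matching the \(1-\alpha\) guarantee from the derivation.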


This was originally published here: https://calvinmccarter.wordpress.com/2013/11/06/confidence-intervals-from-pivots/

#ML
