Ongoing research projects in the SAMSI Program on Model Uncertainty.
To be announced in May 2019.

Summary:
In fall 2018, I presented my previous work (in collaboration with Ernest Fokoue) on music mining at the group meeting, and also submitted a paper [1] introducing the idea of representing any given piece of music as a collection of “musical words” that we codenamed “muselets”, which are essentially musical words of various lengths. Specifically, we construct a naive dictionary featuring a corpus of African American, Chinese, Japanese and Arabic music, on which we perform both topic modelling and pattern recognition. Although some of the results based on the naive dictionary are reasonably good, we anticipate substantially stronger predictive performance once we build a full-scale, complete version of our intended dictionary of muselets. The data fusion aspect of this work is that we create a uniform representation of music drawn from different sources and forms of musical data.
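A minimal sketch of the core idea, not the paper's actual pipeline: once each piece is tokenized into muselets, standard text-mining tools apply directly. The muselet tokens below are hypothetical placeholders.

# Sketch only: topic modelling over hypothetical "muselet" tokens
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

pieces = [
    "m12 m7 m7 m43 m12",   # e.g. an African American piece as muselet tokens
    "m88 m88 m21 m5 m21",  # e.g. a Chinese piece
    "m7 m43 m12 m12 m7",
    "m5 m21 m88 m5 m88",
]

# document-term matrix over the muselet vocabulary
vectorizer = CountVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(pieces)

# topic modelling on muselet counts, exactly as one would on words
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)  # per-piece topic proportions
print(theta.round(2))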

Read more »

Music and text are similar in that both can be regarded as carriers of information and emotion. People get daily information from reading newspapers, magazines, blogs, etc., and they can also keep a diary or personal journal to reflect on daily life, let out pent-up emotions, and record ideas and experiences. Music has the same power! Composers express their feelings through music with different combinations of notes, diverse tempos, and dynamic levels, as another form of language. All these similarities drive people to ask questions like:

  • Can music deliver information tantamount to text?
  • Can we efficiently apply text mining approaches to music?
  • Why does music from diverse cultures evoke so many different feelings in people?
  • What are the similarities between music from different cultures, composers, or genres?
  • To what extent do people grasp the meaning the composer expressed in each piece of music?
Read more »


Ongoing research projects in the SAMSI Program on Model Uncertainty.
To be announced in May 2019.

Mid-term summary:
In fall 2018, I mainly worked in the input distribution subgroup, where I demonstrated my initial exploratory analysis of KE synthetic storm tracks and compared the simulated tracks with the real IBTrACS storm data. One of the main discussions this semester was how to improve current practice with techniques from spatial statistics, such as using hierarchical models to improve the estimation of the input distribution, or spatio-temporal point process modelling of the storm occurrence rate. Various ongoing projects by different researchers were shared and discussed in weekly group meetings.
For spring 2019, we will (1) model storm evolution (e.g., the sudden change in storm characteristics at landfall, relative to the coastline, and how the simulated data handle this); and (2) consider how to impose structure to improve the estimation of the input distribution.
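As a toy illustration of the occurrence-rate idea (the counts below are hypothetical, not from KE or IBTrACS): under a homogeneous Poisson process, the maximum likelihood estimate of the annual storm rate is just the mean count per year, which gives a first-pass check of simulated tracks against the observed record.

# Hypothetical counts only -- a sketch of the occurrence-rate comparison
import numpy as np

observed_counts = np.array([12, 9, 15, 11, 13])    # storms/year, hypothetical
simulated_counts = np.array([10, 14, 12, 12, 11])  # storms/year from synthetic tracks

lambda_obs = observed_counts.mean()   # Poisson MLE of the occurrence rate
lambda_sim = simulated_counts.mean()
print(f"observed rate {lambda_obs:.1f}/yr vs simulated rate {lambda_sim:.1f}/yr")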

Read more »

Apply different topic modeling approaches to political blogs and compare the performance of the various methods.

Introduction

Types of Models in Comparison
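As a minimal sketch of the kind of comparison involved (the models and the toy snippets here are illustrative assumptions, not necessarily the ones used in this post): fit two topic models on the same corpus and inspect their top words.

# Illustrative comparison setup: LDA on term counts vs NMF on tf-idf
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation, NMF

blog_posts = [
    "the senate debated the budget bill",      # hypothetical blog snippets
    "voters discussed the election campaign",
    "the budget vote split the senate",
    "campaign ads targeted swing voters",
]

tf = CountVectorizer(stop_words="english").fit(blog_posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(tf.transform(blog_posts))

tfidf = TfidfVectorizer(stop_words="english").fit(blog_posts)
nmf = NMF(n_components=2, random_state=0).fit(tfidf.transform(blog_posts))

terms = tf.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print("LDA topic", k, [terms[i] for i in topic.argsort()[-3:]])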

Read more »

Continuing to learn Convolutional Neural Networks and Recurrent Neural Networks with TensorFlow in Jupyter Notebook.

RNN with TensorFlow

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline

# class: create the data and generate the batches to send back
class TimeSeriesData():

    def __init__(self, num_points, xmin, xmax):
        self.xmin = xmin
        self.xmax = xmax
        self.num_points = num_points
        self.resolution = (xmax - xmin) / num_points
        self.x_data = np.linspace(xmin, xmax, num_points)
        self.y_true = np.sin(self.x_data)

    def ret_true(self, x_series):
        return np.sin(x_series)

    def next_batch(self, batch_size, steps, return_batch_ts=False):
        # grab a random starting point for each batch
        random_start = np.random.rand(batch_size, 1)
        # convert the random fraction to a starting point on the time series
        ts_start = random_start * (self.xmax - self.xmin - (steps * self.resolution))
        # create the batch time series on the x axis
        batch_ts = ts_start + np.arange(0.0, steps + 1) * self.resolution
        # create the y data for the time series x axis from the previous step
        y_batch = np.sin(batch_ts)
        # formatting for RNN: y_batch and y_batch shifted one step into the future
        if return_batch_ts:
            return y_batch[:, :-1].reshape(-1, steps, 1), y_batch[:, 1:].reshape(-1, steps, 1), batch_ts
        else:
            return y_batch[:, :-1].reshape(-1, steps, 1), y_batch[:, 1:].reshape(-1, steps, 1)
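A quick usage check (my own sketch, not from the course notebook): create the series, pull one training batch, and plot input against target to see the one-step shift the RNN will learn.

# Sketch: one batch of (input, target) pairs, shifted by one step
ts_data = TimeSeriesData(num_points=250, xmin=0, xmax=10)
num_time_steps = 30
y1, y2, ts = ts_data.next_batch(batch_size=1, steps=num_time_steps, return_batch_ts=True)

plt.plot(ts.flatten()[:-1], y1.flatten(), 'r.', label='input (t)')
plt.plot(ts.flatten()[1:], y2.flatten(), 'b.', label='target (t+1)')
plt.legend()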
Read more »

Here I’ll give the theory part of Neural Networks, specifically three kinds of NN: normal (fully connected) Neural Networks, Convolutional Neural Networks, and Recurrent Neural Networks.

Neural Networks

For a single neuron:

$$\begin{align*} z &= Wx + b \\ a &= \sigma (z) \end{align*}$$
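A direct NumPy translation of the two formulas above (my own sketch; the weight and input values are hypothetical):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W = np.array([[0.5, -0.3]])   # weights: 1 neuron, 2 inputs (hypothetical values)
b = np.array([0.1])           # bias
x = np.array([1.0, 2.0])      # input vector

z = W @ x + b                 # z = Wx + b
a = sigmoid(z)                # a = sigma(z)
print(z, a)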

Activation Function

  • Perceptron: a binary classifier; small changes in the input are not reflected in the output (see the sketch below). $$\begin{align*} f(x) = \begin{cases} 1 & \text{if } Wx+b>0\\ 0 & \text{otherwise} \end{cases} \end{align*}$$
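A sketch of the perceptron's hard-threshold activation, matching the cases formula above (weights hypothetical):

import numpy as np

def perceptron(x, W, b):
    # hard threshold: output flips only when Wx + b crosses zero
    return 1 if (W @ x + b) > 0 else 0

W = np.array([1.0, -1.0])  # hypothetical weights
print(perceptron(np.array([2.0, 0.5]), W, b=-1.0))  # -> 1
# a small change in x leaves the output unchanged -- until it flips abruptly
print(perceptron(np.array([1.4, 0.5]), W, b=-1.0))  # -> 0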
Read more »

I’m learning how to use Google’s TensorFlow framework to create artificial neural networks for deep learning with Python, through a Udemy course. I’m using Jupyter Notebook to practice, and I blog my learning progress here.

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. This architecture allows users to deploy computation to one or more CPUs or GPUs, in a desktop, server, or mobile device with a single API (Application programming interface).
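A minimal example of the data-flow-graph idea, assuming the TensorFlow 1.x API that this course uses:

import tensorflow as tf

# nodes are operations, edges carry tensors
a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b  # builds a multiplication node; nothing is computed yet

# computation only happens when the graph is run in a session
with tf.Session() as sess:
    print(sess.run(c))  # -> 6.0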

Read more »

As a graduate student in Statistics, I profoundly believe in the power of programming skills for Statistics / Data Science. I have used R and Matlab as my default programming tools, and they are powerful enough to help me solve most mathematics-related tasks. However, I realize that in the Artificial Intelligence / Machine Learning realm, Python is more prevalent. To better grasp the gist of this mighty programming language, I will, over the course of learning it, show my LeetCode practice in Python 3 with explanations, as well as pitfalls I encounter.
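As a taste of the format (a classic problem chosen for illustration, not necessarily one from my list):

# LeetCode "Two Sum": return indices of two numbers adding up to target
def two_sum(nums, target):
    seen = {}                      # value -> index of values scanned so far
    for i, x in enumerate(nums):
        if target - x in seen:     # complement already seen: done
            return [seen[target - x], i]
        seen[x] = i
    return []

print(two_sum([2, 7, 11, 15], 9))  # -> [0, 1]
# Pitfall: the brute-force double loop is O(n^2); the dict lookup makes it O(n).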

Read more »

This chapter discusses k Nearest Neighbors (kNN) as a methodology for both classification and regression problems. The kNN method is a basic and easy-to-understand foundational machine learning and data mining technique. It is an excellent baseline machine learning technique, and it also allows many extensions. It usually performs reasonably well, and sometimes very well, when compared to more sophisticated techniques.

kNN classification means that the estimated class of a vector $\mathbf{x}$ is the most frequent class label in the neighborhood of $\mathbf{x}$. kNN classifiers are inherently multi-class, and are used extensively in applications such as image processing, character recognition, and general pattern recognition tasks. For kNN regression, the estimated response value of a vector $\mathbf{x}$ is the average of the response values in the neighborhood of $\mathbf{x}$.
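A bare-bones kNN classifier matching the description above (my own sketch, with tiny made-up data):

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Euclidean distance from x to every training point
    dists = np.linalg.norm(X_train - x, axis=1)
    neighbors = y_train[np.argsort(dists)[:k]]      # labels of the k closest
    return Counter(neighbors).most_common(1)[0][0]  # most frequent class wins

X_train = np.array([[0, 0], [0, 1], [5, 5], [6, 5]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([5, 6])))  # -> 1
# For kNN regression, replace the vote with neighbors.mean().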

Read more »

This chapter gives a basic introduction to Vapnik-Chervonenkis theory for binary classification. In a binary classification task, we want to know the functional relationship between $\mathbf{x}$ and $y$, or how to determine the “best” classifier from the available observations such that, given a new observation $\mathbf{x}^{new}$, we can predict its class $y^{new}$ as accurately as possible.

Universal Best

Constructing a classification rule that puts all the points in their corresponding classes can be dangerous for classifying new observations not present in the current collection. Finding a classification rule that achieves the absolute best on the present data is therefore not enough, since infinitely many more observations can be generated. Even the universally best classifier will make mistakes.
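The usual way to make “universal best” precise (standard VC-theory material, added here for concreteness) is the Bayes classifier and its risk: the Bayes classifier minimizes the misclassification risk over all possible rules, yet its risk is strictly positive whenever the classes overlap.

$$\begin{align*} f^{*}(\mathbf{x}) &= \begin{cases} 1 & \text{if } \Pr(y=1 \mid \mathbf{x}) > \frac{1}{2}\\ 0 & \text{otherwise} \end{cases} \\ R^{*} &= \mathbb{E}\left[\min\{\Pr(y=1 \mid \mathbf{x}),\, 1-\Pr(y=1 \mid \mathbf{x})\}\right] \end{align*}$$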

Read more »
