*RESEARCH STATEMENT*

**In the past, I have
published in broad and diverse areas of:**

Ø __sensory processing, visual perception and
perceptual binding__;

§ **I started off studying how hyperacuity
may arise in sensory representation and how the resolution and magnification of
cortical maps are inversely related through dynamic self-organization models. I
then devoted significant effort to understanding motion perception, in
particular motion-based figure-ground segregation (as evidenced in random-dot kinematograms). Following up on research on the then-hot
“aperture problem”, I refined (with a correct mathematical formulation) a method
for separating orientation and direction tuning in a visual neuron’s response to
oriented drifting stimuli, explained an apparent puzzle (by mathematical
modeling) in direction-selective neurons’ responses to drifting random-dot
patterns, and demonstrated (via psychophysics experiments) that the “softness”
of an aperture determines whether motion integration or
motion contrast occurs. **
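The aperture problem itself admits a one-line statement (my schematic rendering in standard notation, not a quotation from the papers): a local motion sensor with unit normal $\mathbf{n}$ measures only the normal component $c$ of the true image velocity $\mathbf{v}$, leaving the tangential component undetermined:

$$\langle \mathbf{v}, \mathbf{n} \rangle = c, \qquad \mathbf{v} = c\,\mathbf{n} + \lambda\,\mathbf{t} \quad (\lambda \in \mathbb{R}\ \text{unknown}),$$

where $\mathbf{t} \perp \mathbf{n}$ lies along the contour’s orientation.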

§ **The highlight of this vision-related
research (dating back to my Ph.D. years) is a mathematical model of
(motion-based) perceptual binding. Using the tools of differentiable manifolds,
I proposed that figure-ground segregation and object “oneness” can, in computational terms,
be characterized as a constant vector field (under the Levi-Civita
connection), where the vector at each point of the manifold (2-D visual space)
represents the measurement of a motion sensor that is susceptible to the
aperture problem. Conceptualizing visual perception as having the
(mathematical) structure of a fiber bundle, with the base manifold being the 2-D
fronto-parallel visual space, has many advantages and remains my continued
research interest today. **
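Schematically (again my notation, not a quotation), the binding criterion says that the field $V$ of motion measurements over a region is covariantly constant:

$$\nabla_X V = 0 \quad \text{for every tangent direction } X,$$

where $\nabla$ is the Levi-Civita connection on the 2-D visual manifold; points whose vectors satisfy this parallelism are bound into one figure, and violations of it mark figure-ground boundaries.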


·
**The Theory of Signal Detection offers a theoretical framework for
separating the sensitivity and decision-bias aspects of simple detection and
discrimination tasks. The “non-parametric” estimate of the sensitivity and bias
factors is a popular alternative to the parametric estimate (based on the Gaussian
model); yet I found that the formula for calculating the so-called A’ index,
and even a purported correction of it, contain errors. I provided (with
computer-graphical help from a student, Shane Mueller) the definitive formula
for calculating what most people believe A’ should calculate, and have recently
extended it to the case in which experimental manipulations yield a second point in
ROC space. **
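For contrast, the parametric (Gaussian-model) estimates that the “non-parametric” A’ index is meant to sidestep can be sketched in a few lines of Python (textbook formulas, not the corrected A’ formula from the paper):

```python
from statistics import NormalDist

# Gaussian-model TSD estimates: sensitivity (d') and bias (criterion c)
# computed from a hit rate and a false-alarm rate.
z = NormalDist().inv_cdf   # probit transform

def d_prime(hit_rate, fa_rate):
    """Sensitivity: separation of signal and noise means, in SD units."""
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Decision bias: placement of the criterion relative to neutrality."""
    return -0.5 * (z(hit_rate) + z(fa_rate))

sens = d_prime(0.84, 0.16)    # roughly 2 SD units of separation
bias = criterion(0.84, 0.16)  # roughly 0: an unbiased observer
```

The point of the framework is exactly this separation: the same pair (H, F) yields one number for discriminability and an independent number for response bias.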

·
**I have looked into Sylvan Kornblum’s conceptualization of stimulus-response
compatibility, and collaborated
with him and his associates (including his student Huazhong
Zhang) on developing a connectionist model of S-R compatibility effects. A
by-product is a cute mathematical proof that a linear plot in the so-called
“distributional analysis” of two RT distributions actually reveals that they
are members of a location-scale family (contrary to what the analysis was
intended to reveal). **
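The location-scale fact can be illustrated with a small simulation (an illustrative sketch, not the proof): if one RT distribution is a shift-and-scale transform of another, the quantile-quantile plot of one against the other is a straight line whose slope is the scale and whose intercept is the shift.

```python
import random

# Two RT distributions from the same location-scale family: Y = 300 + 150 X.
random.seed(1)
x_quantiles = sorted(random.expovariate(1.0) for _ in range(5000))
y_quantiles = [300.0 + 150.0 * q for q in x_quantiles]

# Least-squares line through the quantile-quantile plot.
n = len(x_quantiles)
mx = sum(x_quantiles) / n
my = sum(y_quantiles) / n
sxx = sum((x - mx) ** 2 for x in x_quantiles)
sxy = sum((x - mx) * (y - my) for x, y in zip(x_quantiles, y_quantiles))
slope = sxy / sxx
intercept = my - slope * mx

assert abs(slope - 150.0) < 1e-6      # recovered scale factor
assert abs(intercept - 300.0) < 1e-6  # recovered location shift
```

With independent samples from the two distributions (rather than an exact transform, as here) the plot is linear up to sampling noise, which is the pattern the “distributional analysis” detects.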

·
**Since 2007, I have developed a strong interest in understanding the neural
processes that mediate sensorimotor “decisions”. The experimental setting is
that of trial-by-trial recording of neural activity (single neuron, ERP,
fMRI, etc.) while a behaving animal or human carries out a simple S-R
task. I advanced three methods, which can
be used in orthogonal fashion, to determine whether a recorded neural response
should be considered related
to the online processing of the stimulus, the response, or the decision that mediates
the two. First, a TSD-based index for the sensorimotor “locus” of a neuron can be
calculated when sufficient error or “anti-target” trials are gathered. Second,
simultaneous orthogonal contrasts of neural responses may be constructed to
yield a spherical visualization that exhaustively maps out data patterns while
preserving the topology of the interval-scale data type. (I termed it “Locus
Analysis” upon the recommendation of a UM colleague who believed such naming
would be good for my tenure case. The method was later applied to analyze other
neuronal population data.) Third, I developed a time-series method to
separate stimulus and response waveforms in stimulus-locked and response-locked
ERP averages. With the help of a diligent student (Gang Yin), this method has
recently been refined to control for noise robustly, as well as extended to
situations with more than two behavioral time-marks. **
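The premise behind the third method can be shown with a toy simulation (hypothetical waveforms `S` and `R`, not the published algorithm): in a stimulus-locked average, the response-aligned component is smeared, i.e. convolved with the RT distribution, which is why the two waveforms must be separated rather than read off one average.

```python
import random

# Each trial is S(t) + R(t - rt): a stimulus-locked component plus a
# response-locked component shifted by that trial's reaction time.
random.seed(0)
T = 200                                   # samples per trial

def S(t):                                 # stimulus-locked component
    return 1.0 if 20 <= t < 40 else 0.0

def R(t):                                 # response-locked component
    return -1.0 if -10 <= t < 10 else 0.0

rts = [random.randint(80, 120) for _ in range(2000)]  # simulated RTs

# Empirical stimulus-locked average across trials.
avg = [sum(S(t) + R(t - rt) for rt in rts) / len(rts) for t in range(T)]

# Prediction: S plus R convolved with the RT histogram.
hist = {rt: rts.count(rt) / len(rts) for rt in set(rts)}
pred = [S(t) + sum(p * R(t - rt) for rt, p in hist.items()) for t in range(T)]

assert max(abs(a - b) for a, b in zip(avg, pred)) < 1e-9
```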

§ **Reflecting my interest in preference ordering (ranking) and
its aggregation are two papers in JMP in which 1) I re-invented the well-known combinatorial object the “permutahedron”
(after its first discovery by Schoute in 1911!), but this time gave it the proper
interpretation as the space of Borda counts and
linked it with other choice paradigms; and 2) we re-examined Chichilnisky’s
topological approach to Arrow’s impossibility theorem and, invoking non-Hausdorff topology (due primarily to my student Matt
Jones), proved possibility results when null-preference is included in the
output (but not the input) of social aggregation.
**
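The permutahedron interpretation is easy to demonstrate (a minimal sketch, taking the Borda counts of n candidates to be 1..n):

```python
from itertools import permutations

# Every ranking of n candidates yields a vector of Borda scores; these
# vectors are the vertices of the permutahedron, and they all lie on one
# hyperplane (constant coordinate sum).
n = 4
vertices = set(permutations(range(1, n + 1)))   # one vertex per ranking

assert len(vertices) == 24                      # 4! = 24 rankings
total = n * (n + 1) // 2                        # 1 + 2 + 3 + 4 = 10
assert all(sum(v) == total for v in vertices)   # common hyperplane
```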


§ **I collaborated with a highly motivated student, Greg
Stevens (with whom I have unfortunately lost contact), in proposing a dynamic-system
model of infant attachment characterizing the dyadic interaction between
the infant and primary caregiver in terms of an arousal
system and a soothing neurochemical system. The insights were his and the math was
mine. **

§ **Game theory, as a normative theory of social interaction, is
both fascinating and disappointing in illuminating the notion of rationality.
In joint work with my student Matt Jones, we showed how cooperation in a prisoner’s dilemma game can arise as an individually optimal
strategy if players (with a non-zero decision horizon) consider possible future
interactions in maximizing their own reward. With respect to recursive (“I think you
think I think …”, or theory-of-mind) reasoning in games, my students (Trey Hedden and others) and I developed a series of three-step,
sequential-move games that allow us to test shallow vs.
deep (recursive) reasoning in subjects. Aside from laboratory studies, I also
invoked the notion of meta-games for modeling military strategic engagements
(in collaboration with UM student Alex Chavez).**
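The flavor of the cooperation result can be conveyed with textbook folk-theorem arithmetic (an illustrative sketch with standard payoff labels T > R > P > S, not the actual model from the paper): against a grim-trigger partner, mutual cooperation beats a one-shot defection whenever the discount factor on future interactions is high enough.

```python
# Standard prisoner's dilemma payoffs: temptation, reward, punishment, sucker.
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def v_cooperate(d):
    return R / (1 - d)             # cooperate forever, discount factor d

def v_defect(d):
    return T + d * P / (1 - d)     # defect once, punished forever after

threshold = (T - R) / (T - P)      # = 0.5 with these payoffs
assert v_cooperate(0.6) > v_defect(0.6)   # patient player: cooperate
assert v_cooperate(0.4) < v_defect(0.4)   # impatient player: defect
```

A longer decision horizon (larger d) plays the role of the “possible future interactions” in the text: it tips the individually optimal strategy toward cooperation.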

Ø reinforcement learning and kernel methods in
machine learning;

§ **My interest in reinforcement learning started around 1997
(owing to stimulating discussions with an exchange student, Min Chang). Some
discoveries were made, but “perished” before they became “published”, due to
the rapid pace of the field around that time and some personal distractions. The
one finding that did not get sufficient attention from the field, but was recently
published, is the observation that adaptive learning by individual agents via the
Law of Effect (which underlies reinforcement learning) is, at the ensemble level,
equivalent to naïve Bayesian dynamics (put alternatively, replicator dynamics has
a Bayesian interpretation). In recent collaborations with Kent Berridge and
Wayne Aldridge, we looked at the coding of ventral pallidal
neurons and examined the role of motivation (incentive salience) in RL.**
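The ensemble-level equivalence can be checked in a few lines (a schematic check with made-up numbers): one discrete replicator step with fitnesses f is exactly a Bayes update with likelihoods proportional to f.

```python
# Population shares play the role of a prior over strategies; payoffs play
# the role of likelihoods (defined only up to a common scale factor).
shares = [0.5, 0.3, 0.2]
f = [1.0, 2.0, 4.0]

# Discrete replicator step: shares grow in proportion to relative fitness.
mean_f = sum(s * fi for s, fi in zip(shares, f))
replicator = [s * fi / mean_f for s, fi in zip(shares, f)]

# Bayes update: posterior proportional to prior times likelihood.
joint = [s * fi for s, fi in zip(shares, f)]
total = sum(joint)
bayes = [j / total for j in joint]

assert all(abs(a - b) < 1e-12 for a, b in zip(replicator, bayes))
```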

§ **With the help of an incredibly talented mathematics
postdoc, Dr. Haizhang Zhang (who got his Ph.D. from Yuesheng
Xu of Syracuse), and who basically single-handedly
completed all the technical proofs in our joint papers, we scrutinized the role the
inner-product operator plays in the reproducing-kernel Hilbert space (RKHS)
method, and found that the entire reproducing-kernel framework can be
generalized to a Banach space B. The trick is to
invoke the notion of a semi-inner-product on B, which is essentially the pull-back of a linear functional (in B*) via a generally
non-linear duality mapping between B and B*. The notion of a s.i.p.
turns out to be useful for defining and studying
frames and Riesz bases in Banach
spaces, and can be further generalized
to that of a generalized s.i.p. reflecting
a generalized duality mapping, and so on. This is an exciting area that we
continue to work on. **
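For concreteness, the Lumer–Giles axioms for a semi-inner-product $[\cdot,\cdot]$ on a Banach space $B$ (standard definitions, restated here from memory rather than quoted from the papers) are:

$$[x + y, z] = [x, z] + [y, z], \qquad [\lambda x, y] = \lambda [x, y],$$
$$[x, x] = \|x\|^{2}, \qquad |[x, y]| \le \|x\|\,\|y\|.$$

Unlike a true inner product, $[\cdot,\cdot]$ need not be additive or conjugate-symmetric in its second argument; that asymmetry is exactly where the non-linear duality mapping between $B$ and $B^{*}$ enters.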


Ø information
geometry, convexity and duality.

§ **Information geometry provides an elegant framework for
understanding asymmetric “distances” (called “divergence functions”), with many
applications in statistics, engineering, optimization, and machine learning. In
work that I am proud of, I showed how a general class of divergence
functions (encompassing most known families) can be constructed from a
strictly convex function, and that such convex-based divergence functions
precisely induce the alpha-geometry of Amari et al. I then further elucidated
the connection between convex functions and the dual differential geometry they
induce. I also extended the information-geometric construction, in terms of
formulae, of the metric and affine connections to infinite-dimensional manifolds
(admittedly, with non-trivial caveats related to the topology of
infinite-dimensional spaces). **
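The convex-based construction can be stated compactly (my shorthand for the published formulas): given a strictly convex function $\Phi$, the familiar Bregman divergence

$$D_{\Phi}(x, y) = \Phi(x) - \Phi(y) - \langle \nabla\Phi(y),\, x - y \rangle$$

generalizes to a one-parameter family

$$D_{\Phi}^{(\alpha)}(x, y) = \frac{4}{1-\alpha^{2}}\left[\frac{1-\alpha}{2}\,\Phi(x) + \frac{1+\alpha}{2}\,\Phi(y) - \Phi\!\left(\frac{1-\alpha}{2}\,x + \frac{1+\alpha}{2}\,y\right)\right],$$

whose non-negativity is immediate from Jensen’s inequality applied to the convex $\Phi$, and which recovers the Bregman divergences in the limits $\alpha \to \pm 1$.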

§ **By carefully separating reference-duality and representational
duality in the alpha-geometry, I advanced the notion of reference-representation
biduality, which I believe is of fundamental importance
to information science. This work parallels my strong urge for a deeper
appreciation of duality in functional analysis and kernel methods (via the
semi-inner-product and the duality mapping).
**
