# Research

My research is primarily concerned with the development of scalable,
high-performance algorithms for applications in data mining and machine
learning. *Irregular* applications are of particular interest to me, such as
those that operate on sparse graphs, matrices, and tensors.

My thesis work focuses on large-scale sparse tensor factorization and culminates in SPLATT, an open source software toolkit for tensor factorization and related kernels. SPLATT has been scaled to over 16,000 cores and is actively used by academic, industry, and government researchers.
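A recurring kernel in the publications below (e.g., "SPLATT: Efficient and parallel sparse tensor-matrix multiplication") is the matricized tensor times Khatri-Rao product (MTTKRP), the bottleneck of CP tensor factorization. As a rough illustration only — SPLATT itself uses a compressed sparse fiber structure rather than the naive coordinate format sketched here, and all names below are hypothetical — a baseline MTTKRP for a 3-mode tensor might look like:

```python
import numpy as np

# A small 3-mode sparse tensor in coordinate (COO) form:
# each row of `inds` is the (i, j, k) index of one nonzero.
inds = np.array([[0, 1, 2],
                 [1, 0, 2],
                 [2, 1, 0],
                 [0, 0, 1]])
vals = np.array([1.0, 2.0, 3.0, 4.0])

I, J, K = 3, 2, 3   # tensor dimensions
R = 4               # CP decomposition rank

rng = np.random.default_rng(0)
B = rng.random((J, R))  # factor matrix for mode 1
C = rng.random((K, R))  # factor matrix for mode 2

def mttkrp_mode0(inds, vals, B, C, I):
    """Naive MTTKRP for mode 0: for each nonzero X[i,j,k], accumulate
    val * (B[j,:] * C[k,:]) into row i of the output matrix."""
    M = np.zeros((I, B.shape[1]))
    for (i, j, k), v in zip(inds, vals):
        M[i, :] += v * B[j, :] * C[k, :]
    return M

M = mttkrp_mode0(inds, vals, B, C, I)
print(M.shape)  # (3, 4): one row per mode-0 index, one column per rank
```

Much of the work below is about doing better than this baseline: compressing the sparse tensor to avoid redundant multiplications, and partitioning the nonzeros for shared- and distributed-memory parallelism.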

## Awards & Honors

- ACM/IEEE-CS George Michael Memorial HPC Fellowship, 2017
- Euro-Par ’17 Distinguished Paper, “*Accelerating the Tucker Decomposition with Compressed Sparse Tensors*”
- HPEC ’17 GraphChallenge Finalist, “*Truss Decompositions on Shared-Memory Parallel Systems*”
- HPEC ’17 GraphChallenge Finalist, “*Exploring Optimizations on Shared-memory Platforms for Parallel Triangle Counting Algorithms*”
- SC ’16 Best Student Paper Finalist, “*An Exploration of Optimization Algorithms for High Performance Tensor Completion*”
- Doctoral Dissertation Fellowship, University of Minnesota, 2016–2017
- Outstanding Graduating Senior Award, University of Kentucky, 2012
- Student Internship Symposium Top Prize, Lexmark International, 2011

## Software Contributions

## Publications

### Book Chapters

- David C. Anastasiu, Jeremy Iverson, **Shaden Smith**, and George Karypis. 2014. Big Data Frequent Pattern Mining. In *Frequent Pattern Mining*. Switzerland: Springer International Publishing, 225–260. [paper] [bib]

### Journals

- **Shaden Smith**, Jongsoo Park, and George Karypis. 2017. HPC formulations of optimization algorithms for tensor completion. *Parallel Computing* (2017). [paper] [bib]
- David C. Anastasiu, Evangelia Christakopoulou, **Shaden Smith**, Mohit Sharma, and George Karypis. 2016. Big Data and Recommender Systems. *Novática: Journal of the Spanish Computer Scientist Association*, 240 (October 2016). [paper] [bib]

### Conferences & Refereed Workshops

- Jee W. Choi, Xing Liu, **Shaden Smith**, and Tyler Simon. 2018. Blocking Optimization Techniques for Sparse Tensor Computation. *32nd IEEE International Parallel & Distributed Processing Symposium (IPDPS’18)* (2018).
- **Shaden Smith**, Kejun Huang, Nicholas D. Sidiropoulos, and George Karypis. 2018. Streaming Tensor Factorization for Infinite Data Sources. *Proceedings of the 2018 SIAM International Conference on Data Mining (SDM’18)* (2018).
- **Shaden Smith**, Xing Liu, Nesreen K. Ahmed, Ancy Sarah Tom, Fabrizio Petrini, and George Karypis. 2017. Truss Decompositions on Shared-Memory Parallel Systems. In *IEEE High Performance Extreme Computing Conference (HPEC)*. **GraphChallenge Finalist**. [paper] [slides] [bib]
- Ancy Sarah Tom et al. 2017. Exploring Optimizations on Shared-memory Platforms for Parallel Triangle Counting Algorithms. In *IEEE High Performance Extreme Computing Conference (HPEC)*. **GraphChallenge Finalist**. [paper] [bib]
- **Shaden Smith** and George Karypis. 2017. Accelerating the Tucker Decomposition with Compressed Sparse Tensors. In *European Conference on Parallel Processing (Euro-Par ’17)*. **Distinguished Paper Award**. [paper] [slides] [bib]
- Michael Anderson et al. 2017. Bridging the Gap Between HPC and Big Data Frameworks. *Proceedings of the VLDB Endowment (PVLDB ’17)* (2017). [paper] [bib]
- **Shaden Smith**, Alec Beri, and George Karypis. 2017. Constrained Tensor Factorization with Accelerated AO-ADMM. In *46th International Conference on Parallel Processing (ICPP ’17)*. [paper] [slides] [bib]
- **Shaden Smith**, Jongsoo Park, and George Karypis. 2017. Sparse Tensor Factorization on Many-Core Processors with High-Bandwidth Memory. In *31st IEEE International Parallel & Distributed Processing Symposium (IPDPS’17)*. [paper] [slides] [bib]
- **Shaden Smith**, Jongsoo Park, and George Karypis. 2016. An Exploration of Optimization Algorithms for High Performance Tensor Completion. *Proceedings of the 2016 ACM/IEEE Conference on Supercomputing (SC’16)* (2016). **Finalist, Best Student Paper**. [paper] [slides] [bib]
- **Shaden Smith** and George Karypis. 2016. A Medium-Grained Algorithm for Distributed Sparse Tensor Factorization. In *30th IEEE International Parallel & Distributed Processing Symposium (IPDPS’16)*. [paper] [slides] [bib]
- **Shaden Smith**, Niranjay Ravindran, Nicholas D. Sidiropoulos, and George Karypis. 2015. SPLATT: Efficient and parallel sparse tensor-matrix multiplication. In *29th IEEE International Parallel & Distributed Processing Symposium (IPDPS’15)*. [paper] [slides] [bib]
- **Shaden Smith** and George Karypis. 2015. Tensor-Matrix Products with a Compressed Sparse Tensor. *Proceedings of the 5th Workshop on Irregular Applications: Architectures and Algorithms (IA3’15)* (2015), 7. [paper] [slides] [bib]
- Niranjay Ravindran, Nicholas D. Sidiropoulos, **Shaden Smith**, and George Karypis. 2014. Memory-efficient parallel computation of tensor and matrix products for big tensor decomposition. *Proceedings of the Asilomar Conference on Signals, Systems, and Computers* (2014). [paper] [slides] [bib]
- Yuliya Lierler, **Shaden Smith**, Miroslaw Truszczynski, and Alex Westlund. 2012. Weighted-sequence problem: ASP vs CASP and declarative vs problem-oriented solving. *Practical Aspects of Declarative Languages (PADL’12)* (2012), 63–77. [paper] [slides] [bib]

### Invited Talks & Posters

- **Shaden Smith** and George Karypis. 2018. Accelerating the Tucker Decomposition with Compressed Sparse Tensors. *SIAM Conference on Parallel Processing for Scientific Computing (PP’18), Minisymposium: Tensor Decomposition for High Performance Data Analytics* (2018).
- Ancy Sarah Tom, **Shaden Smith**, and George Karypis. 2018. Triangle Counting and Truss Decomposition on Modern Parallel Architectures. *SIAM Conference on Parallel Processing for Scientific Computing (PP’18), Minisymposium: Architecture-Aware Graph Analytics* (2018).
- **Shaden Smith**, Jongsoo Park, and George Karypis. 2017. An Exploration of Optimization Algorithms for High Performance Tensor Completion. *SIAM Conference on Computational Science and Engineering (CSE’17), Minisymposium: Tensor Decompositions: Applications and Efficient Algorithms* (2017).
- **Shaden Smith** and George Karypis. 2016. High Performance Sparse Tensor Factorization. *Intel Research, invited talk* (2016). [slides]
- **Shaden Smith**, Jongsoo Park, and George Karypis. 2016. An Exploration of Optimization Algorithms for High Performance Tensor Completion. *The 9th International Workshop on Parallel Matrix Algorithms and Applications (PMAA’16), Minisymposium: Sparse Matrix and Tensor Computations* (2016). [slides]
- **Shaden Smith** and George Karypis. 2016. Efficient Factorization with Compressed Sparse Tensors. *SIAM Conference on Parallel Processing for Scientific Computing (PP’16), Minisymposium: Parallel Algorithms for Tensor Computations* (2016). [slides]
- **Shaden Smith** and George Karypis. 2016. SPLATT: Enabling Large-Scale Sparse Tensor Analysis. (2016). [paper]
- **Shaden Smith** and Peter Robinson. 2013. LULESH and OpenACC: To Exascale and Beyond!!! *PGI OpenACC Workshop* (2013). [slides]
- **Shaden Smith** and Jerry Fish. 2012. 2010: A GPU Odyssey. *Lexmark Celebrate Success Seminar* (2012).
- **Shaden Smith** and Jerry Fish. 2011. Particle Flow Modeling or: How I Learned to Stop Worrying and Love DEM. *Lexmark Student Symposium* (2011).