Pianist Zhai Qiongyan, who began her studies at the prestigious Curtis Institute of Music at age 12 and later earned her doctorate from the Manhattan School of Music, now divides her time between Shanghai and San Francisco. Known for her innovative approach to performance, she excels in interpreting contemporary composers and incorporates cutting-edge techniques such as the augmented piano and AI, continuously pushing the boundaries of modern music.
On September 28, Zhai will join forces with Jarosław Kapuściński, the Polish multimedia composer, pianist, and associate professor of composition at Stanford University, for a solo piano concert titled Searching for Chopin, featuring AI-enhanced audio-visual interaction. The concert will showcase five uniquely creative modern music pieces using the latest Steinway Spirio piano technology. Through high-definition visuals and a rich array of imagery, it aims to create a harmonious blend of sound and sight.
The friendship between Zhai and Kapuściński has spanned more than a decade, built on mutual support and a shared artistic vision. Their enduring bond has inspired not only significant professional achievements but also personal growth, reflected in their moving musical collaborations.
The first half of the concert features four highly imaginative and engaging compositions. Oli’s Dream explores the interaction between piano and typewriter keyboards, where the pianist brings letters, words, and music into dialogue. Juicy connects fruits to vibrant colors and geometric structures through music. Knowing Sound is inspired by Chinese cursive calligraphy, with the pianist’s hand movements mimicking brush strokes, reminiscent of the legendary tale of musical understanding between Bo Ya and Zhong Ziqi. Edge Effects, based on Kapuściński’s photography, comprises ten movements that delve into the intricate relationship between humanity and nature.
The second half features Searching for Chopin, composed by Kapuściński. This performance/installation art piece involves over 150 participants from 12 cities worldwide, all of whom were recorded while listening to Chopin’s Preludes, Op. 28. Their real-time facial expressions were captured and later reinterpreted and edited. During the live concert, the piano will perform autonomously, while the collected expressions will be displayed on a large screen, creating a unique fusion of music and human emotion.