Alejandro Pérez (former postdoc, now at Cambridge University), Philip J. Monahan (faculty), and colleague Matthew A. Lambon Ralph (Cambridge University) have a new paper in MethodsX, 8: "Joint recording of EEG and audio signals in hyperscanning and pseudo-hyperscanning experiments."
Hyperscanning is an emerging technique that allows for the study of similarities in brain activity between interacting individuals. This methodology has powerful implications for understanding the neural basis of joint actions, such as conversation; however, it also demands precise time-locking between the different brain recordings and the sensory stimulation, and such precision is often difficult to achieve. Recording the auditory stimuli jointly with the ongoing high-temporal-resolution neurophysiological signal offers an effective way to correct, offline, timing asynchronies between the digital trigger sent by the stimulation program and the actual onset of the auditory stimulus delivered to participants via speakers or headphones. This configuration is particularly challenging in hyperscanning setups because of the increased complexity of the methodology. In designs using the related technique of pseudo-hyperscanning, combined brain-audio recordings are also highly desirable, since the shared audio signal permits reliable offline synchronization of the separate recordings. Here, we describe two hardware configurations in which the auditory stimulus delivered in real time is recorded jointly with the ongoing electroencephalographic (EEG) signal. Specifically, we describe and provide customized implementations for joint EEG-audio recording in hyperscanning and pseudo-hyperscanning paradigms using hardware and software from Brain Products GmbH.
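To make the offline-correction idea concrete, here is a minimal sketch of how a trigger-to-sound latency could be recovered from an audio channel recorded alongside the EEG. This is an illustrative toy example, not the paper's implementation: the function name, the waveform, and all sample indices are hypothetical, and the simple dot-product cross-correlation stands in for whatever alignment method the authors actually use.

```python
# Illustrative sketch (not the paper's implementation): estimate the lag
# between a digital trigger and the true sound onset using an audio
# channel that was recorded jointly with the EEG. All names and numbers
# below are hypothetical.

def estimate_onset_lag(reference, recorded):
    """Return the non-negative sample lag at which the known stimulus
    waveform `reference` best matches the jointly recorded audio
    `recorded`, using a plain dot-product cross-correlation."""
    n = len(reference)
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(recorded) - n + 1):
        score = sum(r * s for r, s in zip(reference, recorded[lag:lag + n]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Hypothetical scenario: the stimulation program sent its trigger at
# sample 8, but the sound actually appears in the audio channel at
# sample 10, i.e. two samples later.
reference = [0.0, 1.0, -1.0, 0.5]               # known stimulus waveform
recorded = [0.0] * 10 + reference + [0.0] * 5   # EEG-locked audio channel
trigger_sample = 8

actual_onset = estimate_onset_lag(reference, recorded)
latency = actual_onset - trigger_sample

# Shift the event marker so later analyses are time-locked to the
# true acoustic onset rather than to the (early) digital trigger.
corrected_marker = trigger_sample + latency
```

In a pseudo-hyperscanning design, the same cross-correlation idea can be applied to the audio channel shared between two separate recordings to bring their clocks into register offline.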