VIENNATALK2020: FOURTH VIENNA TALK ON MUSIC ACOUSTICS
PROGRAM FOR MONDAY, SEPTEMBER 12TH

09:00-12:00 Session 2: woodwinds
09:00
Observations of hysteretic behavior in the recorder

ABSTRACT. It is known that flue instruments can exhibit hysteretic behavior, by which we mean that the tone produced by a particular fingering with a particular blowing pressure can depend on the prior history of the blowing pressure. However, the factors that determine and affect such behavior are not well understood. We report studies of this hysteresis using experiments and Navier-Stokes-based simulations of a simple instrument consisting of a commercial soprano recorder head (Yamaha YRS23) connected to a cylindrical resonator. This instrument had no tone holes, and a total length and fundamental frequency f1 (~520 Hz) equal to those of a soprano recorder. Our experimental and simulation studies, in which the blowing pressure is swept either upwards or downwards in time, both show that the thresholds for switching from C5 to C6 (dominant spectral component ~2xf1) or G6 (~3xf1) can be quite different for upward and downward sweeps. Both studies also find that, at pressures between the upward and downward transition points, the same final blowing conditions can produce steady and stable tones with very different spectral contents depending on the blowing history. The observation of similar hysteretic behavior in both experiments with, and simulations of, such a simple instrument geometry makes this an ideal system for understanding this behavior at a fundamental level.

Supported in part by U.S. National Science Foundation grant PHY1806231.
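
A minimal sketch of how a hysteresis loop in the dominant spectral component could be mapped from pressure-sweep data of the kind described above. `play_tone(p)` is a hypothetical stand-in for either the blowing-machine experiment or the Navier-Stokes simulation; it is assumed to return a steady-state sound segment at blowing pressure p, sampled at `fs`, and the pressure range is assumed.

```python
import numpy as np

fs = 44100  # sample rate in Hz (assumed)

def dominant_frequency(signal, fs):
    """Return the frequency of the strongest spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return freqs[np.argmax(spectrum)]

def sweep(pressures, play_tone):
    """Dominant component for each blowing pressure, in sweep order."""
    return [(p, dominant_frequency(play_tone(p), fs)) for p in pressures]

# Example usage (play_tone supplied by experiment or simulation wrapper):
# pressures = np.linspace(100.0, 600.0, 50)      # Pa, assumed range
# upward = sweep(pressures, play_tone)           # thresholds on the way up
# downward = sweep(pressures[::-1], play_tone)   # typically different thresholds
```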

09:20
Lattice Boltzmann Modeling of a Single-Reed Instrument Using a Time-Domain Impedance Boundary Condition

ABSTRACT. The Lattice Boltzmann (LB) method has proven to be a useful tool for the aeroacoustic study of single-reed instruments. However, such aeroacoustic models tend to be computationally expensive, as they require both the instrument and the radiation domain to be discretized in order to capture the fully-coupled fluid-solid-acoustic interactions. Aiming at more efficient computational aeroacoustic simulations, a Characteristic-based Time-Domain Impedance Boundary Condition (C-TDIBC) is proposed in this paper with application to a single-reed instrument. The C-TDIBC allows a lumped representation of a linear acoustic system to be specified as a boundary condition of an LB model. The coupling between the acoustic system and the LB domain is achieved by applying the characteristic boundary condition, which breaks down the lattice Boltzmann solutions at the boundary into separate characteristic waves, including the entropy wave, the shear wave and the incoming and outgoing acoustic waves. The outgoing acoustic wave is input to a resonator model, which returns the reflected incoming acoustic wave that is fed back to the LB domain to complete the boundary condition. In this particular work, the air column of the instrument and the radiation domain are characterized in terms of an input impedance, and the resonator model is implemented as a recursive parallel filter structure designed to fit either a theoretical or measured input impedance. The C-TDIBC is validated in an LB model of an ideal pipe by comparing the simulated input impedance in the LB domain to that applied to one end of the pipe. In the single-reed instrument simulation, a two-dimensional mouthpiece is implemented using the LB method with the reed modeled as a one-dimensional beam. The C-TDIBC is applied at the end of the mouthpiece to provide feedback from the acoustic field. The proposed method is applied to the single-reed instrument by assuming the bore of the instrument to be a linear acoustic resonator whose acoustic response is unaffected by the upstream flow field.
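
A frequency-domain sketch of the resonator-coupling idea behind such an impedance boundary condition: an input impedance Z(ω) is converted into a reflection function R(ω) that maps the outgoing acoustic wave onto the reflected incoming wave. The paper realises this as a recursive parallel filter in the time domain; here only the underlying relation is illustrated, for an ideal open cylindrical pipe with assumed dimensions.

```python
import numpy as np

rho, c = 1.2, 343.0            # air density (kg/m^3) and speed of sound (m/s)
radius, length = 0.008, 0.30   # assumed pipe geometry (m)
S = np.pi * radius**2
Zc = rho * c / S               # characteristic (plane-wave) impedance

f = np.linspace(20.0, 4000.0, 2000)
k = 2 * np.pi * f / c
Z_in = 1j * Zc * np.tan(k * length)      # ideal open-ended, lossless cylinder

R = (Z_in - Zc) / (Z_in + Zc)            # pressure reflection coefficient
# In the boundary condition, p_in(w) = R(w) * p_out(w); a time-domain model
# realises this product as a causal recursive filter fitted to R(w).
```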

09:40
Measuring saxophone reed motion and strain with stroboscopic digital image correlation

ABSTRACT. Several non-contact optical techniques have been used to investigate the dynamic movement of instrument reeds. Holographic techniques have the advantage that they are very sensitive, but for large reed motions (for instance at lower notes or high blowing pressures) the reed motion amplitudes are too large to measure directly, and rather complicated methods are needed to adapt the sensitivity range. Laser Doppler velocimetry has a broad dynamic range, but it is essentially a single-point technique, so time-consuming scanning is needed to measure the motion of the full surface of the reed. Both techniques gather data in the reference frame of the apparatus, making it impossible to obtain strain data. Digital Image Correlation (DIC) makes use of optical texture on the object, thus obtaining displacement data in the reference frame of the object itself. From such data, strain fields can be obtained directly. On objects such as instrument reeds, a speckle pattern needs to be applied to provide the optical texture necessary for the DIC measurements. DIC allows measurement of both the static deformation of the reed, caused by lip force and airflow, and the deformation as a function of time of a vibrating reed. In this contribution we will present results of stroboscopic DIC measurements on a vibrating saxophone reed, mounted on a mouthpiece which is coupled to a real instrument, using an artificial lip. Using an advanced triggering technique, a large number of full-field deformation measurements can be obtained in just a few seconds, so it is possible to investigate the spectral composition of the vibration pattern. We will demonstrate in which circumstances the data can be used to determine full-field strain fields, and how the reed deformation relates to the internal acoustic pressure in the mouthpiece.

10:00
Mass Personalization of Saxophone Mouthpieces with Digital Manufacturing

ABSTRACT. Advancements in digital manufacturing technologies allow rapid changes in product design and manufacturing by integrating information systems with flexible manufacturing processes. Such advancements make it possible to provide highly personalized yet affordable products to customers on demand. In this regard, personalization may add value and present new opportunities in musical instrument making, due to the personal nature of the instruments in terms of both ergonomics and performance. A prominent example of this personal need is the selection of saxophone mouthpieces, since saxophonists often seek the one that provides the sound they wish for or fits their playing habits. The personalization of 3D-printed saxophone mouthpieces therefore presents a great opportunity to fine-tune the performance of the mouthpiece to the expectations of the musician. A remaining challenge, however, is the lack of quantitative knowledge on mouthpiece design. This study aims to develop a design template for an alto saxophone mouthpiece that can be adjusted to the specific needs of musicians. This design template is based on a mass personalization design methodology, in which the design parameters of the mouthpiece are connected to the performance needs of players. An acoustical analysis is carried out to obtain quantitative relations between design parameters and mouthpiece performance. To that aim, twenty-seven 3D-printed mouthpieces with nine varying design parameters (such as the tip opening, the baffle height, etc.) are tested using an artificial blowing machine to determine their effects on four selected performance aspects or mouthpiece features (loudness, brightness, resistance and flexibility). The experimental analysis reveals that seven of the tested design parameters affect the mouthpiece performance to varying degrees, the most prominent being the chamber size, the baffle height, the lay length and the tip opening. The influence of the design parameters on the mouthpiece features, based on statistical analysis, is implemented in a design template. According to the requirements of each player, an algorithm modifies the design parameters and generates a personalized design that is manufacturable. This personalization design methodology is then tested via a user study with five saxophonists. The players in the user study confirm the performance variance in seven out of ten cases, and they prefer the personalized mouthpieces in four out of five cases. The results of this study contribute to the understanding of mouthpiece design, while providing valuable insights for personalization and the use of digital manufacturing in instrument making.

10:20
The duduk: cylindrical oboe or double reed clarinet?

ABSTRACT. How should the duduk, an emblem of Armenian music, be classified? The analysis of its spectrum shows that it can be considered neither as a clarinet, because of a very prominent second harmonic, nor as an oboe, because the spectrum is dominated by the fundamental. A quick geometric analysis shows that although the bore of the instrument itself (i.e. without the reed) is cylindrical, the instrument cannot be treated as a simple cylinder, because the reed has a much larger diameter than the pipe. The resonator therefore presents strongly inharmonic resonances, and the question that arises is: how can it produce periodic oscillations, and what is the link between the playing frequency and the resonance frequencies of the pipe? We show that the particularity of the duduk is to have a reed whose resonance frequency lies between the first and the second resonance frequency of the acoustic resonator (reed volume + pipe), together with a subcritical quality factor (Q<0.5). The playing frequency is then lower than the first acoustic resonance frequency. We show that, once the parameters of the reed and the resonator are known, the classical model used for reed instruments with a single acoustic mode makes it possible to predict the playing frequency.
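
A minimal sketch, with assumed geometry and reed volume, of why a wide reed makes the resonances inharmonic: the reed cavity is lumped as an acoustic compliance shunted across the input of an ideally open cylindrical pipe, and the peaks of the combined input impedance are located numerically. A small imaginary part of the wavenumber mimics losses and keeps the peaks finite.

```python
import numpy as np
from scipy.signal import find_peaks

rho, c = 1.2, 343.0
L, radius = 0.35, 0.006          # assumed pipe length and radius (m)
V_reed = 5e-6                    # assumed equivalent reed volume (m^3)

S = np.pi * radius**2
Zc = rho * c / S
f = np.linspace(50.0, 3000.0, 20000)
k = (2 * np.pi * f / c) * (1 - 0.005j)

Y_pipe = 1.0 / (1j * Zc * np.tan(k * L))              # ideal open-ended cylinder
Y_reed = 1j * 2 * np.pi * f * V_reed / (rho * c**2)   # lumped reed compliance
Z_total = 1.0 / (Y_pipe + Y_reed)

peaks, _ = find_peaks(np.abs(Z_total))
print(f[peaks][:4])   # no longer close to the odd-harmonic ratio 1:3:5:...
```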

10:40
A transfer matrix for the cone with losses

ABSTRACT. In musical wind instruments, acoustical pipes serve as resonators in coupled self-oscillations with the excitation mechanism. Knowledge of the pipes' acoustical behaviour is helpful to interpret the playing characteristics and necessary to build physical models of such instruments. For a cylindrical pipe, the acoustical behaviour in the frequency domain can be described in terms of a transfer matrix expressing the pressure response to an imposed flow excitation. With regard to the frequency precision required in musical acoustics, and to the non-linearity of the excitation mechanism, it is important to take visco-thermal wall losses into account. For cylindrical pipes, the one-dimensional wave equation can be modified to account for the effect of energy dissipation at the walls, which directly leads to a well-known transfer matrix (Benade, JASA 1968, https://asa.scitation.org/doi/abs/10.1121/1.1911130). For conical pipes, however, the treatment of wall losses requires a much more complex theory. For many years this complexity has been circumvented by approximating the cone by piecewise cylindrical slices with well-defined transfer matrices. Recent works by Thibault and Chabassier (https://hal.inria.fr/hal-02917351/) present excellent elaborations of the lossy cone theory starting from the fundamental physical laws. These support the widely accepted assumption that the pragmatic slicing approach is fully sufficient for typical scenarios in musical acoustics. We want to add to this discussion a transfer matrix formulation of the cone which is derived from a linearized, one-dimensional wave equation for the lossy cone, presented in the pioneering work of Nederveen in 1969 (https://repository.tudelft.nl/islandora/object/uuid:01b56232-d1c8-4394-902d-e5e51b9ec223). In this contribution, we show Nederveen's original formula rewritten in today's nomenclature and compare it to more recent approaches. With low computational effort, it provides reasonably good agreement with other existing solutions.
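
A sketch of the piecewise-cylindrical baseline that the abstract compares against: each thin slice of the cone is given the well-known lossy-cylinder transfer matrix, and the slices are chained by matrix multiplication. The wall-loss term uses a common first-order wide-tube approximation (attenuation roughly 3e-5·sqrt(f)/a per metre), not Nederveen's formula, and the geometry values are assumed for illustration.

```python
import numpy as np

rho, c = 1.2, 343.0

def lossy_cylinder_tm(a, L, f):
    """Transfer matrix of a cylinder of radius a and length L at frequency f."""
    S = np.pi * a**2
    Zc = rho * c / S
    alpha = 3e-5 * np.sqrt(f) / a           # visco-thermal attenuation (1/m)
    gamma = alpha + 1j * 2 * np.pi * f / c  # complex propagation constant
    return np.array([[np.cosh(gamma * L), Zc * np.sinh(gamma * L)],
                     [np.sinh(gamma * L) / Zc, np.cosh(gamma * L)]])

def cone_input_impedance(a_in, a_out, L, f, n_slices=200):
    """Input impedance of a truncated cone, ideally open at the wide end."""
    radii = np.linspace(a_in, a_out, n_slices + 1)
    T = np.eye(2, dtype=complex)
    for i in range(n_slices):
        a_mid = 0.5 * (radii[i] + radii[i + 1])
        T = T @ lossy_cylinder_tm(a_mid, L / n_slices, f)
    Z_load = 0.0                            # ideal open end (no radiation)
    return (T[0, 0] * Z_load + T[0, 1]) / (T[1, 0] * Z_load + T[1, 1])

freqs = np.linspace(50.0, 2000.0, 500)
Z = [cone_input_impedance(0.002, 0.02, 0.5, f) for f in freqs]
```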

11:00
Spectrum difference between the German Fagott and the French Basson

ABSTRACT. The French "Basson" and the German "Fagott" are two descendants of the baroque bassoon. They evolved differently by the addition of tone-holes, the elongation of the main bore, and the modification of the reed shape. These two cousins are played with different fingerings and have a slightly different sound color: the French Basson has the reputation of having a more nasal and less homogeneous timbre across the pitch range.

The aim of this presentation is to identify the sound differences between one Fagott and one Basson and to relate them to their geometry through acoustic considerations. The two studied instruments have been played by a professional Fagott player who is familiar with the French Basson. Each bassoon is played with two reeds: its own cane reed and a single plastic reed common to both instruments, all equipped with a pressure sensor. The external sound and the reed pressure are recorded for musical excerpts and a chromatic scale, allowing, for each signal, the computation of the mean spectrum over specific note ranges (e.g. the first register). These recordings are complemented by a set of geometric measurements (main bore and holes) and impedance measurements for each fingering of both instruments. Some notes are also played by an artificial mouth on both instruments with similar control parameters, avoiding the adaptation of these parameters to the played instrument by the musician.

This set of measurements allows the quantification of the differences between the two instruments in terms of acoustic properties and radiated spectrum. In addition, it gives the possibility to compute the transfer function between the reed spectrum and the external sound spectrum. The evolution of this quantity along the frequency axis can be related to manufacturing elements such as the length of the holes' chimneys, the dimensions of the radiating openings and the associated radiation impedance, the tone-hole lattice, etc. This transfer function is also computed from the geometry of the instrument by using a simple wave propagation model.
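
A sketch of one way such a reed-pressure-to-external-sound transfer function could be estimated from the synchronised recordings described above, using an H1 estimator (cross-spectrum divided by auto-spectrum). `p_reed` and `p_ext` are assumed to be time-aligned NumPy arrays from the reed sensor and the external microphone, sampled at `fs`.

```python
import numpy as np
from scipy.signal import csd, welch

def transfer_function(p_reed, p_ext, fs, nperseg=8192):
    f, S_xy = csd(p_reed, p_ext, fs=fs, nperseg=nperseg)
    _, S_xx = welch(p_reed, fs=fs, nperseg=nperseg)
    H = S_xy / S_xx                       # H1 estimate of the transfer function
    return f, 20 * np.log10(np.abs(H))    # magnitude in dB
```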

11:20
Acoustical Analysis of the Chinese Transverse Flute (dizi) Using the Transfer Matrix Method

ABSTRACT. The dizi is a flute-like traditional Chinese wind instrument with a cylindrical bore and a series of holes along its length. In addition to the common embouchure hole and six finger holes, there is a membrane hole located between the embouchure hole and the uppermost finger hole, and four extra toneholes placed near the bottom of the bore, which are always open to the air. The wrinkled membrane that covers the membrane hole is believed to contribute to the unique sound brightness of the dizi. In this paper, the transfer matrix method (TMM), as well as the transfer matrix method with external interactions (TMMI), are used to study the acoustic characteristics of the dizi. The TMM and TMMI models are validated by comparing the simulated input impedance of the dizi with measurements, both with and without a membrane. The TMM is used to generate the distribution map of normalized acoustic pressure and velocity along the main bore as a function of frequency for different fingerings. Different acoustic characteristics are discussed through the analysis of the pressure and flow maps. It is found that the four extra end holes form a second tonehole lattice, which is independent of the one formed by the finger holes.
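
A simplified, lossless TMM sketch of the kind of impedance calculation described above: cylindrical bore segments are chained with open toneholes treated as shunt acoustic masses. All dimensions are assumed for illustration, and the external-interaction terms of the TMMI are not included.

```python
import numpy as np

rho, c = 1.2, 343.0

def bore_segment(a, L, f):
    S = np.pi * a**2
    Zc = rho * c / S
    kL = 2 * np.pi * f * L / c
    return np.array([[np.cos(kL), 1j * Zc * np.sin(kL)],
                     [1j * np.sin(kL) / Zc, np.cos(kL)]])

def open_hole(a_hole, t_eff, f):
    S_h = np.pi * a_hole**2
    Z_shunt = 1j * 2 * np.pi * f * rho * t_eff / S_h   # acoustic mass of chimney
    return np.array([[1, 0], [1 / Z_shunt, 1]])

def input_impedance(f):
    a_bore = 0.007
    T = bore_segment(a_bore, 0.30, f)                  # bore above the first open hole
    for _ in range(4):                                 # e.g. the four always-open end holes
        T = T @ open_hole(0.004, 0.01, f) @ bore_segment(a_bore, 0.03, f)
    Z_load = 0.0                                       # ideal open end
    return (T[0, 0] * Z_load + T[0, 1]) / (T[1, 0] * Z_load + T[1, 1])
```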

11:40
Regime-change-induced deflection of aeroacoustic flow in flute-like instruments with a hysteresis region

ABSTRACT. The aerodynamics of jet interactions with a strike edge coupled to an acoustic resonator have long been of interest to those seeking to understand the acoustics of flutes and recorders. Presented here is experimental evidence (videos 1, 2) for what may be a newly observed phenomenon in these instruments. The peculiar behavior is observed when a flute or recorder powered by an artificial blower is set up to produce one of two tones at a blowing pressure at which both tones are stable. Due to hysteresis effects, the tone played depends on the pressure history of the system; as expected, a momentary increase or decrease in the blowing pressure is sufficient to induce a switch from one tone to the other.

However, the steady-state pressure recorded by an external pressure probe positioned downstream from the strike edge showed a surprising sensitivity to the tone being played, despite the fact that the identical source pressure is used for both tones. The probe pressures associated with the two tones typically differed by a very reproducible 20-30%, with the exact value depending on the probe position, resonator type (flute or recorder), and resonator length. This suggests that the regime change associated with switching the tone is accompanied by a macroscopic deflection in the time-averaged position of the jet, a possible manifestation of the Coanda effect. It is hoped that these results will stimulate further measurements and modeling by others.

1. https://youtu.be/WH_xv55tdgA (flute) 2. https://youtu.be/28NdmTJvE8o, https://youtu.be/5AlVvSguVuM (recorder)

12:00-13:20 Lunch Break
13:20-15:40 Session 3: plucked string instruments
13:20
Archive for the acoustical documentation of classical Spanish guitars, flamenco guitars and romantic guitars from private and public collections

ABSTRACT. In this archive, some 60 valuable guitars are documented in terms of their acoustical features, such as the bridge mobility, together with some data on geometry. The samples cover guitars from roughly 1900 until today, including reference instruments from Bellido, Campo, Conde, Contreras, DeVoe, Fleta, Martinez, Munoa, Ortega, Pages, Ramirez, Simplicio, and Torres. The instrumentation setup is documented, as well as the means to reduce impairment from echoes given the varying room acoustic conditions at the visited collections. The data covers (i) a matrix for the bridge response with different tapping and response points, for both vibration and radiation, (ii) a matrix of vibrational responses across the top plate, and (iii) reference sounds of plucked open strings. The data, gathered in a consistent way, encourages comparative studies. For instance, one might be interested in understanding general trends across epochs of guitar making. This contribution will show examples of such trends. The data matrix is, at the same time, specific enough to investigate particular questions on individual guitars. This contribution gives an example concerning symmetric or asymmetric bracing in a guitar. The archive may also serve to reference physical models of guitars against a larger data corpus with enough parameters under variation. The data corpus is somewhat limited for investigating matters of radiation, due to the sparse microphone setup and the varying room acoustic conditions. The archive is hosted at Zenodo and is open and free for everyone.

13:40
Towards Acoustic Copies of Guitars – Predicting the Effect of Geometry Modifications on Guitar Soundboards

ABSTRACT. Musical instruments have fascinated people at all times. Veritable myths have grown around instruments such as the violins of Stradivari or the guitars of Torres. The sound of these instruments is still unrivaled for many musicians. Professional musicians can distinguish even seemingly identical instruments by their sound alone. These audible differences can be attributed largely to the natural variability of the woods used in instruments of identical manufacture. This project aims to develop a methodology that would allow the replication of reference instruments based on their vibrational characteristics rather than solely on their geometry. Hence, luthiers could build instruments that sound like a reference instrument rather than merely looking the same. This is to be achieved by compensating, through geometry modifications, for the unavoidable differences in vibrational properties between the original and the replica caused by the variability of the wood. The challenge of such an approach is to reliably predict the necessary geometry modifications before the instrument is built. The novel contribution is that we aim to use experimentally validated computer models to quantitatively predict the effects of design variations. Quantitative predictions of the impact of design modifications would give instrument makers a powerful new tool to systematically improve instruments. With the eventual goal of copying the acoustic response of the whole instrument, we first focus in this contribution on matching the vibrational response of individual elements such as the soundboard. Two spruce soundboards with initially equal geometry and Torres bracing were made. One is defined as the reference soundboard, while the other represents a copy to which geometric modifications are applied. The eigenfrequencies and eigenmodes, identified with experimental modal analysis, are used to compare the soundboards. The relative difference of the first four eigenfrequencies is initially about 5 % due to the variability of the tonewood. A detailed, experimentally validated numerical finite element model of the soundboards is developed to predict suitable geometry modifications that can reduce the difference in eigenfrequencies between the soundboards. The individual heights of the different braces are chosen as the geometric parameters to influence the modal behavior of the soundboards. Using the numerical model as a virtual prototype, the authors could find suitable modifications that significantly reduce the difference between the two soundboards' first eigenfrequencies. Finally, the mean eigenfrequency difference of the first modes decreased to 2.5 % when only the heights of four braces were reduced by 1 mm.

14:00
A Self-Contained and Automated Tonewood Measurement Device

ABSTRACT. Choosing which piece of tonewood to use for a string instrument, and how to work it, is a difficult task, often relying on domain knowledge learned over many years. Some methods exist, such as measuring the speed of sound and other vibration-based methods, but they provide only small amounts of data or are time- and resource-intensive. We present a self-contained tonewood measurement device that is fully automated apart from placing the tonewood on the device and running the software. The device was designed to be useful for researchers as well as for luthiers in an active instrument shop setting, and to be relatively affordable, using only open-source software.

The measurement device is constructed using aluminum T-slot framing and uses a Teensy microcontroller and audio shield for all sensor inputs and serial control. The system is designed for dimensioned rectangular tonewood meant for guitars, and the wood is mounted on foam supports at the nodes of the first tangential and longitudinal modes. The mass of the tonewood is measured using a load cell under the mounting plate. Thickness is measured with a Hall-effect sensor and magnet, while length and width are measured using image processing of a photograph of the board from above. A 3D-printed impact hammer with a piezoelectric sensor in the head is controlled by a stepper motor and used to provide impulse excitations to the tonewood. A second piezoelectric sensor is attached to the board to record the resulting vibrations.

The impulse excitation technique is used to approximate the longitudinal and tangential Young’s moduli from the dimensional measurements, mass, and resonant frequencies extracted from the vibration measurements. The radiation coefficients in each direction are then calculated. The signal energy and T60 decay time are then calculated in three frequency bands to give insight into the general vibrational properties of the tonewood.
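
A sketch of the basic relations likely underlying the Young's modulus and radiation-coefficient estimates: for the fundamental longitudinal mode of a free-free bar, c = 2·L·f1 and E = rho·c². The actual device may use refined (e.g. bending-mode) formulas; the numbers below are placeholders.

```python
def youngs_modulus(length_m, freq_hz, density_kg_m3):
    c = 2.0 * length_m * freq_hz          # longitudinal wave speed (m/s)
    return density_kg_m3 * c**2           # Young's modulus (Pa)

def radiation_coefficient(E_pa, density_kg_m3):
    return (E_pa / density_kg_m3**3) ** 0.5   # radiation coefficient (= c / rho)

E_L = youngs_modulus(0.55, 4200.0, 420.0)     # placeholder board values
R_L = radiation_coefficient(E_L, 420.0)
```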

It is our hope that the device can be used not only to characterize specific tonewood boards and help guide their use for instruments, but also to build an extensive database of tonewood measurements.

14:20
On the use of thermally-aged wood for the top plate of plucked string instruments

ABSTRACT. Thermal ageing of tonewood is a process meant to resemble the natural ageing of wood, with the benefit of reducing the waiting time and possibly making the wood more resistant to changes in environmental conditions. Musical instruments such as the concert kantele, a handcrafted zither-like instrument, are sensitive to humidity and temperature changes. In addition, the body of the concert kantele has to withstand the tension of 40 steel strings and the metal lever system. Thus, thermally-aged wood has particular potential in concert kantele building. In this paper, three concert kanteles were built with thermally-treated top plates, and for each a kantele with a traditionally-aged top plate was built simultaneously for comparison. The resulting three pairs of kanteles were measured in an anechoic room, and different acoustical and mechanical properties were extracted. This paper discusses the differences in those properties across and between the pairs of instruments. The results show whether thermally-aged top plates introduce significant differences in the acoustical qualities of the fully-built instruments.

14:40
Material & Structural Properties of Guitar Soundboards: Optimization of a Bracing Pattern

ABSTRACT. The materials used for acoustic guitar-making have remained mostly unchanged throughout the history of the instrument: softwoods such as spruce are traditionally used for making the soundboard, mainly because of their high specific rigidity in the longitudinal direction and low density. Straight fibers and closely spaced, regular annual rings are preferred by instrument makers and constitute what is called tonewood. However, makers are faced with a growing shortage of woods that meet their criteria, especially for large parts, requiring them to adapt and transform their practices. The use in instrument making of alternative materials such as fiber-based composites has increased in recent years, as has that of locally sourced woods. Using these alternatives for soundboards, sides and backs raises the question of their impact on the vibration and radiated sound of the resulting instrument, and of the parameters that govern the choices of makers. The vibration of a soundboard is governed by different sets of properties: those related to its material, and those related to its geometry and assembly. Instrument makers can usually exert partial control over the material through the selection of a piece of wood, and over the geometry by giving it a shape and applying a bracing pattern to it. The relationship between material and structural properties is explored in this study through finite element modelling of a guitar soundboard. Two soundboards, identical except for the chosen material, are studied: Bayesian optimisation methods are used to define the geometrical parameters of the braces of one of the soundboards, in order to minimize the difference in eigenfrequencies between the two soundboards. Time-domain simulations are then set up to evaluate the difference between the soundboards in a musical context, relying on modal parameters derived from the finite element model and using a proportional damping hypothesis.
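
A sketch of the Bayesian-optimisation loop described above, assuming scikit-optimize is available. `eigenfrequencies_fe(brace_heights)` is a hypothetical wrapper around the finite element model that would return the first few eigenfrequencies of the second soundboard for given brace heights; `target_freqs` holds placeholder values standing in for those of the reference soundboard.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

target_freqs = np.array([95.0, 182.0, 221.0, 310.0])   # placeholder values (Hz)

def objective(brace_heights):
    freqs = eigenfrequencies_fe(brace_heights)          # hypothetical FE call
    return float(np.mean(np.abs(freqs - target_freqs) / target_freqs))

# One height per optimised brace, bounded to maker-feasible values (mm).
space = [Real(3.0, 8.0, name=f"brace_{i}") for i in range(4)]
# result = gp_minimize(objective, space, n_calls=60, random_state=0)
```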

15:00
Wooden Metamaterials for Instrument Making

ABSTRACT. We have recently shown that patterns of holes in quasi-2D wooden plates can be used to tune their mechanical parameters over a large range of values. The applicability of such metamaterials to instrument making is obvious: material variability and habitat reduction due to global warming are two of the biggest problems of contemporary instrument making. Having a way to tune the material parameters of the wood could revolutionize the way in which we make instruments. It could be used to reproduce old instruments whose wood has material characteristics that are impossible to find in today's trees, as well as to standardize guitar production on an industrial scale. In a way, the material becomes another design parameter for the instrument maker to play with. In particular, hole patterns can be used to obtain very low density without a significant loss in longitudinal stiffness, thus yielding wood-based materials with elastic properties seldom found in nature.

In this talk I will review our most recent results, from the general case of how patterns affect the metamaterial's elastic constants for a rectangular plate, to their application in actual instruments, both simulated and experimental. The instruments chosen are the Cajon Peruano and a classical Torres guitar, both of which have a soundboard made from the metamaterial. The similarities and differences between the “traditional” and the “meta-instrument” are studied in terms of their vibrational response, frequency response functions and pressure radiation. The results show that metamaterials can indeed be used in instrument making without any major negative side effects, and can even improve several quality measures of the complete instrument, such as radiation efficiency and anisotropy ratio.

15:20
Investigation of the strings' influence on the acoustic signature of Central African harps

ABSTRACT. Central African harps are string instruments, often anthropomorphic, whose soundbox is built from a hollowed-out tree trunk. While animal gut and plant fibre were formerly used to make strings, nowadays harp makers use fishing line. Usually 8 in number, the strings are bound to wooden tuning pegs on the neck and attached to a tailpiece placed under the animal skin used as a soundboard. Each instrument-making element can vary according to ethnic groups and material availability. Our work aims at understanding the vibro-acoustic behaviour of these instruments in order to determine relevant descriptors linked to their building process. To this end, a numerical model is developed, based on the Udwadia-Kalaba modal formulation, allowing the instrument body to be coupled to the strings. The present study focuses on the string model and the influence of its features on the sound of Central African harps. Measurements of string diameter are carried out on a corpus of instruments by means of a laser profilometer. These highlight the non-uniformity of the string diameter over its entire length, which is therefore added to the physical model. The string displacement is described in three polarizations, including geometric nonlinear effects induced by high-amplitude excitation. The damping properties are experimentally identified and extrapolated by fitting an analytical model for two constitutive materials: nylon and orchid root. Numerical simulations allow assessing the influence of the strings on the acoustic signature. This work, part of the project Ngombi, was funded by the Agence Nationale de la Recherche (French National Research Agency), grant number ANR-19-CE27-0013-01.

15:40-16:00 Coffee Break
16:00-17:00 Session 4: Public Tutorial Lecture
16:00
Virtual-Acoustic Instruments

ABSTRACT. One of the key tasks in accomplishing further step changes in aural immersion involves replacing sample-based strategies with interactive, procedural source models based on numerical simulation of vibrating structures. Musical instruments are ideal case studies for such research due to the tight coupling between the musician's actions and the resulting sound. Accordingly, virtual-acoustic instruments can be conceptualised as instruments in which the sound production mechanism has been replaced with a real-time computer-based simulation, while preserving as much as feasible the instrument's natural affordances and `acousticality'. Such seamless virtualisation of the acoustic functioning offers extended ways to reconfigure, design, and develop instruments as well as new ways for studying the associated musician-instrument interaction. This talk will outline the concept of virtual-acoustic instruments, explain the diverse challenges that need to be overcome to realise them, and highlight some of the proposed solutions, early examples, and current challenges.

17:00-19:00 Session 5: Poster / Demo
Measurement Procedures for the Workshop - Speed of sound

ABSTRACT. Of great importance to the violin maker and other instrument makers are the material properties of wood. Young's modulus in the transverse and longitudinal directions is commonly measured in the workshop by sending a signal through the material and measuring the time it takes to propagate through the object being measured. Dividing the object's length - or width - by this time, one calculates the speed of sound, which together with density gives Young's modulus. This method relies on an accurate device, known as a Lucchi Meter, which is usually quite expensive and comes with its own set of caveats. An alternative idea is to measure the frequency of a longitudinal standing wave inside the object and calculate the speed of sound from frequency and wavelength. This procedure is tested on 6 blanks of spruce tops, maple backs and necks for violins, with contact as well as non-contact pick-ups. Results are compared with Lucchi Meter measurements, as well as with Young's modulus derived from modal analysis of violin blanks. For the speed of sound we find generally good agreement between both data sets, with the results derived by the Lucchi Meter method generally being a few percent higher. Error measures derived from repeated measurements are similar for both methods, at around 1% deviation. Advantages and disadvantages of both methods are discussed, and we conclude that the longitudinal standing-wave measurement can effectively replace the much more expensive Lucchi Meter in the workshop in most cases, as long as appropriate test strips can be cut, which should be the case for all bowed string instruments. In certain cases, such as acoustic modelling, the longitudinal standing-wave method might even be preferable to the Lucchi Meter method, as the derived absolute values are closer to real-life conditions. However, for modelling of this sort many more parameters beyond Young's modulus are usually needed. Ease of application of this method can be further improved through the development of relatively simple specialised software.
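
A minimal sketch of the two speed-of-sound estimates being compared, assuming the blank behaves as a free-free bar vibrating in its fundamental longitudinal mode (wavelength = 2·L); the Young's modulus relation E = rho·c² follows in both cases.

```python
def c_from_time_of_flight(length_m, transit_time_s):
    return length_m / transit_time_s            # Lucchi-style estimate

def c_from_standing_wave(length_m, f1_hz):
    return 2.0 * length_m * f1_hz               # fundamental: wavelength = 2L

def youngs_modulus(c_m_s, density_kg_m3):
    return density_kg_m3 * c_m_s**2             # E = rho * c^2
```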

Envelope functions as approximations for long-term average sound spectra of different pipe organ ranks and the influence of pitch on tonal timbre

ABSTRACT. Long-term average sound spectra (LTAS) of organ pipes from different flue and reed ranks have been studied with the intention of obtaining pitch-dependent LTAS models for the steady part of the pipe sound. The measured LTAS of each rank of pipes cover up to eight octaves within the tonal range C0…C9. To keep the modelled envelope functions simple and to restrict the number of their fit parameters, all spectra were parametrized in a piecewise linear way. The resulting two slope lines serve as empirical approximations for each envelope function enfolding the sound pressure levels of the harmonic partials. The sound spectra modelled this way allow for extrapolating the pitch range of most ranks and thus for exploring these spectra beyond the physically existing compass of a rank. The modelled envelope functions of the sound spectra depend not only on the shape of the pipe body but also vary with pitch. This is expressed by two parameters calculated from the spectra: the first is a weighted average slope of the envelope function; the second is the spectral centroid of the sound spectrum. Both quantities are independent of the sound pressure level and allow for quantifying two dimensions of the tonal timbre apart from loudness. Each rank of pipes appears as a characteristic curve (from bass to treble) in a two-dimensional timbre chart, in which the different families of organ tone clearly separate from each other. These charts also indicate that organ builders aim for smooth changes of these timbre parameters with pitch and avoid sudden jumps. On the other hand, such timbre charts could assist in the work of pipe voicing by means of an electronic voicing device, which extracts the LTAS from the measured pipe sound and calculates the timbre chart from it. In practice, this could help during pipe voicing, comparable to using an electronic tuner as an aid while tuning musical instruments.
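
A sketch of the two descriptors named above, computed from the levels of the harmonic partials of one pipe: a two-segment (piecewise linear) envelope fit and a spectral centroid. `freqs` and `levels_db` are assumed arrays of partial frequencies and their sound pressure levels; the choice of break point and the exact slope weighting used by the authors are not reproduced here.

```python
import numpy as np

def two_slope_fit(freqs, levels_db, break_index):
    """Fit one slope line below and one above a chosen break partial."""
    lo = np.polyfit(np.log2(freqs[:break_index + 1]), levels_db[:break_index + 1], 1)
    hi = np.polyfit(np.log2(freqs[break_index:]), levels_db[break_index:], 1)
    return lo, hi          # (slope in dB/octave, intercept) for each segment

def spectral_centroid(freqs, levels_db):
    amps = 10.0 ** (levels_db / 20.0)            # SPL to linear amplitude
    return np.sum(freqs * amps) / np.sum(amps)   # level-independent centroid
```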

Design principles of pipe organ Mixtures – viewed from a psycho-acoustic position

ABSTRACT. The term “Mixture” refers to a compound stop of high-pitched ranks adding brilliance and loudness to the pipe organ sound. As Mixture stops and their sound have been refined over centuries, one can expect to find a set of correlating psycho-acoustic parameters in their long-term average sound spectra (LTAS) representing a well-balanced sound. Although the musical taste regarding how a pipe organ Mixture should sound has varied over time, this study reveals how three main design principles of Mixtures relate to psycho-acoustical models. Firstly, the fact that Mixture stops should appear as a single sound entity corresponds to minimal multiplicity in Parncutt's theory of harmony. By minimizing the parameter “multiplicity” one simultaneously generates maximal pitch salience at the unison pitch of the Mixture, which is usually a virtual pitch. This supports the notion that Mixtures reinforce the fundamental pitch. Secondly, the pitches of the individual ranks contributing to a Mixture stop fall into separate critical bands in most cases. Consequently, Mixtures increase the loudness while avoiding acoustical roughness between neighbouring pitches. This explains the selection of certain pitches from the harmonic series, but also why higher adjacent partials are avoided. Thirdly, the common practice of introducing breakpoints into the ranks contributing to a Mixture adjusts the course of their acoustic brightness over the key compass. Brightness decreases from bass to treble in a similar manner as in bright brass instruments or as in the Trumpet stop on a pipe organ. The trumpet sound might thus have influenced the development of pipe organ Mixtures. The correlation of LTAS with psycho-acoustical parameters reveals some of the tonal ideas for Mixture stops that organ builders pursued over centuries.

Material Parameter Identification of Complete Guitars – Accomplishments and Limitations

ABSTRACT. Typically, the different parts of a guitar are made of different types of wood. The variability of the material parameters of the woods used causes the vibrational behavior of even seemingly identical instruments to vary, and thus each instrument develops an individual sound. The unique characteristics of the wood are also often named when it comes to explaining the uniqueness of famous instruments like the violins of Stradivari or the guitars of Torres. Finding out more about the characteristics of these woods might reveal more about what makes these instruments so special. Furthermore, instrument makers might be interested in a method that allows identifying the material parameters of instruments in order to analyze the variability of their instruments in an end-of-line test. Once the instrument is finished, identifying the material parameters of the wooden parts is, alas, currently a nearly impossible task. The only feasible way towards solving this task is to apply numerical model updating techniques to detailed numerical models of an instrument. However, multiple problems arise in this process. First and foremost, even with the most modern equipment, the enormous computational effort makes it extremely difficult to identify the material parameters of complete instruments. The computational effort is caused by the demanding model itself and the many unknown parameters. Another problem is that even if the issue of computational effort is solved, it would be improbable to find the "real" material parameters with this approach. Firstly, one might find multiple possible combinations of parameters yielding similar behavior. Secondly, even a seemingly perfect solution most likely does not represent the actual material parameters, due to approximations in the numerical model. Efficient surrogate models and a technique for uncertainty quantification based on possibility theory are proposed to overcome the problems mentioned above. Hence, a method is presented that allows the non-destructive identification of a complete guitar's most influential material parameters. The material parameters are identified by updating a detailed finite element model with experimentally determined modal parameters of existing guitars. Applied to the problem, this leads to the more meaningful identification of possible parameter regions instead of only one crisp parameter value that might lead to wrong conclusions. While it is shown that the technique can distinguish geometrically identical guitars by their material parameters, limitations remain.

The Sound of Bells in Data Cells - Perceived quality and pleasantness of church bell chimes

ABSTRACT. Background: Since the 1950s, the first modern sets of guidelines for the evaluation of church bell timbres have been established [1][2][3][4]. In those rulebooks, particular attention is paid – usually only via verbal descriptions – to tonal brightness, the partial structure, pitch salience, and the attack and release transients of the bell sound. It is generally accepted that the timbre of a well-sounding bell is characterized by a soft, full, bright and clear tone with a strong fundamental, and a low-noise sound spectrum containing a “luminously” clear chime with as little striking noise as possible [5].

Research Questions: With the help of audio signal analysis methods, descriptions such as "soft, full, bright, and clear tone" (see above) can be represented more objectively in numbers, which leads to the following questions:
• Can numerical audio features be used to create a model of the perceived pleasantness and quality of the sound of church bell chimes?
• Which sound features contribute to a perceived increase in timbre quality in a before-and-after comparison of church bells that were subjected to sound optimization procedures?

Method: In a pilot experiment, 11 bell experts and 26 laypersons evaluated the chimes in 40 loudness-matched bell recordings in terms of tonal quality, pleasantness, salience of the fundamental, clarity, and softness of the attack transient. Via signal analysis, the chime sounds were analyzed with regard to 127 extracted audio features. Correlation and regression analyses were used to determine relationships between the timbre ratings and the calculated audio features.

Results: Preliminary results suggest that the perceived quality is particularly related to the salience of the minor third in the sound spectrum, as well as to the velocity (the slower the better) and the softness of the attack (the softer the better). The same applies to the perceived pleasantness, which also correlates strongly and negatively with tonal sharpness (see also [6]). More in-depth analyses and modeling will be presented at the conference.

Literature: [1] Limburger Richtlinien für die klangliche Beurteilung neuer Glocken (1951) in K. Kramer (1986), Glocken in Geschichte und Gegenwart. Karlsruhe: Badenia. [2] Thienhaus, E. (1952). Definitionen zur Glockenprüfung. Acustica 2, p. 251-253. [3] Ellerhorst, W., Klaus, G. (1957). Handbuch der Glockenkunde. Weingarten: Martinus. [4] Weissenbäck, A., Pfundner, J. (1961). Tönendes Erz. Graz, Köln: Böhlhaus. [5] Wernisch, J. (2006). Glockenkunde von Österreich. Lienz: Journal (p. 8-11). [6] Aures, W. (1981). Wohlklangsbeurteilung von Kirchenglocken. Fortschritte der Akustik, DAGA81 (p. 733-736), Berlin: VDI.
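
A sketch of the correlation step described in the Method paragraph above: rank correlations between mean listener ratings and extracted audio features. `ratings` and `features` are assumed to be pandas DataFrames indexed by the 40 bell recordings; the regression modelling is not shown.

```python
import pandas as pd
from scipy.stats import spearmanr

def rating_feature_correlations(ratings: pd.DataFrame, features: pd.DataFrame):
    rows = []
    for rating_name in ratings.columns:
        for feature_name in features.columns:
            rho, p = spearmanr(ratings[rating_name], features[feature_name])
            rows.append({"rating": rating_name, "feature": feature_name,
                         "spearman_rho": rho, "p_value": p})
    # strongest (positive or negative) associations first
    return pd.DataFrame(rows).sort_values("spearman_rho", key=abs, ascending=False)
```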

Measuring Audio-Visual Latencies in Virtual Reality Systems

ABSTRACT. Virtual reality (VR) systems, as an emerging simulation technique, are becoming more actively used across multiple research fields with diverse purposes. In such systems, various delays may occur while the signal passes through hardware and software components, causing asynchronies or even cybersickness. To better understand and control the role of delays in VR research experiments, we tested an accessible method for measuring audio and visual end-to-end latency between two popular game engines (Unity and Unreal) and VR head-mounted displays (Oculus Rift & Oculus Quest 2). The measuring setup consisted of a microcontroller, a dedicated serial port, a microphone, a light sensor, and an oscilloscope. The results for our particular set-up showed that Unreal Engine with Oculus Rift had ≈16 ms less visual delay and ≈33 ms less audio delay in comparison to Oculus Quest 2. Unity Engine with Oculus Rift had ≈22 ms less visual delay and ≈39 ms less audio delay in comparison to Oculus Quest 2. Although actual values may differ slightly between system set-ups, the observed values are above the discrimination threshold and are not negligible. Overall, Unreal Engine showed lower visual latency than Unity Engine; however, no differences were found in terms of audio latency. In addition, Oculus Rift had lower audio and visual latencies than Oculus Quest 2; therefore, the use of Oculus Rift is more advisable in VR research where audio-visual latencies play an important role, even though the latter (Oculus Quest 2) is the more powerful, newer VR head-mounted display (HMD). Our approach provides a convenient way to measure audio-visual end-to-end latency in VR without the need for a strong engineering background.
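
A minimal sketch of how an end-to-end latency can be read from such measurements: the time between the trigger signal issued by the microcontroller and the onset detected by the light sensor (visual) or the microphone (audio), assuming all traces are recorded on a common time base, e.g. by the oscilloscope. Threshold values are placeholders.

```python
import numpy as np

def onset_index(signal, threshold_ratio=0.5):
    """First sample exceeding a fraction of the signal's peak value."""
    threshold = threshold_ratio * np.max(np.abs(signal))
    return int(np.argmax(np.abs(signal) >= threshold))

def end_to_end_latency_ms(trigger, sensor, sample_rate_hz):
    delay_samples = onset_index(sensor) - onset_index(trigger)
    return 1000.0 * delay_samples / sample_rate_hz
```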

A “smart mouthpiece” for the autonomous analysis and improvement of brass musicians' performances

ABSTRACT. While playing, the only feedback brass musicians get is the sound generated. They don't know, or can only guess, why, for example, a piece of music turned out well on the first attempt while the second attempt failed. For the periodic oscillation of the lips of brass musicians, a minimal threshold blowing pressure is required [1, 2, 3]. Furthermore, there must be an airtight connection between the lips and the mouthpiece, which is achieved through the force applied to the mouthpiece [4, 5]. To help musicians improve their performances by themselves, a wireless “smart mouthpiece” with embedded sensors was developed. The “smart mouthpiece” includes two pressure sensors that measure the upstream and downstream pressure, and three load cells that measure the force applied by the lower and upper lips on the mouthpiece. The recorded sensor data are transmitted in real time to a computer via Bluetooth or Wi-Fi, where they can be processed and plotted in real time. The “smart mouthpiece” also offers interesting perspectives for online teaching.

[1] R. Mattéoli et al., “Minimal blowing pressure allowing periodic oscillations in a model of bass brass instruments,” Acta Acust., vol. 5, p. 57, 2021, doi: 10.1051/aacus/2021049. [2] H. Boutin, N. Fletcher, J. Smith, and J. Wolfe, “Relationships between pressure, flow, lip motion, and upstream and downstream impedances for the trombone,” The Journal of the Acoustical Society of America, vol. 137, no. 3, pp. 1195–1209, 2015, doi: 10.1121/1.4908236. [3] M. Campbell, J. Gilbert, and A. Myers, The Science of Brass Instruments. Cham: Springer International Publishing, 2021. [4] J. C. Barbenel, P. Kenny, and J. B. Davies, “Mouthpiece forces produced while playing the trumpet,” Journal of Biomechanics, vol. 21, no. 5, pp. 417–424, 1988, doi: 10.1016/0021-9290(88)90147-9. [5] T. Grosshauser, G. Tröster, M. Bertsch, and A. Thul, “Sensor and Software Technologies for Lip Pressure Measurements in Trumpet and Cornet Playing - from Lab to Classroom,” 2015.

The listening eye: pupil dilation reflects Bayesian belief updating in dynamic auditory scenes

ABSTRACT. Pupillometry effectively reflects changes in the arousal system, which is postulated to play a notable role in optimizing perceptual inference. Bayesian inference is optimal in a probabilistic sense and has been used successfully to explain how listeners integrate prior information with current auditory evidence. Relationships between latent variables of such a Bayesian observer model and pupil dilation measures have previously been shown in a rather obscured way, using many linear regressors and sequential fitting to first behavioral and then physiological data. Here, we propose a more holistic approach based on a refined Bayesian observer model that simultaneously predicts behavioral responses and pupil dilations by explicitly defining an interpretable linking function between model variables and physiological outcomes. We analyzed data from a dynamic auditory localization task. Our approach not only resulted in improved behavioral fits but also yielded stronger links to pupil measures. Most importantly, evoked pupil dilations were clearly related to the learning rate of Bayesian belief updating, consistent with the postulated role of the arousal system in optimizing perception. As Bayesian models have also been successful in predicting music perception, the probabilistic model-based analysis approach proposed here seems promising to study the physiology behind musical surprise in the future.
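
A minimal sketch of the latent "learning rate" quantity referred to above, here in the simplest Gaussian (Kalman-filter-like) belief update: the posterior mean moves toward the new observation by a gain K that depends on the relative uncertainties. The study's actual observer model and pupil linking function are more elaborate; this only illustrates the variable being linked.

```python
def belief_update(prior_mean, prior_var, observation, obs_var):
    K = prior_var / (prior_var + obs_var)        # learning rate (Kalman gain)
    post_mean = prior_mean + K * (observation - prior_mean)
    post_var = (1.0 - K) * prior_var
    return post_mean, post_var, K

# Hypothesised linking function: evoked pupil dilation grows with K, e.g.
# pupil_prediction = baseline + scale * K
```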

Non-rigid registration of photogrammetrically reconstructed pinna point clouds for the calculation of head-related transfer functions

ABSTRACT. Listener-specific head-related transfer functions (HRTFs) are an essential part of personalised binaural audio. They can be numerically calculated with high spatial accuracy, given the individual geometry of a listener’s head and pinnae. The geometry can be obtained by, e.g., photogrammetry, yielding 3D point clouds. Currently, an extensive manual processing of such point clouds is required to obtain a mesh suitable for calculating perceptually valid HRTFs. In this work, we aim to reduce the amount of said manual work with the help of non-rigid registration (NRR) algorithms, i.e., by registering a perfect and high-resolution pinna point cloud (template) to the noisy point cloud (target) obtained from the geometry acquisition. We investigated this approach by means of two NRR algorithms tested in two ways. First, in order to exclude potential artefacts from the geometry acquisition, the algorithms were applied to targets which were identical to the template but systematically distorted by Euclidean transformations (translation, rotation, scaling), subsampling, and addition of noise and outliers. Second, in order to test the algorithms' robustness on actual data from listeners, NRR was applied to photogrammetrically reconstructed target point clouds. The registrations were evaluated in the geometric domain by means of the balanced average Hausdorff distance and in the psychoacoustic domain by means of calculating HRTFs and applying an auditory model simulating sound-localisation performance. Our results indicate that NRR algorithms are able to yield point clouds for the calculation of perceptually valid HRTFs, making them attractive for user-friendly HRTF acquisition methods such as photogrammetry.
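
A sketch of a point-cloud-to-point-cloud distance of the kind used for the geometric evaluation: directed average nearest-neighbour distances in both directions, combined symmetrically. The exact "balanced" normalisation follows the authors' definition and is not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def directed_average_distance(source, target):
    """Mean distance from each source point to its nearest target point."""
    distances, _ = cKDTree(target).query(source)
    return float(np.mean(distances))

def symmetric_average_distance(template, registered_target):
    d_ab = directed_average_distance(template, registered_target)
    d_ba = directed_average_distance(registered_target, template)
    return 0.5 * (d_ab + d_ba)
```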

Describing minimum bow force using Impulse Pattern Formulation (IPF) – an empirical validation

ABSTRACT. With bowed string instruments, the minimum bow force necessary to produce stable Helmholtz motion has often been discussed over the last decades. If the bowing force is too small or the bowing velocity too large, no stable tone is produced, and bifurcations or noise occur. The Impulse Pattern Formulation (IPF) is a top-down method proposed previously (Bader, R.: Nonlinearities and Synchronization in Musical Acoustics and Music Psychology, 2013), which can explain such transitions between regular periodicity at a nominal pitch, bifurcation scenarios, and noise. The proposed recursive equation is based on the idea that impulses are produced at a generator entity within a musical instrument. Impulses then travel through the instrument, are reflected at various positions, are exponentially damped, and finally trigger, or at least interact with, succeeding impulses of the generator upon returning there. Real bowing is measured on an experimental pendulum for self-organized bowing. Bowing pressure and velocity are recorded during the transitions from bifurcation to stable tone production and back. The IPF is then used under a simulated annealing paradigm to reproduce this behavior, using the measured bowing pressure and velocity as input parameters. The model predicts the minimum bow force with much higher precision than previous, empirically derived models. However, while previous works derived a single equation describing a quasi-stationary transition into Helmholtz motion, the IPF provides a dynamical model that describes a bowed string's complex transient behavior.

A virtual-acoustic slide-string instrument

ABSTRACT. A virtual-acoustic instrument (VAI) can be defined as “a computer-based system which generally comprises a physical modelling synthesis algorithm and a control interface, each subject to a number of design criteria, many of which overlap with those that apply more widely to digital musical instruments.” (S. Mehes, M. van Walstijn, and P. Stapleton, NIME 2017). Such a synthesis model and control interface together enable the performer to utilise the embodied understanding of the processes that he/she has gained over years of practice, thereby reducing the need to master new modes of interaction. In addition, this design allows for tuning – in this case the exploration of the space of spatially global model parameters, e.g. fundamental frequency and damping constants.

This demo introduces one such VAI, dubbed the virtual-acoustic slide-string instrument (VASSI). It involves the simulation of acoustic instruments like the slide guitar and chitraveena, whose articulation is characterised by a time-varying position of contact between a cylindrical slide object and a string, primarily to produce a continuously varying pitch. VASSI is based on a finite-difference model (A. Bhanuprakash, M. van Walstijn and P. Stapleton, DAFx 2020) that captures nonlinear dynamics in slide-string articulation - particularly the rattling caused by slide-string collisions - as well as linear aspects, e.g. the resonance and damping of the slide-hand system. Time-varying control inputs are taken into account in the modelling and energy analysis, enabling the simulation of typical articulatory gestures like glissandi and vibrato. Regional damping and pluck forces from finger-string contact are also modelled, thereby allowing the performer to excite and damp the instrument in the same manner as its mechano-acoustic counterpart.
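
A minimal finite-difference sketch of the kind of string simulation underlying such a model: an explicit scheme for a damped ideal string with fixed ends and a plucked initial condition. The actual VASSI model additionally handles stiffness, the nonlinear slide-string collision and time-varying control inputs; all parameter values below are assumed.

```python
import numpy as np

fs = 44100                      # audio sample rate (Hz)
f0, sigma0 = 110.0, 1.0         # fundamental (Hz) and damping (1/s), assumed
N = 60                          # number of spatial intervals
k = 1.0 / fs                    # time step
gamma = 2.0 * f0                # scaled wave speed for a unit-length string
h = 1.0 / N                     # grid spacing
lam = gamma * k / h             # must satisfy lam <= 1 (stability condition)

u = np.zeros(N + 1); u_prev = np.zeros(N + 1)
u[1:N] = 1e-3 * np.sin(np.pi * np.arange(1, N) / N)   # smooth "pluck" shape
u_prev[:] = u                                         # zero initial velocity

out = np.zeros(fs)              # one second of output, read near one end
for n in range(fs):
    u_next = np.zeros(N + 1)
    u_next[1:N] = (2 * u[1:N] - (1 - sigma0 * k) * u_prev[1:N]
                   + lam**2 * (u[2:] - 2 * u[1:N] + u[:N-1])) / (1 + sigma0 * k)
    u_prev, u = u, u_next
    out[n] = u[3]
```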

VASSI’s control interface is under development and will consist of: (a) a taut string damped at one end to suppress its resonances, (b) a piezo-electric sensor placed at one end to capture the excitation signal, (c) force sensors placed at both ends to estimate the excitation position and the finger forces, and (d) a Leap Motion Controller (LMC) to track the slide with stereo infrared images for reduced sensitivity to ambient light. A major sensing challenge here is to track the slide position even when it is partly occluded by the performer’s hand. To achieve this, a novel contour-line-based cylinder pose estimation algorithm has been developed. Ongoing work deals with applying this algorithm to the LMC-captured images of contour lines on the slide to estimate its pose and thereby track its position in real time.

Experimental Sonatas, both Ancient and Modern: Comparing and Conjoining Early Acoustic Instruments to Electronic Audio Technologies

ABSTRACT. At a superficial level, most so-called early music instruments share little in common with the latest computer- and electronic-music technologies. Consider the cornettos, natural trumpets, or trombones that once resounded throughout early-modern Europe. These culturally resonant brasswinds, with their idiomatic timbral properties, seem worlds away from today’s commercial-music studios, replete with frequency filters, FX units, mixing consoles, and a centralized Digital Audio Workstation. Yet upon further analysis, one can discover sonic connections between these two chronologically distinct musical domains: Ancient instruments regularly meet the above-mentioned technologies through modern recording practices.

Some argue that the historically informed performance movement (HIP) owes much to advances in audio equipment and engineering expertise. Many early-music enthusiasts first heard a Gabrieli canzona, for example, not within the intended acoustic of Basilica di San Marco, but instead through a digitally processed and mastered binaural recording. The early-music album itself—an engineered media-document, sonically detached from any definitive time or place—is not performed by early-music instrumentalists; it is played back through headphones or loudspeakers in homes, classrooms, and automobiles. With these technocultural anachronisms in mind, we can recognize that historical acoustic instruments, once they are recorded (i.e. audio-sampled) and produced (i.e. audio-processed), share their virtual sonic-space with technologies that have likewise produced modern—and modernist—electronic music.

This lecture-demonstration explores and expands upon the links between modern audio technology and today’s enactments of early-music instrumentality. A broad, comparative survey of “experimental” repertoires will underscore those spatial, timbral, and psychoacoustic dimensions strikingly shared amongst certain ancient acoustic and modern electronic traditions. Imaginative motets and sonatas by Monteverdi, Schütz, Kuhnau, and Biber are correlated with avant-garde acousmatic works by Edgard Varèse, Bruno Maderna, Otto Luening, Milton Babbitt, and Karlheinz Stockhausen.

After uncovering sonic and psychoacoustic theories common to these diverse musical examples, this lecture-demo presents a proof-of-concept performance—a merger of past and present traditions, wherein an early-music instrument is virtually transformed in real time by state-of-the-art audio technologies. Guided by historical improvisation principles, Dr. Bonus reinterprets and reimagines cornetto-performance, alongside the instrument’s idiomatic timbre, through analog synthesis and digital signal processing. This live electro-acoustic sonata for cornetto, computer, and modular synthesizer actively integrates two distinct musical traditions, as it places one arcane brasswind upon a psycho-acoustically potent, virtual soundstage.

Ultimately, this intentionally anachronistic musical offering might verify that early instruments need not be bound to a specific historical repertoire or sonic framework in order to achieve some degree of creative and cultural relevance.

Identification of violin timbre by neural network using acoustic features

ABSTRACT. Identification of violin timbre by a neural network was performed, with the machine-learning program developed in Python using the Keras library. More than 30 violins were recorded, ranging from old Italian violins made by Stradivari to contemporary instruments, and the spectral envelope and the mel-frequency cepstral coefficients (MFCCs) were used as the training and test data. The identification accuracy in the case of open strings was more than 90%. Furthermore, experiments that predict the similarity of the timbre of an unknown violin to that of the trained violins will be shown in the presentation.
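
The abstract does not specify the network architecture; as a hedged sketch of how such a pipeline might be assembled in Python with Keras, the following combines MFCC features (extracted here with librosa, an assumption) with a small dense classifier. The function names, layer sizes, and training settings are illustrative, not those of the study.

```python
import numpy as np
import librosa
from tensorflow import keras

def mfcc_features(path, n_mfcc=20):
    """Average MFCCs over time to get one feature vector per recording (illustrative choice)."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def build_model(n_mfcc, n_violins):
    """Small dense classifier mapping MFCC vectors to violin labels (hypothetical architecture)."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_mfcc,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(n_violins, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# X: (n_samples, n_mfcc) feature matrix and y: integer violin labels,
# built from the recorded open-string notes (data loading omitted):
# model = build_model(n_mfcc=20, n_violins=30)
# model.fit(X_train, y_train, epochs=100, validation_data=(X_test, y_test))
# The predicted class probabilities for an unknown violin can then be read
# as a similarity profile against the trained violins.
```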

AURA3D - An Ambisonics System at ESML

ABSTRACT. The AURA3D project aims at implementing a third-order 3D Ambisonics auralization system in a reverberant space, the Small Auditorium of ESML, serving as an acoustic demonstrator through auditory performance. An acoustical assessment of this space was carried out for the first time with the purpose of developing the 16 dereverberation filters required by the auralization system. These filters are designed using the multiple-input/output inverse theorem (MINT) method.
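
As background only: MINT finds inverse filters g_m such that the summed convolutions of the measured room impulse responses h_m with the g_m approximate a delayed unit impulse. The following least-squares sketch illustrates the principle; it is not the AURA3D implementation, and the function name and parameters are assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz, lstsq

def mint_filters(irs, Lg, delay):
    """
    Least-squares sketch of MINT inverse-filter design (illustrative only):
    find filters g_m of length Lg such that sum_m h_m * g_m approximates a
    unit impulse delayed by `delay` samples.
    irs: list of measured impulse responses h_m (1-D arrays).
    """
    Lh = max(len(h) for h in irs)
    Ly = Lh + Lg - 1
    # Stack the convolution (Toeplitz) matrices of all channels side by side.
    H = np.hstack([
        toeplitz(np.r_[h, np.zeros(Ly - len(h))], np.zeros(Lg))
        for h in irs
    ])
    d = np.zeros(Ly)
    d[delay] = 1.0                       # target: delayed Dirac impulse
    g, *_ = lstsq(H, d)                  # (minimum-norm) least-squares solution
    return g.reshape(len(irs), Lg)       # one inverse filter per channel
```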

How are vocal harmonics, octave equivalence, and consonance preference connected?

ABSTRACT. Most animals produce sounds that have overtones occurring at integer multiples of the fundamental frequency. This “harmonic series” contains all musical intervals that are perceived as consonant (i.e. pleasant). The most consonant interval is the octave, occurring in the doubling of frequency between the fundamental frequency and the first overtone. Notes separated by an octave are perceived as similar by humans, a phenomenon dubbed “octave equivalence”. As such, it has been hypothesized that the harmonic series forms a biological basis for human octave equivalence and consonance preference. This hypothesis is called the “vocal similarity hypothesis”. Cross-species studies can help us uncover the potential biological roots of music. Such studies allow researchers to control for potential enculturation effects, as well as to make inferences about why and how certain musical traits evolved. Our group has conducted such comparative research with budgerigars and humans. In one study, budgerigars did not show octave equivalence in an operant paradigm where humans did. Humans and birds learned to respond exclusively to the middle four notes (and not the remaining notes) of octave four. Subsequently, novel notes from octave 5 were presented in a generalization test. Humans responded more to the middle four notes of octave 5, meaning that notes an octave apart were treated as similar. The birds, however, responded significantly less to octave-transposed rewarded notes. In another study, humans showed a preference for musical consonance in a place-preference paradigm that allowed them to freely choose whether to listen to consonance or dissonance. Budgerigars showed no preference for consonance. The findings from these studies align with the vocal similarity hypothesis: in budgerigars' vocal output, harmonics are obscured compared to humans', meaning they are often difficult or impossible to perceive. Thus, it is no surprise that harmonic-related perceptions appear to be diminished or absent in budgerigars. We further reviewed previous studies on consonance and octave equivalence in humans and non-human animals. Together with our results, the existing literature suggests that a more complex interplay of multiple factors lies at the root of octave equivalence and consonance preference. We describe how four biological traits – namely harmonic clarity, vocal learning, differing vocal ranges, and simultaneous vocalizing – primarily appear to constrain consonance and octave equivalence. Our hypotheses on how these traits interact to shape human musicality allow for predictions and suggestions for further cross-species studies that can help illuminate the biological roots of human music.

Musical podia: An evaluation method for measurements of their acoustic properties

ABSTRACT. A procedure is presented to measure the acoustic properties of musical podia. With this method, an objective and repeatable measurement is possible, independent of the influence of the instruments and musicians playing on the podium. The podia's common feature, providing a raised position for musicians, is realised with different concepts and constructions. Manufacturers advertise an improvement in sound for instruments with an endpin, such as the double bass or the violoncello, placed and played on these musical podia. For an objective assessment of the different systems presented, a standard of evaluation has to be found and applied. The evaluation method for musical instruments developed by Ziegenhals at IfM Zwota, which is regularly applied at the German Musical Instrument Award as part of the objective acoustic measurement process, was therefore adapted for measurements of musical podia. From frequency response curves measured with pulse excitation using an impulse force hammer, features are identified and merged into suitable characteristic values. With these values, an objective evaluation of the measured podia is conducted. The identified characteristic values and the resulting evaluation of the studied podia will be introduced.
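
For illustration of how such frequency curves are commonly obtained (not necessarily the IfM Zwota procedure), the standard H1 frequency-response estimate averaged over repeated hammer strikes can be sketched as follows; the scipy-based routine and its parameters are assumptions.

```python
import numpy as np
from scipy.signal import csd, welch

def h1_frf(force_hits, response_hits, fs, nperseg=4096):
    """
    Standard H1 frequency-response estimate averaged over repeated hammer
    strikes: H1(f) = S_fx(f) / S_ff(f).  Illustrative only.
    force_hits, response_hits: lists of equally long force / response records.
    """
    Sfx = 0
    Sff = 0
    for f_sig, x_sig in zip(force_hits, response_hits):
        freqs, Pfx = csd(f_sig, x_sig, fs=fs, nperseg=nperseg)   # cross-spectrum force->response
        _, Pff = welch(f_sig, fs=fs, nperseg=nperseg)            # force auto-spectrum
        Sfx = Sfx + Pfx
        Sff = Sff + Pff
    return freqs, Sfx / Sff   # averaged H1 frequency response curve
```

Characteristic values (e.g. resonance levels in selected bands) would then be extracted from the magnitude of this curve.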

Towards a piano tuning tool

ABSTRACT. Tuning a piano is a complex task that requires a high level of expertise. Two main reasons explain this observation. First of all, the gesture of the piano tuner on the key requires a long learning period before tunings can be achieved that “hold” over time. Then, mainly because of the inharmonicity of the strings (which differs between pianos and across the tessitura), complex perceptual compromises have to be made by the piano tuner during the tuning (checking the beats of multiple intervals), leading to the “Railsback stretch”, which causes the frequencies of the notes to deviate strongly from the tempered scale. In this work, we are interested in this second cause, with the aim of providing a tool to assist in tuning a piano. We first present a review of the literature on piano tuning, as well as on the identification of string inharmonicity from recordings of notes. After implementing a method for identifying the inharmonicity parameter, we tested two assisted tuning strategies: equal temperament with pure fifths (after Serge Cordier) and a minimization of the dissonance of intervals based on the model of Plomp and Levelt. The results highlight the various compromises made.
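
The usual starting point for such tools is the stiff-string relation f_n = n f_0 sqrt(1 + B n^2), where B is the inharmonicity parameter. As a generic sketch (not the identification method of this work), (f_0, B) can be fitted to measured partial frequencies by least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def partial_freq(n, f0, B):
    """Stiff-string model: f_n = n * f0 * sqrt(1 + B * n^2)."""
    return n * f0 * np.sqrt(1.0 + B * n ** 2)

def fit_inharmonicity(partial_indices, measured_freqs):
    """
    Least-squares fit of (f0, B) to measured partial frequencies.
    Generic illustration, not the identification method of the paper.
    """
    n = np.asarray(partial_indices, dtype=float)
    f = np.asarray(measured_freqs, dtype=float)
    p0 = (f[0] / n[0], 1e-4)                 # rough initial guess
    (f0, B), _ = curve_fit(partial_freq, n, f, p0=p0)
    return f0, B

# Hypothetical example: partials of a slightly inharmonic A3 string
# n = [1, 2, 3, 4, 5]; f = [220.0, 440.4, 661.5, 883.5, 1106.9]
# f0, B = fit_inharmonicity(n, f)
```

With (f_0, B) known for each string, beat rates of intervals (or a Plomp-Levelt dissonance score) can be computed from the predicted partials and minimized to obtain a stretched tuning.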

Cross-validation of acquisition methods for near-field Head-Related Transfer Functions with a high distance resolution

ABSTRACT. Due to the distance-dependent characteristics of near-field Head-Related Transfer Functions (HRTFs), their acquisition procedure is much more complex than that of far-field HRTFs, which is already rather tedious. Recently, an approach to efficiently measure near-field HRTFs with a high distance resolution, using a continuously moving sound source and a Least Mean Square (LMS) adaptive filtering algorithm, has been proposed. Using this approach, we obtain a dataset containing horizontal-plane HRTF data of the Neumann KU100 dummy head for source distances from 19 cm to 119 cm with a spacing of 1 cm. Among the various choices for evaluation and validation, direct comparison with another HRTF dataset is a straightforward solution. Since the distance resolution of all existing near-field HRTF databases is much lower than that of the present dataset, we apply the Finite Element Method (FEM) to numerically compute the HRTFs at the same acquisition positions as the measurement, using a KU100 dummy-head geometry obtained through laser scanning. We then evaluate the similarity and discrepancy between the two HRTF datasets by assessing their binaural and spectral characteristics. The causes for disparities between the datasets and the limitations of each method are discussed.
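
As a generic illustration of the adaptive-filtering idea (not the authors' continuous-measurement algorithm), a normalized LMS identification of a response from the excitation signal and the corresponding ear signal might look as follows; the function name and step-size value are assumptions.

```python
import numpy as np

def nlms_identify(x, d, n_taps, mu=0.5, eps=1e-8):
    """
    Normalized LMS system identification (generic sketch): adapt a filter w
    so that w * x tracks the recorded ear signal d.
    x: excitation played by the (moving) source, d: ear-microphone signal.
    """
    w = np.zeros(n_taps)
    x_buf = np.zeros(n_taps)
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = x[n]
        y = w @ x_buf                               # current filter output
        e = d[n] - y                                # estimation error
        w += mu * e * x_buf / (x_buf @ x_buf + eps) # normalized update
    return w                                        # estimated response at the end of the sweep
```

In a moving-source measurement the filter is read out continuously rather than only at the end, so that each source position yields its own response estimate.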

Pilot study on the perceptual quality of close-mic recordings in auralization of a string ensemble

ABSTRACT. To achieve a perceptually convincing auralization of joint musical performances, an accurate representation of the individual sound sources is required. For this purpose, it is essential to capture clean signals from each musical instrument in the joint performance, with minimized microphone cross-talk and room acoustic feedback. Recording instruments in anechoic environments is a widely used method, but it typically lacks the natural and intrinsic characteristics of a joint performance due to limited room acoustic and inter-musician feedback. An alternative is to use close-miking techniques to capture individual sound sources in a joint performance. Although these recordings have the potential to recreate a joint performance, the challenge is to improve the quality of the recordings by minimizing the microphone cross-talk and room acoustic contribution in reverberant environments. This study investigates the perceptual quality of an auralization of an ensemble based on clip-on microphone recordings in comparison to a binaural recording of the musical performance.

The performance of a string ensemble with different numbers of violins (from 1 to 9) was recorded at the Detmold concert house using a binaural head and clip-on microphones. The auralization of the ensemble was performed by convolving the clip-on microphone recordings with the individual binaural room impulse responses (BRIRs) of the sound sources obtained from two independent methods: a geometrical room acoustic simulation model of the concert hall, and BRIR measurements from the concert hall using a studio monitor. Real and synthesized sound samples of the ensemble with different numbers of violins were generated, and their natural impression and similarity were perceptually evaluated by a group of expert listeners. The results show that, although the binaural recordings were not always rated as highly natural, the spot-microphone signals auralized using measured BRIRs have a distribution of naturalness impressions similar to that of the binaural recordings, whereas the room acoustic simulation model yields a lower naturalness impression. This demonstrates that clip-on microphone recordings are suitable for auralization-related applications. Furthermore, samples generated using measured BRIRs were consistently rated as more similar to the real recording than samples generated using simulated BRIRs, which indicates the need for further improvement of geometry-based room acoustic simulations for auralization.
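
Purely as an illustration of the auralization step described above, convolving each close-mic signal with its measured or simulated BRIR pair and summing the binaural contributions can be sketched as follows; the function name and array layout are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def auralize(close_mic_signals, brirs):
    """
    Sum the binaural contributions of all spot-microphone signals.
    close_mic_signals: list of mono arrays, one per violin.
    brirs: list of (2, n_taps) arrays, the left/right BRIR for each source
           position (measured or simulated).  Illustrative sketch only.
    """
    length = max(len(s) + brirs[i].shape[1] - 1
                 for i, s in enumerate(close_mic_signals))
    out = np.zeros((2, length))
    for sig, brir in zip(close_mic_signals, brirs):
        for ch in (0, 1):                       # left and right ear channels
            y = fftconvolve(sig, brir[ch])
            out[ch, :len(y)] += y
    return out   # binaural (left/right) auralization of the ensemble
```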

19:00-20:00 Snack Break (Jause)
20:00-21:00 Concert - Tom Beghin, pianoforte

Concert: Beethoven and His French Piano by Tom Beghin (Duration: approx. 70 min.)