Exposure to loud sound during leisure time is identified as a significant risk factor for hearing loss by health authorities worldwide. The current standard defining unsafe exposure rests on the equal-energy hypothesis, according to which the maximum recommended exposure is a tradeoff between sound level and daily exposure duration, a satisfactory recipe except for strongly non-Gaussian intense sounds such as gunshots. Nowadays, sound broadcast by music and videoconference streaming services makes extensive use of digital dynamic range compression. By filling in millisecond-long valleys in the signal so that they are not masked by competing background noise, compression pulls sound-level statistics away from the Gaussian distribution within which the equal-energy hypothesis was framed. The auditory effects of a single 4-hour exposure to the same music were compared in two samples of guinea pigs exposed to either its original or its overcompressed version, both played at the average level of 102 dBA allowed by French regulations. Apart from a temporary shift of otoacoustic emissions at the two lowest tested frequencies, 2 and 3 kHz, music exposure had no detectable cochlear effect, as monitored at 1, 2 and 7 days post-exposure. Conversely, middle-ear muscle strength behaved differently in the two groups: the group exposed to the original music had fully recovered one day after exposure, whereas the group exposed to the overcompressed music remained stuck at about 50% of baseline even after 7 days. Subsamples were then re-exposed to the same music as the first time and sacrificed for density measurements of inner-hair-cell synapses. With either type of music, no difference in synaptic density was found compared to unexposed controls. The present results show that the same piece of music, harmless when played in its original version, induces a protracted deficit in one auditory neural pathway when overcompressed and played at the same level. The induced disorder does not seem to involve inner-hair-cell synapses.
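
The Python sketch below is a toy illustration of the signal-processing point made in the abstract; it is not the study's stimuli or processing chain, and every parameter (tone frequency, valley depth and rate, compressor threshold and ratio, frame length) is invented for demonstration. It builds a signal with millisecond-scale quiet valleys, applies a crude frame-wise compressor, renormalises to the original RMS (the "same average level"), and compares short-term level statistics before and after.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44_100                       # sample rate (Hz)
frame = 441                       # 10 ms frames (invented analysis scale)
n_frames = 1_000                  # 10 s of toy signal
t = np.arange(n_frames * frame) / fs

# Toy "music": a 1 kHz tone whose envelope drops into 10 ms valleys 20 dB down.
env = np.where(rng.random(n_frames) < 0.3, 0.1, 1.0).repeat(frame)
x = env * np.sin(2 * np.pi * 1000 * t)

def compress(sig, threshold=0.2, ratio=8.0, frame_len=frame):
    """Crude frame-wise static compressor (illustrative only): frames whose
    peak exceeds `threshold` are attenuated by `ratio` above the threshold."""
    out = sig.copy()
    for i in range(0, out.size, frame_len):
        peak = np.abs(out[i:i + frame_len]).max() + 1e-12
        if peak > threshold:
            out[i:i + frame_len] *= (threshold + (peak - threshold) / ratio) / peak
    return out

def match_rms(sig, ref):
    """Rescale `sig` to the same RMS, i.e. the same average level, as `ref`."""
    return sig * np.sqrt(np.mean(ref ** 2) / np.mean(sig ** 2))

y = match_rms(compress(x), x)     # "overcompressed" version at the same RMS

def frame_levels_db(sig):
    """Short-term (10 ms) levels relative to the overall RMS, in dB."""
    rms = np.sqrt(np.mean(sig.reshape(-1, frame) ** 2, axis=1))
    return 20 * np.log10(rms / np.sqrt(np.mean(sig ** 2)))

for name, sig in (("original", x), ("overcompressed", y)):
    lv = frame_levels_db(sig)
    print(f"{name:14s}  valleys {lv.min():6.1f} dB re average, spread {lv.std():.1f} dB")
```

With these invented parameters the valleys rise from roughly 18 dB to roughly 8 dB below the overall average and the spread of short-term levels is about halved, showing how two signals at an identical average level (and hence identical dose under the equal-energy hypothesis, where each 3 dB increase in level halves the permissible duration) can deliver very different level statistics.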