DigitalMusicology.jl
All exported names of the submodules that are listed here are reexported by DigitalMusicology.
Pitches
Pitches can be represented in many different ways, for example, as frequencies, piano keys, or the vertical position and accidentals of written notes (spelled pitches). Representations of pitches are collected in the submodule Pitches. They are subtypes of the abstract Pitch type, support additive operations (+, -, zero), and have an order (via isless).
Currently, only MIDI pitches are implemented; other representations will follow. MIDI pitches are represented as chromatic integers, where 60 is middle C.
DigitalMusicology.Pitches.Pitch — Type.
Any pitch type should be a subtype of Pitch.
DigitalMusicology.Pitches.midi — Method.
Creates a MidiPitch from an integer.
DigitalMusicology.Pitches.midis — Method.
Maps midi() over a collection of integers.
DigitalMusicology.Pitches.@midi — Macro.
@midi expr
Replaces all Ints in expr with a call to midi(::Int). This allows the user to write integers where midi pitches are required. Does not work when expr contains integers that should not be converted.
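As a quick orientation, here is a minimal sketch of how these names fit together (assuming DigitalMusicology reexports them, as stated above; the comments only restate the descriptions and are not verified output):
using DigitalMusicology

c4 = midi(60)                 # middle C as a MidiPitch
triad = midis([60, 64, 67])   # a collection of MidiPitches

# the @midi macro rewrites the integer literals, roughly equivalent to midis([60, 64, 67]):
triad2 = @midi [60, 64, 67]

# MidiPitches support additive operations and ordering:
c4 + midi(7)          # transpose up by seven semitones
midi(60) < midi(67)   # ordering via isless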
Pitch Operations
Common operations on pitches and pitch-based structures.
DigitalMusicology.PitchOps.allpcs — Function.
allpcs(P)
Returns a list of all pitch classes of pitch type P.
DigitalMusicology.PitchOps.pc — Function.
Turn a pitch (or pitch collection) into a pitch class (collection).
DigitalMusicology.PitchOps.transposeby — Function.
Transpose a pitch (collection) by some directed interval.
DigitalMusicology.PitchOps.transposeto — Function.
Transpose a pitch (collection) to a new reference point.
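A short sketch of these operations on MIDI pitches (using a midi interval as the second argument of transposeby and transposeto is an assumption based on the descriptions above):
using DigitalMusicology

pc(midi(72))                               # pitch class of a C, octave information removed
pc(midis([60, 64, 67]))                    # pitch classes of a whole collection
transposeby(midis([60, 64, 67]), midi(2))  # shift the collection up a whole tone
transposeto(midis([60, 64, 67]), midi(0))  # move the collection to reference point 0
allpcs(MidiPitch)                          # all pitch classes of the MidiPitch type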
Pitch Collections
The module PitchCollections provides structures built out of pitches and pitch classes.
Represents notes as a bass pitch with (a set of) figures.
An abstract supertype for pitch collections. Since a pitch collection should contain only one type of pitch, PitchCollection is parametric on a subtype of Pitch.
DigitalMusicology.PitchCollections.bass — Function.
Returns the bass pitch of a figured bass representation.
figuredp(pitches)
Represents pitches as a bass pitch and remaining pitch classes relative to the bass.
figuredpc(pitches)
Represents pitches as a bass pitch class and remaining pitch classes relative to the bass.
DigitalMusicology.PitchCollections.figures — Function.
Returns the figure pitch classes of a figured bass representation (including 0 for the bass note).
DigitalMusicology.PitchCollections.pbag — Method.
Represents pitches as a bag of pitches.
DigitalMusicology.PitchCollections.pcbag — Method.
Represents pitches as a bag (vector) of pitch classes.
DigitalMusicology.PitchCollections.pcset — Method.
Represents pitches as a set of pitch classes.
pitches(pcoll)
Returns a vector of all pitches in pcoll, to the degree that they can be reconstructed from the representation used by pcoll.
DigitalMusicology.PitchCollections.pitchiter — Function.
pitchiter(pitchcoll)
If the collection has an inner collection of all pitches, this function returns an iterator over the inner collection. The outer collection does not have to implement the iterator interface, since the default implementation for PitchCollections falls back to the inner iterator.
DigitalMusicology.PitchCollections.pset — Method.
Represents pitches as a set of absolute pitches.
DigitalMusicology.PitchCollections.refpitch — Function.
refpitch(pitchcoll)
Returns a unique reference pitch for the pitch collection. This reference should behave consistently with transposeto and transposeby:
transposeto(coll, 0) == transposeby(coll, -refpitch(coll))
transposeequiv(pitchcoll)
Turns a pitch collection into a representative of its transpositional equivalence class.
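The constructors listed above can be combined with MIDI pitches roughly as follows (a sketch; the exact element types of the results are not shown here):
using DigitalMusicology

ps = midis([48, 64, 67, 72])

pbag(ps)    # bag (multiset) of absolute pitches
pcbag(ps)   # bag of pitch classes
pset(ps)    # set of absolute pitches
pcset(ps)   # set of pitch classes

fb = figuredp(ps)   # bass pitch plus figures relative to the bass
bass(fb)            # the bass pitch
figures(fb)         # the figure pitch classes, including 0 for the bass
pitches(fb)         # reconstruct the pitches as far as the representation allows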
Notes
Notes are pitches with some kind of time information. In its simplest form, a note consists of a pitch, an onset, and an offset. In more complicated contexts, time information might be represented differently.
DigitalMusicology.Notes.Note — Type.
Notes are combinations of pitch and time information.
A simple timed note: pitch + onset + offset.
DigitalMusicology.Notes.pitch — Function.
pitch(note)
Returns the pitch of a note.
Timing
The timing interface provides methods for querying information on timed objects. A timed object may have an onset, an offset, and a duration. As not every object has all of these properties, hasonset, hasoffset, and hasduration should be used to indicate which pieces of information are available. It is usually sufficient to define either onset and offset or onset and duration.
Furthermore, simple time-based distance measures are provided as skipcost and onsetcost.
DigitalMusicology.Timed.duration — Function.
duration(x)
Returns the duration of some timed object x.
DigitalMusicology.Timed.hasduration — Function.
hasduration(T)
Returns true if T is a timed object with a duration.
DigitalMusicology.Timed.hasoffset — Function.
hasoffset(T)
Returns true if T is a timed object with an offset.
DigitalMusicology.Timed.hasonset — Function.
hasonset(T)
Returns true if T is a timed object with an onset.
DigitalMusicology.Timed.offset — Function.
offset(x)
Returns the offset of some timed object x.
DigitalMusicology.Timed.onset — Function.
onset(x)
Returns the onset of some timed object x.
DigitalMusicology.Timed.onsetcost — Method.
onsetcost(timed1, timed2)
Returns the distance between the onsets of timed1 and timed2.
DigitalMusicology.Timed.skipcost — Method.
skipcost(timed1, timed2)
Returns the distance between the offset of timed1 and the onset of timed2.
Meter
Time signatures and meters.
TimeSignature(num, denom)
A simple time signature consisting of a numerator and a denominator.
DigitalMusicology.Meter.barbeatsubb — Method.
barbeatsubb(ts::Vector, timesigmap)
Returns a (bar, beat, subbeat) tuple for every time point in ts in the context of timesigmap. ts must be sorted in ascending order.
DigitalMusicology.Meter.barbeatsubb — Method.
barbeatsubb(t, timesigmap)
Returns a triple (bar, beat, subbeat) that indicates the bar, beat, and subbeat of t in the context of timesigmap. The first bar is 0, and the first beat in each bar is also 0. Subbeats are given as fractions of a beat, so 0 means on the beat and 1/2 means halfway between two beats. Upbeats have a negative bar (usually -1) but non-negative beat and subbeat.
DigitalMusicology.Meter.defaultmeter — Method.
defaultmeter(timesig [, warning=true])
For a time signature with a sufficiently clear meter, returns the meter of the time signature. The meter is given as a list of group sizes in beats, i.e., only the numerator matters. For example, 2/2 -> [1], 4/4 -> [2,2], 3/4 -> [3], 3/8 -> [3], 6/8 -> [3,3], 12/8 -> [3,3,3,3].
DigitalMusicology.Meter.inbar — Method.
inbar(t, timesigmap)
Returns the time point t relative to the beginning of the bar it lies in.
DigitalMusicology.Meter.metricweight — Method.
metricweight(barpos, meter, beat)
Returns the metric weight of a note starting at barpos from the beginning of a bar, according to a meter. The meter is provided as a vector of group sizes in beats. For example, a 4/4 meter consists of two groups of two quarters, so meter would be [2,2] and beat would be 1/4. The total length of the bar should be a multiple of beat. Each onset on a beat gets weight 1, the first beat of each group gets weight 2, and the first beat of the bar gets weight 4 (or 2 if there is only one group). The weight of each subbeat is 1/2^p, where p is the number of prime factors needed to express the subbeat relative to its preceding beat and the beat unit. This way, tuplet divisions are handled properly.
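As an illustration of the weighting rule (a sketch; rational values for barpos and beat are an assumption, and the weights in the comments just restate the rule above):
using DigitalMusicology

metricweight(0//1, [2, 2], 1//4)   # downbeat of the bar: weight 4
metricweight(2//4, [2, 2], 1//4)   # first beat of the second group: weight 2
metricweight(1//4, [2, 2], 1//4)   # ordinary beat: weight 1
metricweight(1//8, [2, 2], 1//4)   # subbeat halfway through a beat: weight 1/2 (p = 1)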
DigitalMusicology.Meter.metricweight — Method.
metricweight(barpos, timesig)
Tries to guess meter and beat from timesig. Otherwise identical to metricweight(barpos, meter, beat).
DigitalMusicology.Meter.metricweight — Method.
metricweight(t, timesigmap [, meter [, beat]])
Returns the metric weight at time point t in the context of timesigmap. Optionally, meter and beat may be supplied as in metricweight(barpos, meter, beat) to override the default values inferred from the time signature at t.
DigitalMusicology.Meter.parsebbs — Method.
parsebbs(str [, convert=true])
Parses a bar-beat-subbeat string of the form "<bar>.<beat>(.<subb>)?", where <bar> and <beat> are integers and <subb> is a fraction n/d or 0. Returns bar, beat, and subbeat as a triple of numbers (Int, Int, Rational{Int}).
If convert is true (default), bar and beat are decreased by 1, so that the first bar is represented as 1._ in the input but as (0, ...) in the output.
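For example (a sketch; the expected results in the comments follow from the description above):
using DigitalMusicology

parsebbs("3.2.1/2")   # expected: (2, 1, 1//2) after the default conversion
parsebbs("1.1")       # no subbeat given; expected bar and beat 0, subbeat 0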
DigitalMusicology.Meter.@time_str — Macro.
time"num/denom"
Creates a TimeSignature object with numerator num and denominator denom.
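For example, the following two expressions should construct the same time signature:
using DigitalMusicology

time"6/8"             # using the string macro
TimeSignature(6, 8)   # using the constructor directly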
Slices
A piece of music might be represented as a list of slices by "cutting" it whenever a note starts or ends. A slice then has an onset, an offset, and a duration, and contains a collection of pitches that sound during the slice.
DigitalMusicology.Slices.Slice — Type.
Slice(onset::N, duration::N, content::T) where {N<:Number, T}
A slice of pitches in a piece. Timing information (type N) is encoded as onset and duration, with methods for obtaining and modifying the offset directly. The content of a slice is typically some representation of simultaneously sounding pitches (type T).
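A small sketch of constructing and modifying a slice (rational time values and a pitch-class set as content are assumptions):
using DigitalMusicology

s = Slice(0//1, 1//4, pcset(midis([60, 64, 67])))

onset(s)                       # 0//1
duration(s)                    # 1//4
offset(s)                      # onset plus duration, i.e. 1//4
setduration(1//2, s)           # a new slice with a longer duration
updateonset(t -> t + 1//1, s)  # a new slice shifted one unit later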
DigitalMusicology.Slices.setcontent — Method.
setcontent(ps, s)
Returns a new slice with content ps.
DigitalMusicology.Slices.setduration — Method.
setduration(dur::N, s)
Returns a new slice with duration dur.
DigitalMusicology.Slices.setoffset — Method.
setoffset(off::N, s)
Returns a new slice with offset off.
DigitalMusicology.Slices.setonset — Method.
setonset(s, on)
Returns a new slice with onset on.
DigitalMusicology.Slices.sg_sumdur — Method.
Returns the sum of slice durations in a slice n-gram (excluding skipped time).
DigitalMusicology.Slices.sg_totaldur — Method.
Returns the total duration of a slice n-gram (including skipped time).
DigitalMusicology.Slices.unwrapslices — Method.
Returns the pitch representations in a vector of slices.
DigitalMusicology.Slices.updatecontent — Method.
updatecontent(f::Function, s::Slice)
Returns a new slice with content f(content(s)).
DigitalMusicology.Slices.updateduration — Method.
updateduration(f::Function, s)
Returns a new slice with duration f(duration(s)).
DigitalMusicology.Slices.updateoffset — Method.
updateoffset(f::Function, s)
Returns a new slice with offset f(offset(s)).
DigitalMusicology.Slices.updateonset — Method.
updateonset(f::Function, s)
Returns a new slice with onset f(onset(s)).
Events
General containers for events. Events can be based either on time points or on time intervals; both event types carry a content value.
IntervalEvent(onset::T, offset::T, content::C)
An event that spans a time interval. Has onset, offset, and duration.
PointEvent(time::T, content::C)
An event that happens at a certain point in time. Has an onset but no offset or duration.
TimePartition(breaks::Vector{T}, contents::Vector{C})
Partitions a time span into half-open intervals [t0,t1), [t1,t2), ..., [tn-1,tn), where each interval has a content. The default constructor takes a vector of time points [t0...tn] and a vector of contents [c1...cn]. There must be one more time point than content items. The whole partition has a total onset, offset, and duration.
A TimePartition may be iterated over (as IntervalEvents), and subintervals can be accessed by their indices. While getting an index returns a complete IntervalEvent, setting an index sets only the content of the corresponding interval.
tp[2] -> IEv<0.5-1.0>("foo")
tp[2] = "bar"
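A sketch of constructing and querying a partition (the time values and content strings are placeholders):
using DigitalMusicology

tp = TimePartition([0.0, 0.5, 1.0, 1.5], ["a", "b", "c"])  # four breaks, three intervals

findevent(tp, 0.75)           # index of the interval containing 0.75
events(tp)                    # the subintervals as IntervalEvents
tp[2] = "b2"                  # replace only the content of the second interval
split!(tp, 0.25, "a1", "a2")  # split the interval containing 0.25 into two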
DigitalMusicology.Events.content — Function.
content(event)
Returns the event's content.
DigitalMusicology.Events.events — Method.
events(timepartition)
Returns a vector of time-interval events that correspond to the subintervals and their contents in timepartition.
DigitalMusicology.Events.findevent — Method.
findevent(timepartition, time)
Returns the index of the interval in timepartition that contains the time point time.
DigitalMusicology.Events.movepoint! — Method.
movepoint!(timepartition, index, distance)
Moves the time point at index by a (positive or negative) distance, shrinking or removing intervals that lie between the point's old and new position.
DigitalMusicology.Events.setpoint! — Method.
setpoint!(timepartition, index, newpos)
Moves the time point at index to a new position, shrinking or removing intervals that lie between the point's old and new position.
DigitalMusicology.Events.split! — Method.
split!(timepartition, at, before, after)
Splits the subinterval [ti, ti+1) of timepartition that contains at into [ti, at) with content before and [at, ti+1) with content after.
Grams
Functions for generating n-grams, scapes, and skipgrams on streams.
In order to generate classical skipgrams, use indexskipgrams. skipgrams provides a more general variant, which allows a custom cost function and a compatibility predicate over pairs of input tokens. While the cost function generalizes the amount of skip from indices to arbitrary costs, the compatibility predicate makes it possible, for example, to ensure non-overlapping skipgrams on overlapping input or to filter out undesired skipgrams early.
DigitalMusicology.Grams.grams — Method.
grams(arr, n)
Return all n-grams in arr. n must be positive, otherwise an error is thrown.
Examples
julia> grams([1,2,3], 2)
2-element Array{Array{Int64,1},1}:
[1, 2]
[2, 3]
DigitalMusicology.Grams.indexskipgrams — Method.
indexskipgrams(itr, k, n)
Return all k-skip-n-grams over itr, with skips based on indices. For a custom cost function, use skipgrams.
Examples
julia> indexskipgrams([1,2,3,4,5], 2, 2)
9-element Array{Any,1}:
Any[1, 2]
Any[1, 3]
Any[2, 3]
Any[1, 4]
Any[2, 4]
Any[3, 4]
Any[2, 5]
Any[3, 5]
Any[4, 5]
DigitalMusicology.Grams.mapscapes — Method.
mapscapes(f, arr)
Map f over all n-grams in arr for n=1:size(arr, 1).
DigitalMusicology.Grams.scapes — Method.
scapes(arr)
Return all n-grams in arr for n=1:size(arr, 1).
Examples
julia> scapes([1,2,3])
3-element Array{Array{Array{Int64,1},1},1}:
Array{Int64,1}[[1], [2], [3]]
Array{Int64,1}[[1, 2], [2, 3]]
Array{Int64,1}[[1, 2, 3]]
DigitalMusicology.Grams.skipgrams — Function.
skipgrams(input, k, n, cost [, pred] [, element_type=type] [, stable=false] [, p=1.0])
Returns an iterator over all generalized k-skip-n-grams found in input.
Instead of defining skips as index steps > 1, a general cost function is used. k is then an upper bound on the sum of all distances between consecutive elements in the gram.
The input needs to be iterable and monotonic with respect to the cost relative to a previous element:
∀ i<j<l: cost(input[i], input[j]) ≤ cost(input[i], input[l])
From this we know that if the current element increases the skip cost of some unfinished gram (prefix) to more than k, then all following elements will increase the cost at least as much, so the prefix can be discarded.
An optional predicate function can be provided to filter potential skipgrams early. The predicate takes a PersistentList of input elements in reverse order (i.e., starting with the element that was added last). The predicate is applied to every prefix, so the list will have at most n elements. By default, all sequences of input elements are valid.
If element_type is provided, the resulting iterator will have a corresponding eltype. If not, it will try to guess the element type based on the input's eltype.
If stable is true, the skipgrams will be ordered with respect to the position of their first element in the input stream. If stable is false (the default), no particular order is guaranteed.
The parameter p allows skipgrams to be included in the output at random (with probability p), for cases where the full list of skipgrams would be too long. A coin with bias p^(1/n) is flipped for every prefix and applies to all completions of that prefix; a skipgram is included only if the flips for all of its prefixes are positive. This saves computation time by throwing away all completions of a discarded prefix, but it might introduce artifacts for the same reason.
Examples
# indexskipgrams expressed in terms of the generalized skipgrams
# (uses enumeration indices to measure the skip cost):
function indexskipgrams(itr, k, n)
    cost(x, y) = y[1] - x[1] - 1          # indices skipped between consecutive elements
    grams = skipgrams(enumerate(itr), k, n, cost)
    map(sg -> map(x -> x[2], sg), grams)  # drop the indices, keep the original items
end
Viewing
Helpers for viewing music.
Midi files in a corpus can be viewed using MuseScore. (This function will probably be moved to the corpora package.)
In Jupyter notebooks, Humdrum **kern strings can be viewed (and played) using Verovio (in fact, the branch of Verovio that is used in the Verovio Humdrum Viewer). Therefore, a musical structure can be visualized by translating it to a HumDrumString.
For example, the Humdrum string
**kern **kern
*clefF4 *clefG2
*k[f#] *k[f#]
*M4/4 *M4/4
=- =-
8GL 8ddL
8AJ 8ccJ
16BLL 2.b;
16A .
16G .
16F#JJ .
2G; .
== ==
*- *-
will be displayed as a rendered score.
As Verovio can display formats other than Humdrum, corresponding types might be added in the future.
HumDrumString("some humdrum")
A wrapper class that enables rendering and MIDI playback of Humdrum code in the browser using Verovio. When a HumDrumString is the result of a Jupyter notebook cell, its content will be rendered to the output cell automatically.
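For example, a short kern fragment could be wrapped and rendered like this (a sketch; the kern content is only an illustration, and verovio() needs to have been called once in the notebook, as described below):
using DigitalMusicology

verovio()   # set up rendering of HumDrumStrings in this notebook

HumDrumString("""
**kern
*clefG2
*M4/4
=-
4c 4e 4g
4d 4f 4a
2e 2g 2cc
==
*-
""")
# returning this value from a notebook cell renders (and can play back) the score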
DigitalMusicology.External.musescore — Function.
musescore(id, [corpus])
Opens the MIDI file of the piece that id refers to using MuseScore. If corpus is not supplied, the current default corpus is used.
DigitalMusicology.External.verovio — Method.
verovio()
Set up display of HumDrumStrings in Jupyter notebooks.
Corpora
Musical corpora contain pieces in various file formats and additional metadata. As different corpora have different internal layouts, DM.jl provides an interface that can be implemented for each type of corpus that is used. A single piece is identified by a piece id and can be loaded in different representations that may contain different pieces of information about the piece, e.g. as a note list from MIDI files or as metadata from JSON or CSV files. The implementation of a corpus must provide methods to list all possible piece ids. Piece ids may be organized hierarchically, e.g., in order to reflect the directory structure of the corpus.
Each corpus implements its own subtype of Corpus, on which the implementation of the general interface dispatches. For convenience, a currently active corpus can be set using setcorpus. Corpus interface methods called without the corpus argument default to this currently active corpus. Each corpus implementation should provide a convenience function useX that creates a corpus object and sets it as active.
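A typical session might look roughly like this (a sketch; the directory path is a placeholder and the available form keywords depend on the corpus):
using DigitalMusicology

usekern("/path/to/kern/corpus")        # create a Kern corpus and make it the active corpus
supportedforms()                       # which forms this corpus can load
ids = allpieces()                      # all piece ids of the active corpus
piece = getpiece(first(ids), :notes)   # load one piece, e.g. as a note list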
DigitalMusicology.Corpora._getpiece — Function.
_getpiece(id, Val{form}(), corpus)
This function is responsible for actually loading a piece. New corpus implementations should implement this method instead of getpiece, which is called by the user.
DigitalMusicology.Corpora.allpieces — Function.
allpieces([corpus])
Returns all piece ids in corpus.
allpieces(dir, [corpus])
Returns all piece ids in and below dir.
DigitalMusicology.Corpora.dirs — Function.
dirs([corpus])
Returns all top-level piece directories in corpus.
dirs(dir, [corpus])
Returns all direct subdirectories of dir.
DigitalMusicology.Corpora.findpieces — Function.
findpieces(searchstring [, corpus])
Searches the corpus for pieces matching searchstring. Returns a dataframe of matching rows.
DigitalMusicology.Corpora.getcorpus — Method.
Get the currently set corpus. Throws an error if the corpus is not set.
DigitalMusicology.Corpora.getpiece — Function.
getpiece(id, form, [corpus])
Loads a piece in some representation. Piece ids are strings, but their exact format depends on the given corpus.
Forms are identified by keywords, e.g.
:slices
:slices_df
:notes
but the supported keywords depend on the corpus.
DigitalMusicology.Corpora.getpieces — Function.
getpieces(ids, form, [datadir])
Like getpiece but takes multiple ids and returns an iterator over the resulting pieces.
DigitalMusicology.Corpora.ls — Function.
ls([corpus])
Returns all top-level pieces and directories in corpus at once.
ls(dir, [corpus])
Returns all subdirectories and pieces in dir at once.
DigitalMusicology.Corpora.piecepath — Function.
piecepath(id, cat, ext, [corpus])
Returns the full path to the file of piece id in category cat with extension ext in corpus.
DigitalMusicology.Corpora.pieces — Function.
pieces(dir, [corpus])
Returns the piece ids in dir.
DigitalMusicology.Corpora.setcorpus — Method.
Set the current corpus.
DigitalMusicology.Corpora.supportedforms — Function.
supportedforms([corpus])
Returns a list of symbols that can be passed to the form parameter in piece-loading functions for the given corpus.
DigitalMusicology.Corpora.topdir — Function.
topdir([corpus])
Returns the main piece directory of corpus.
DigitalMusicology.Corpora.unsetcorpus — Method.
Reset the current corpus to NoCorpus().
Large Archive Corpus
A "LAC" contains an index CSV file and a set of toplevel directories according to different representations of the content of the corpus. Each of these "type"-directories contains the same folder hierarchy below it, including the names of the actual data files, except the file extension. The id of a piece is therefore its path in this common substructure, separated with /
and ending in the filename without extension. The actual file of a certain type can then be retrieved from the id by prepending the name of the type-directory and appending the appropriate file extension.
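For instance, the mapping from ids to files can be pictured as follows (a sketch; the id, type directory, and extension are hypothetical):
id = "composer/opus-1/movement-1"   # path below the shared substructure, without extension

# a file of a given type is found by prepending the type directory
# and appending that type's extension:
midifile = joinpath("midi", id * ".mid")   # "midi/composer/opus-1/movement-1.mid"
csvfile  = joinpath("csv",  id * ".csv")   # "csv/composer/opus-1/movement-1.csv"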
DigitalMusicology.Corpora.LAC.meta — Function.
meta([crp::LACCorpus])
Returns the corpus' meta-dataframe.
Kern Corpus (WIP)
A Kern corpus provides access to the Humdrum **kern corpora provided by Craig Sapp, like the Mozart Piano Sonatas. Note that running some extra commands like make midi-norep might be required first.
Currently, the files can only be read from MIDI, not directly from Humdrum, but this is being worked on.
DigitalMusicology.Corpora.Kern.kerncrp — Method.
kerncrp(dir)
Creates a new KernCorpus with data directory dir.
DigitalMusicology.Corpora.Kern.usekern — Method.
usekern(dir)
Creates a new KernCorpus and sets it as the default corpus.