6.3.2 BeamForming

6.3.2.1 Outline of the node

The BeamForming node performs sound source separation using one of several beamforming methods (e.g., delay-and-sum, LCMV, Griffiths-Jim, and geometrically constrained ICA; see the BF_METHOD parameter for the full list).

The node inputs are the multi-channel complex spectra of the microphone observations and the localization results of the target and noise sources.

The node outputs are a set of complex spectra, one for each separated sound.

6.3.2.2 Necessary files

Table 6.36: Necessary files for BeamForming

| Corresponding parameter name | Description |
| --- | --- |
| TF_CONJ_FILENAME | Transfer function of the microphone array |

6.3.2.3 Usage

When to use

Given a sound source direction, the node separates the sound source arriving from that direction using a microphone array. The sound source direction can be either a value estimated by sound source localization or a constant value.

Typical connection

Figure 6.49 shows a connection example of the BeamForming node. The node has three inputs, as follows:

  1. INPUT_FRAMES takes a multi-channel complex spectrum containing a mixture of sounds, coming from, for example, MultiFFT ,

  2. INPUT_SOURCES takes the results of sound source localization, coming from, for example, LocalizeMUSIC or ConstantLocalization ,

  3. INPUT_NOISE_SOURCES takes the results of sound source localization for noise sources, coming from, for example, LocalizeMUSIC or ConstantLocalization .

The output is the set of separated signals.

\includegraphics[width=.8\textwidth ]{fig/modules/Beamforming.eps}
Figure 6.49: Example of Connections of BeamForming 

6.3.2.4 Input-output and property of the node

Input

INPUT_FRAMES

: Matrix<complex<float> > type. Multi-channel complex spectra. Rows correspond to channels, i.e., complex spectra of waveforms input from microphones, and columns correspond to frequency bins.

INPUT_SOURCES

: Vector<ObjectRef> type. A Vector array of Source type objects in which sound source localization results are stored. Typically, the outputs of the SourceTracker node or the SourceIntervalExtender node are connected to this input.

INPUT_NOISE_SOURCES

: Vector<ObjectRef> type. A Vector array of Source type objects in which noise source localization results are stored. The type is the same as INPUT_SOURCES.

Output

OUTPUT

: Map<int, ObjectRef> type. A pair containing the sound source ID of a separated sound and a 1-channel complex spectrum of the separated sound
(Vector<complex<float> > type).

Parameter

LENGTH

: int type. Analysis frame length [samples], which must be equal to the values at a preceding node (e.g. AudioStreamFromMic or the MultiFFT node). The default is 512.

ADVANCE

: int type. Shift length of a frame [samples], which must be equal to the values at a preceding node (e.g. AudioStreamFromMic or the MultiFFT node). The default is 160.

SAMPLING_RATE

: int type. Sampling frequency of the input waveform [Hz]. The default is 16000.

LOWER_BOUND_FREQUENCY

: int type. The minimum frequency used in BeamForming processing. Frequencies below this value are not processed, and the corresponding output spectrum values are zero. Specify a value between 0 and half of the sampling frequency. The default is 0.

UPPER_BOUND_FREQUENCY

: int type. The maximum frequency used in BeamForming processing. Frequencies above this value are not processed, and the corresponding output spectrum values are zero. LOWER_BOUND_FREQUENCY $<$ UPPER_BOUND_FREQUENCY must be maintained. The default is 8000.

TF_CONJ_FILENAME

: string type. The file name in which the transfer function database of your microphone array is saved. Refer to Section 5.3.1 for the details of the file format. This is valid for all BF_METHOD settings.

SS_METHOD

: string type. Selects the stepsize calculation method for blind source separation. This is valid only when BF_METHOD=GICA, in which case it determines the stepsize of ICA (Independent Component Analysis). Select one of SS_METHOD=FIX, LC_MYU, and ADAPTIVE. If FIX, the fixed value specified by SS_MYU is used as the stepsize. If LC_MYU, the value of LC_MYU is used (SS_MYU=LC_MYU). If ADAPTIVE, the stepsize is determined adaptively.

SS_MYU

: float type. Designates the stepsize used when updating the separation matrix based on blind source separation. The default value is 0.001. This is valid only when BF_METHOD=GICA. When SS_METHOD=FIX, SS_MYU is the fixed stepsize. When SS_METHOD=LC_MYU, this parameter is ignored. When SS_METHOD=ADAPTIVE, SS_MYU is multiplied by the adaptive stepsize, resulting in the final stepsize. By setting this value and LC_MYU to zero and passing a separation matrix of delay-and-sum beamformer type as INITW_FILENAME, processing equivalent to delay-and-sum beamforming is performed when BF_METHOD=GICA.

LC_METHOD

: string type. Selects the stepsize calculation method for separation based on geometric constraints. This is valid only when BF_METHOD=LCMV, GJ, or GICA. This parameter affects the stepsize of the source separation based on geometric constraints (GC). Select one of LC_METHOD=FIX and ADAPTIVE. If FIX, the fixed value specified by LC_MYU is used as the stepsize. If ADAPTIVE, the stepsize is determined adaptively.

LC_MYU

: float type. Designates the stepsize used when updating the separation matrix based on geometric constraints. The default value is 0.001. This is valid only when BF_METHOD=LCMV, GJ, or GICA. When LC_METHOD=FIX, LC_MYU is the fixed stepsize. When LC_METHOD=ADAPTIVE, LC_MYU is multiplied by the adaptive stepsize, resulting in the final stepsize. By setting this value and SS_MYU to zero and passing a separation matrix of delay-and-sum beamformer type as INITW_FILENAME, processing equivalent to delay-and-sum beamforming is performed when BF_METHOD=GICA.

ALPHA

: float type. The stepsize for updating correlation matrices when BF_METHOD=MSNR. The default value is 0.99.

NL_FUNC

: string type. The function for computing higher-order correlation matrices. Currently, only TANH (hyperbolic tangent) is supported. This is valid only when BF_METHOD=GICA.

SS_SCAL

: float type. The default value is 1.0. Designates the scale factor of the hyperbolic tangent (tanh) function used in the higher-order correlation matrix calculation when BF_METHOD=GICA. A positive real number must be designated. The smaller the value, the weaker the non-linearity, which makes the calculation closer to a normal correlation matrix calculation.

REG_FACTOR

: float type. The default value is 0.0001. The scaling factor of the spatially white noise added to the noise correlation matrix when BF_METHOD=ML. See below for details.

BF_METHOD

: string type. Designate the sound source separation method. Currently, this node supports the following separation methods:

  • DS : Delay-and-Sum beamforming [1]

  • WDS : Weighted Delay-and-Sum beamforming [1]

  • NULL : NULL beamforming [1]

  • ILSE : Iterative Least Squares with Enumeration [2]

  • LCMV : Linearly Constrained Minimum Variance beamforming [3]

  • GJ : Griffiths-Jim beamforming [4]

  • MSNR : Maximum Signal-to-Noise Ratio beamforming [5]

  • ML : Maximum Likelihood beamforming [6]

  • GICA : Geometrically constrained Independent Component Analysis [7]

ENABLE_DEBUG

: bool type. The default value is false. If true, this node prints the separation status to standard output.
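
As a side note on LOWER_BOUND_FREQUENCY and UPPER_BOUND_FREQUENCY, the sketch below estimates which frequency bins are actually processed for given LENGTH and SAMPLING_RATE values. It assumes the usual real-FFT convention that bin $k$ corresponds to $k \cdot$ SAMPLING_RATE / LENGTH Hz; the node's internal indexing may differ slightly, so treat this as an illustration only.

```python
# Sketch: which FFT bins fall inside [LOWER_BOUND_FREQUENCY, UPPER_BOUND_FREQUENCY].
# Assumes the usual convention that bin k corresponds to k * fs / N Hz.

def active_bins(length=512, sampling_rate=16000,
                lower_bound_frequency=0, upper_bound_frequency=8000):
    """Return the indices of the frequency bins that are actually processed."""
    n_bins = length // 2 + 1                 # bins 0 .. N/2 for a real FFT
    hz_per_bin = sampling_rate / length      # frequency resolution
    return [k for k in range(n_bins)
            if lower_bound_frequency <= k * hz_per_bin <= upper_bound_frequency]

if __name__ == "__main__":
    bins = active_bins()
    print(f"{len(bins)} of {512 // 2 + 1} bins processed "
          f"({bins[0]}..{bins[-1]}); the remaining bins are output as zeros.")
```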

Table 6.37: Parameter list for BF_METHOD=DS, WDS, NULL, ILSE

| Parameter name | Type | Default value | Unit | Description |
| --- | --- | --- | --- | --- |
| LENGTH | int | 512 | [pt] | Analysis frame length. |
| ADVANCE | int | 160 | [pt] | Shift length of frame. |
| SAMPLING_RATE | int | 16000 | [Hz] | Sampling frequency. |
| LOWER_BOUND_FREQUENCY | int | 0 | [Hz] | The minimum frequency used for separation processing. |
| UPPER_BOUND_FREQUENCY | int | 8000 | [Hz] | The maximum frequency used for separation processing. |
| TF_CONJ_FILENAME | string | | | File name of the transfer function database of the microphone array. |
| ENABLE_DEBUG | bool | false | | Enables debug output. |

Table 6.38: Parameter list for BF_METHOD=LCMV

| Parameter name | Type | Default value | Unit | Description |
| --- | --- | --- | --- | --- |
| LENGTH | int | 512 | [pt] | Analysis frame length. |
| ADVANCE | int | 160 | [pt] | Shift length of frame. |
| SAMPLING_RATE | int | 16000 | [Hz] | Sampling frequency. |
| LOWER_BOUND_FREQUENCY | int | 0 | [Hz] | The minimum frequency used for separation processing. |
| UPPER_BOUND_FREQUENCY | int | 8000 | [Hz] | The maximum frequency used for separation processing. |
| TF_CONJ_FILENAME | string | | | File name of the transfer function database of the microphone array. |
| LCMV_LC_METHOD | string | ADAPTIVE | | Stepsize calculation method based on geometric constraints. Select FIX or ADAPTIVE. FIX uses the fixed stepsize specified by LC_MYU. |
| LCMV_LC_MYU | float | 0.001 | | Stepsize for updating a separation matrix based on geometric constraints. If LC_METHOD=FIX, LC_MYU is the fixed stepsize. If LC_METHOD=ADAPTIVE, the stepsize is determined by an adaptive stepsize method and multiplied by LC_MYU. |
| ENABLE_DEBUG | bool | false | | Enables debug output. |

Table 6.39: Parameter list for BF_METHOD=GJ

| Parameter name | Type | Default value | Unit | Description |
| --- | --- | --- | --- | --- |
| LENGTH | int | 512 | [pt] | Analysis frame length. |
| ADVANCE | int | 160 | [pt] | Shift length of frame. |
| SAMPLING_RATE | int | 16000 | [Hz] | Sampling frequency. |
| LOWER_BOUND_FREQUENCY | int | 0 | [Hz] | The minimum frequency used for separation processing. |
| UPPER_BOUND_FREQUENCY | int | 8000 | [Hz] | The maximum frequency used for separation processing. |
| TF_CONJ_FILENAME | string | | | File name of the transfer function database of the microphone array. |
| GJ_LC_METHOD | string | ADAPTIVE | | Stepsize calculation method based on geometric constraints. Select FIX or ADAPTIVE. FIX uses the fixed stepsize specified by LC_MYU. |
| GJ_LC_MYU | float | 0.001 | | Stepsize for updating a separation matrix based on geometric constraints. If LC_METHOD=FIX, LC_MYU is the fixed stepsize. If LC_METHOD=ADAPTIVE, the stepsize is determined by an adaptive stepsize method and multiplied by LC_MYU. |
| ENABLE_DEBUG | bool | false | | Enables debug output. |

Table 6.40: Parameter list for BF_METHOD=GICA

| Parameter name | Type | Default value | Unit | Description |
| --- | --- | --- | --- | --- |
| LENGTH | int | 512 | [pt] | Analysis frame length. |
| ADVANCE | int | 160 | [pt] | Shift length of frame. |
| SAMPLING_RATE | int | 16000 | [Hz] | Sampling frequency. |
| LOWER_BOUND_FREQUENCY | int | 0 | [Hz] | The minimum frequency used for separation processing. |
| UPPER_BOUND_FREQUENCY | int | 8000 | [Hz] | The maximum frequency used for separation processing. |
| TF_CONJ_FILENAME | string | | | File name of the transfer function database of the microphone array. |
| GICA_SS_METHOD | string | ADAPTIVE | | Stepsize calculation method based on blind source separation. Select FIX, LC_MYU, or ADAPTIVE. FIX uses the fixed stepsize specified by SS_MYU. LC_MYU indicates that SS_MYU=LC_MYU. ADAPTIVE adaptively tunes the stepsize. |
| GICA_SS_MYU | float | 0.001 | | Stepsize for updating a separation matrix based on blind source separation. If SS_METHOD=FIX, SS_MYU is the fixed stepsize. If SS_METHOD=LC_MYU, this parameter is ignored. If SS_METHOD=ADAPTIVE, the stepsize is determined by an adaptive stepsize method and multiplied by SS_MYU. |
| GICA_LC_METHOD | string | ADAPTIVE | | Stepsize calculation method based on geometric constraints. Select FIX or ADAPTIVE. FIX uses the fixed stepsize specified by LC_MYU. |
| GICA_LC_MYU | float | 0.001 | | Stepsize for updating a separation matrix based on geometric constraints. If LC_METHOD=FIX, LC_MYU is the fixed stepsize. If LC_METHOD=ADAPTIVE, the stepsize is determined by an adaptive stepsize method and multiplied by LC_MYU. |
| SS_SCAL | float | 1.0 | | Scale factor in the higher-order correlation matrix computation. |
| ENABLE_DEBUG | bool | false | | Enables debug output. |

6.3.2.5 Details of the node

Technical details: The technical details of each separation method can be found in the references listed below.

Brief explanation of sound source separation:

Table 6.41 shows the notation of the variables used in sound source separation problems. Since the source separation is performed frame-by-frame in the frequency domain, all the variables are complex-valued. The separation is performed for all $K$ frequency bins ($1 \leq k \leq K$); here, we omit $k$ from the notation. Let $N$, $M$, and $f$ denote the number of sound sources, the number of microphones, and the frame index, respectively.

Table 6.41: Notation of variables

| Variables | Description |
| --- | --- |
| $\boldsymbol{S}(f) = \left[S_1(f), \dots, S_N(f)\right]^T$ | Complex spectrum of the target sound sources at the $f$-th frame |
| $\boldsymbol{X}(f) = \left[X_1(f), \dots, X_M(f)\right]^T$ | Complex spectrum of the microphone observation at the $f$-th frame, which corresponds to INPUT_FRAMES |
| $\boldsymbol{N}(f) = \left[N_1(f), \dots, N_M(f)\right]^T$ | Complex spectrum of added noise |
| $\boldsymbol{H} = \left[\boldsymbol{H}_1, \dots, \boldsymbol{H}_N\right] \in \mathbb{C}^{M \times N}$ | Transfer function matrix from the $n$-th sound source ($1 \leq n \leq N$) to the $m$-th microphone ($1 \leq m \leq M$) |
| $\boldsymbol{K}(f) \in \mathbb{C}^{M \times M}$ | Correlation matrix of known noise |
| $\boldsymbol{W}(f) = \left[\boldsymbol{W}_1, \dots, \boldsymbol{W}_M\right] \in \mathbb{C}^{N \times M}$ | Separation matrix at the $f$-th frame |
| $\boldsymbol{Y}(f) = \left[Y_1(f), \dots, Y_N(f)\right]^T$ | Complex spectrum of separated signals |

We use the following linear model for the signal processing:

  $\boldsymbol{X}(f) = \boldsymbol{H}\boldsymbol{S}(f) + \boldsymbol{N}(f)$   (28)

The purpose of the separation is to estimate $\boldsymbol {W}(f)$ based on the following equation:

  $\boldsymbol{Y}(f) = \boldsymbol{W}(f)\boldsymbol{X}(f)$   (29)

so that $\boldsymbol{Y}(f)$ approaches $\boldsymbol{S}(f)$. After separation, the estimated $\boldsymbol{W}(f)$ can be saved by setting EXPORT_W=true and specifying a file name in EXPORT_W_FILENAME.
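
To make Eq. (29) concrete, the sketch below applies a separation matrix to one frame of a multi-channel spectrum, one frequency bin at a time. Array shapes follow Table 6.41 ($M$ microphones, $N$ sources, $K$ frequency bins); the function and variable names are illustrative only and are not part of the node.

```python
import numpy as np

# Illustration of Eq. (29): Y(f) = W(f) X(f), applied independently in each
# frequency bin k. Shapes follow Table 6.41: M microphones, N sources, K bins.

def separate_frame(W, X):
    """W: (K, N, M) complex separation matrices, X: (K, M) observed spectrum.
    Returns Y: (K, N), the separated spectra for one frame."""
    K, N, M = W.shape
    assert X.shape == (K, M)
    return np.einsum('knm,km->kn', W, X)   # per-bin matrix-vector product

# Tiny usage example with random data (M=8 mics, N=2 sources, K=257 bins).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, M, N = 257, 8, 2
    W = rng.standard_normal((K, N, M)) + 1j * rng.standard_normal((K, N, M))
    X = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))
    print(separate_frame(W, X).shape)      # -> (257, 2)
```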

TF_CONJ_FILENAME specifies the transfer function matrix $\boldsymbol{H}$, which is pre-measured or pre-calculated. Hereinafter, we denote this pre-measured transfer function as $\hat{\boldsymbol{H}}$ to distinguish it from $\boldsymbol{H}$.

Separation by BF_METHOD=DS,WDS,NULL,ILSE: $\boldsymbol {W}(f)$ is directly determined using $\hat{\boldsymbol {H}}$ and corresponding directions of target and noise sources coming from INPUT_SOURCES and INPUT_NOISE_SOURCES.
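
As a rough illustration of how $\boldsymbol{W}(f)$ can be built directly from $\hat{\boldsymbol{H}}$, the following sketch computes one row of a textbook delay-and-sum separation matrix for a single frequency bin from the steering vector of the target direction. The formula $\boldsymbol{w} = \boldsymbol{h}^{*} / (\boldsymbol{h}^{H}\boldsymbol{h})$ is an assumption taken from standard beamforming references; the node's actual weighting for DS, WDS, NULL, and ILSE may differ.

```python
import numpy as np

# Sketch of a textbook delay-and-sum beamformer built from a measured steering
# vector (one column of H-hat for the target direction, in one frequency bin).

def ds_weights(h_target):
    """h_target: (M,) complex steering vector for the target direction.
    Returns w: (M,) row of the separation matrix, normalized so that w @ h = 1."""
    h = np.asarray(h_target, dtype=complex)
    return h.conj() / (h.conj() @ h)       # unit gain toward the target direction

if __name__ == "__main__":
    # Free-field example: 4 microphones, pure delays encoded as phase shifts.
    h = np.exp(1j * np.array([0.0, 0.3, 0.6, 0.9]))
    w = ds_weights(h)
    print(abs(w @ h))                      # -> 1.0 (distortionless toward target)
```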

Separation by BF_METHOD=LCMV, GJ: The cost function $J_{\textrm{L}}(\boldsymbol{W}(f))$ for updating the separation matrix is defined by the directions of the target and noise sources coming from INPUT_SOURCES and INPUT_NOISE_SOURCES. The equation for updating the separation matrix is, in simplified form:

  $\boldsymbol{W}(f+1) = \boldsymbol{W}(f) + \mu \nabla_{\boldsymbol{W}}\boldsymbol{J}_{\textrm{L}}(\boldsymbol{W}(f))$   (30)

where $\nabla_{\boldsymbol{W}}\boldsymbol{J}_{\textrm{L}}(\boldsymbol{W}) = \frac{\partial \boldsymbol{J}_{\textrm{L}}(\boldsymbol{W})}{\partial \boldsymbol{W}}$. LC_MYU specifies the value of $\mu$. If LC_METHOD=ADAPTIVE, this node computes the adaptive stepsize using the following equation:

  $\mu = \left. \frac{\boldsymbol{J}_{\textrm{L}}(\boldsymbol{W})}{\left| \nabla_{\boldsymbol{W}}\boldsymbol{J}_{\textrm{L}}(\boldsymbol{W})\right|^{2}} \right|_{\boldsymbol{W} = \boldsymbol{W}(f)}$   (31)
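
As an illustration of Eqs. (30)-(31), the sketch below performs one update of the separation matrix with either a fixed or an adaptive stepsize. The cost function $\boldsymbol{J}_{\textrm{L}}$ and its gradient depend on the geometric constraints built from INPUT_SOURCES and INPUT_NOISE_SOURCES and are passed in abstractly; this is a sketch of the update rule, not the node's implementation.

```python
import numpy as np

# One separation-matrix update following Eqs. (30)-(31).
# cost(W) and grad(W) stand for J_L and its gradient, which are not reproduced here.

def update_lc(W, cost, grad, lc_method="ADAPTIVE", lc_myu=0.001):
    g = grad(W)
    if lc_method == "FIX":
        mu = lc_myu                                       # fixed stepsize (LC_MYU)
    else:                                                 # ADAPTIVE, Eq. (31)
        mu = cost(W) / max(np.sum(np.abs(g) ** 2), 1e-12)
        mu *= lc_myu      # per the LC_MYU description, LC_MYU scales the adaptive stepsize
    return W + mu * g, mu                                 # Eq. (30)
```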

Separation by BF_METHOD=GICA: The cost function $J_{\textrm{G}}(\boldsymbol{W}(f))$ for updating the separation matrix is defined by the directions of the target and noise sources coming from INPUT_SOURCES and INPUT_NOISE_SOURCES:

  $J_{\textrm{G}}(\boldsymbol{W}(f)) = J_{\textrm{SS}}(\boldsymbol{W}(f)) + J_{\textrm{LC}}(\boldsymbol{W}(f))$   (32)

where $J_{\textrm{SS}}(\boldsymbol{W}(f))$ is the cost function for blind source separation, and $J_{\textrm{LC}}(\boldsymbol{W}(f))$ is the cost function for source separation based on geometric constraints. The equation for updating the separation matrix is, in simplified form:

  $\boldsymbol{W}(f+1) = \boldsymbol{W}(f) + \mu_{\textrm{SS}} \nabla_{\boldsymbol{W}}\boldsymbol{J}_{\textrm{SS}}(\boldsymbol{W}(f)) + \mu_{\textrm{LC}} \nabla_{\boldsymbol{W}}\boldsymbol{J}_{\textrm{LC}}(\boldsymbol{W}(f))$   (33)

where $\nabla_{\boldsymbol{W}}$ denotes the partial derivative with respect to $\boldsymbol{W}$, as in Eq. (30). The $\mu_{\textrm{SS}}$ and $\mu_{\textrm{LC}}$ in the equation can be specified by SS_MYU and LC_MYU, respectively. If SS_METHOD=ADAPTIVE, $\mu_{\textrm{SS}}$ is adaptively determined by

  $\mu_{\textrm{SS}} = \left. \frac{\boldsymbol{J}_{\textrm{SS}}(\boldsymbol{W})}{\left| \nabla_{\boldsymbol{W}}\boldsymbol{J}_{\textrm{SS}}(\boldsymbol{W})\right|^{2}} \right|_{\boldsymbol{W} = \boldsymbol{W}(f)}$   (34)

If LC_METHOD=ADAPTIVE, $\mu _{\textrm{LC}}$ is adaptively determined by

  $\mu_{\textrm{LC}} = \left. \frac{\boldsymbol{J}_{\textrm{LC}}(\boldsymbol{W})}{\left| \nabla_{\boldsymbol{W}}\boldsymbol{J}_{\textrm{LC}}(\boldsymbol{W})\right|^{2}} \right|_{\boldsymbol{W} = \boldsymbol{W}(f)}$   (35)
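
The sketch below illustrates one GICA update following Eqs. (33)-(35): the blind source separation term and the geometric constraint term are combined, each with its own stepsize. As above, $\boldsymbol{J}_{\textrm{SS}}$, $\boldsymbol{J}_{\textrm{LC}}$, and their gradients are passed in abstractly and are not reproduced here; this is not the node's implementation.

```python
import numpy as np

# One GICA separation-matrix update following Eqs. (33)-(35).

def adaptive_mu(cost, grad_norm_sq):
    return cost / max(grad_norm_sq, 1e-12)          # Eqs. (34) and (35)

def update_gica(W, cost_ss, grad_ss, cost_lc, grad_lc,
                ss_myu=0.001, lc_myu=0.001,
                ss_method="ADAPTIVE", lc_method="ADAPTIVE"):
    g_ss, g_lc = grad_ss(W), grad_lc(W)
    # Per the SS_MYU/LC_MYU descriptions, the adaptive stepsizes are further
    # scaled by SS_MYU and LC_MYU; FIX uses the designated values directly.
    mu_ss = ss_myu if ss_method == "FIX" else \
        ss_myu * adaptive_mu(cost_ss(W), np.sum(np.abs(g_ss) ** 2))
    mu_lc = lc_myu if lc_method == "FIX" else \
        lc_myu * adaptive_mu(cost_lc(W), np.sum(np.abs(g_lc) ** 2))
    return W + mu_ss * g_ss + mu_lc * g_lc          # Eq. (33)
```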

Troubleshooting: Basically, follow the troubleshooting instructions for the GHDSS node.

6.3.2.6 References