Abstract
This article presents a new approach to the problem of simultaneously tracking several people in low-resolution sequences from multiple calibrated cameras. Redundancy among cameras is exploited to generate a discrete 3D colored representation of the scene, which is the starting point of the processing chain. We review how the initiation and termination of tracks influence the overall tracker performance, and present a Bayesian approach to efficiently create and destroy tracks. Two Monte Carlo-based schemes adapted to the incoming 3D discrete data are introduced. First, a particle filtering technique is proposed that relies on a volume likelihood function taking into account both occupancy and color information. Sparse sampling is then presented as an alternative based on sampling the surface voxels in order to estimate the centroid of the tracked people. In this case, the likelihood function is based on local neighborhood computations, thus dramatically decreasing the computational load of the algorithm. A discrete 3D resampling procedure is introduced to drive these samples along time. Multiple targets are tracked by means of multiple filters, and interaction among them is modeled through a 3D blocking scheme. Tests over the CLEAR annotated database yield quantitative results showing the effectiveness of the proposed algorithms in indoor scenarios, and a fair comparison with other state-of-the-art algorithms is presented. We also consider the real-time performance of the proposed algorithm.
1 Introduction
Tracking multiple objects and keeping track of their identities over time in a cluttered dynamic scene is a major research topic in computer vision, largely fostered by the number of applications that benefit from the retrieved information. For instance, multi-person tracking has been found useful for automatic scene analysis [1], human-computer interfaces [2], and detection of unusual behaviors in security applications [3].
A number of methods for camera-based multi-person 3D tracking have been proposed in the literature [4-7]. A common goal in these systems is robustness under the occlusions created by the multiple objects cluttering the scene when estimating the position of a target. Single-camera approaches [8] have been widely employed, but they are vulnerable to occlusions, rotation, and scale changes of the target. In order to avoid these drawbacks, multi-camera tracking techniques exploit spatial redundancy among different views and provide 3D information at the actual scale of the objects in the real world. Integration of data extracted from multiple cameras has been proposed in terms of fusion at the feature level, such as image correspondences [9] or multi-view histograms [10], among others. Information fusion at the data or raw level has been achieved by means of voxel reconstructions [11], polygon meshes [12], etc.
Most multi-camera approaches rely on a separate analysis of each camera view, followed by a feature fusion process to finally generate an output. Exploiting the underlying epipolar geometry of a multi-camera setup toward finding the most coherent feature correspondence among views was first tackled by Mikić et al. [13] using algebraic methods together with a Kalman filter, and further developed by Focken et al. [14]. Exploiting epipolar consistency within a robust Bayesian framework was also presented by Canton-Ferrer et al. [9]. Other systems rely on detecting semantically relevant patterns among multiple cameras to feed the tracking algorithm, as done in [15] by detecting faces. Particle filtering (PF) [16] has been a commonly employed algorithm because of its ability to deal with problems involving multimodal distributions and nonlinearities. Lanz et al. [10] proposed a multi-camera PF tracker exploiting foreground and color information, and several contributions have followed this path [4,7]. Occlusions, a common problem in feature fusion methods, have been addressed in [17] using an HMM to model the temporal evolution of occlusions within a PF algorithm. Information about the tracking scenario can also be exploited toward detecting and managing occlusions, as done in [18] by modeling the occluding elements, such as furniture, in a training phase before tracking. It must be noted that, in this article, we assume that all cameras cover the area under study. Other approaches to multi-camera/multi-person tracking do not require maximizing the overlap of the fields of view of multiple cameras, leading to non-overlapped multi-camera tracking algorithms [19].
Multi-camera/multi-person tracking algorithms based on data fusion prior to any analysis were pioneered by Lopez et al. [20] using a voxel^a reconstruction of the scene. This idea was further developed by the authors in [5,21], finally leading to the present article. To the best of our knowledge, this is the first approach to multi-person tracking exploiting data fusion from multiple cameras as the input of the algorithms. In this article, we first introduce a methodology for multi-person tracking based on a colored voxel representation of the scene as the start of the processing chain. The contribution of this article is twofold. First, we emphasize the importance of the initiation and termination of tracks, usually neglected in most tracking algorithms, which indeed has an impact on the performance of the overall system. A general technique for the initiation/termination of tracks is presented. The second contribution is the filtering step, where two techniques are introduced. The first applies PF to the input voxels to estimate the centroid of the tracked targets. However, this process is far from real-time performance, so an alternative, which we call Sparse Sampling (SS), is proposed. SS aims at decreasing the computation time by means of a novel tracking technique based on the seminal PF principle. Particles no longer sample the state space but instead a magnitude whose expectation produces the centroid of the tracked person: the surface voxels. The likelihood evaluation, relying on occupancy and color information, is computed on local neighborhoods, thus dramatically decreasing the computational load of the overall algorithm. Finally, the effectiveness of the proposed techniques is assessed by means of objective metrics defined in the framework of the CLEAR [22] multi-target tracking database. Computational performance is reviewed toward proving the real-time operation of the SS algorithm.
Fair comparisons with state-of-the-art methods evaluated on the same database are also presented and discussed.
2 Tracker design methodology
Typically, a multi-target tracking system can be depicted as in Figure 1 and comprises a number of elementary modules. Although most articles present techniques that contribute to the filtering module, the overall architecture is rarely addressed, assuming that the remaining blocks are already available. In this section, this scheme is analyzed and a proposal for each module is presented. The filtering step, being our major contribution, is addressed in a separate section.
Figure 1. Multi-person tracking scheme.
2.1 Input and output data
When addressing the problem of multi-person tracking within a multi-camera environment, a strategy for processing this information is needed. Many approaches analyze the images separately and then combine the results using some geometric constraints [10]; this strategy is denoted as information combination by fusion of decisions. However, a major issue in this procedure is dealing with occlusion and perspective effects. A more efficient way to combine information is data fusion [23]. In our case, data fusion leads to combining the information from all images to build up a new data representation, and applying the algorithms directly on these data. Several data representations aggregating the information of multiple views have been proposed in the literature, such as voxel reconstructions [11,24], level sets [25], polygon meshes [12], conexels [26], depth maps [27], etc. In our research, we opted for a colored voxel representation due to both its fast computation and its accuracy.
For a given frame in the video sequence, a set of N_C images is obtained from the N_C cameras (see a sample in Figure 2(a)). Each camera is modeled using a pinhole camera model based on perspective projection, with camera calibration information available. Foreground regions of the input images are obtained using a segmentation algorithm based on the Stauffer-Grimson background learning and subtraction technique [28], as shown in Figure 2(b).
Figure 2. Input data generation example. (a) A sample of the original images. (b) Foreground segmentation of the input images employed by the SfS algorithm. (c) Example of the binary 3D voxel reconstruction. (d) The final colored version shown over a background image.
Redundancy among cameras is exploited by means of a Shape-from-Silhouette (SfS) technique [11]. This process generates a discrete occupancy representation of the 3D space (voxels). A voxel is labeled as foreground or background by checking the spatial consistency of its projection on the N_C segmented silhouettes, finally obtaining the 3D binary reconstruction shown in Figure 2(c). Color information from the original views is then projected onto the surface voxels of this raw reconstruction, producing the colored representation shown in Figure 2(d); we will denote the colored voxel reconstruction at time t as z_t.
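The voxel labeling step of SfS can be sketched as follows; the array shapes, the per-view projection functions, and the `consistency` threshold are illustrative assumptions, not the exact implementation of [11]:

```python
import numpy as np

def reconstruct_voxels(voxel_centers, projections, silhouettes, consistency=1.0):
    """Label each voxel as foreground if its projection falls on the foreground
    silhouette in (a fraction `consistency` of) the views where it is visible.
    voxel_centers: (V, 3) array of 3D points; projections: one callable per
    camera mapping (V, 3) points to (V, 2) integer pixel coordinates;
    silhouettes: one 2D boolean foreground mask per camera."""
    votes = np.zeros(len(voxel_centers), dtype=int)
    visible = np.zeros(len(voxel_centers), dtype=int)
    for project, sil in zip(projections, silhouettes):
        uv = project(voxel_centers)                       # (V, 2) pixel coords
        h, w = sil.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        visible += inside
        fg = np.zeros(len(voxel_centers), dtype=bool)
        fg[inside] = sil[uv[inside, 1], uv[inside, 0]]    # row = y, column = x
        votes += fg
    # foreground voxels: spatially consistent across the views that see them
    return (visible > 0) & (votes >= consistency * visible)
```

In practice the silhouette masks come from the Stauffer-Grimson segmentation of each view, and the calibration data provide the projection functions.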
The resulting colored 3D scene reconstruction is fed to the proposed system, which assigns a tracker to each target; the obtained tracks are then processed by a higher semantic analysis module. Information about the environment (dimensions of the room, furniture, etc.) allows assessing the validity of tracked volumes and discarding false detections.
Finally, the output of the overall tracking algorithm will be a number of hypotheses for the centroid position of each of the targets present in the scene.
2.2 Tracker state and filtering
One of the major challenges in multi-target tracking is the estimation of the number of targets and their positions in the scene, based on a set of uncertain observations. This issue can be addressed from two perspectives. The first is extending the theory of single-target algorithms to multiple targets. This approach defines the working state space as the concatenation of the states of all targets, whose dimensionality grows with the number of targets, rendering the estimation problem computationally demanding. Multi-target tracking can also be tackled by tracking each target independently, that is, maintaining N_T trackers, each with its own single-target state space. This is the approach adopted in this article.
2.3 Track initiation and termination
A crucial factor in the performance of a tracking system is the module that addresses the initiation and termination of tracks. The initiation of a new tracker is independent of the employed filtering technique and relies only on the input data and the current state (position) of the tracks in the scene. On the other hand, the termination of an existing tracker is driven by the performance of the filter.
The initialization of a new filter is determined by the correct detection of a person in the analyzed scene. This process is crucial, and its correct operation drives the overall system's accuracy. However, despite the importance of this step, little attention is paid to it in the design of multi-object trackers in the literature. Only a few articles explicitly mention this process, such as [30], which employs a face detector to detect a person, or [31], which uses scout particle filters to explore the 3D space for new targets. Moreover, it is usually assumed that all targets in the scene are of interest, e.g., people, not accounting for spurious objects, e.g., furniture, shadows, etc. In this section, we introduce a method to properly handle the initiation and termination of filters from a Bayesian perspective.
2.3.1 Track initiation criteria
The 3D input data are first analyzed to extract the connected components present in the scene; these components are the candidates from which new tracks may be initiated. Let X^GT denote the set of annotated ground-truth target positions employed to train the detection step. We will consider the region of influence of a target with centroid x as the ellipsoid centered at x modeling the volume typically occupied by a person. A mapping is defined such that every x_j ∈ X^GT is associated with a connected component, that is, x_j is assigned to the component with the largest volume enclosed in its region of influence. It must be noted that some x_j might not have any component assigned. Finally, the connected components grouped in this way are characterized by a set of features, listed in Table 1, that feed a person/no-person classifier.
Table 1. Features employed by the person/no-person classifier.
In order to characterize the objects to be tracked and to choose the best classifier, we performed an exploratory data analysis [32], which allows us to contrast the underlying hypotheses of the classifiers with the actual data. Histograms of the features are computed, as shown in Figure 3, together with scatter plots depicting the cross-dependencies among all features. Observing Figure 3, we see that some variables are easily separable, e.g., weight, height, and bounding box size. Moreover, they show low cross-dependency with the other features.
Figure 3. Normalized histograms of the variables conforming the feature vector employed by the person/no-person classifier.
A number of standard binary classifiers have been tested and their performances evaluated, namely Gaussian, Mixture of Gaussians (MoG), Neural Networks, K-Means, PCA, Parzen, and Decision Trees [33,34]. Due to the aforementioned properties of the statistical distributions of the features, some classifiers, e.g., Gaussian or PCA, are unable to achieve a good performance. Others, such as K-Means, MoG, or Parzen, require a large number of characterizing elements. Decision trees [33] have reported the best results. Separable variables such as height, weight, and bounding box size are automatically selected to build a decision tree that yields a high recognition rate, with a precision of 0.98 and a recall of 0.99 on our test database.
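As an illustration of how a decision tree separates person from no-person blobs on such easily separable features, a hand-written stump-style sketch follows; the feature names, units, and thresholds are hypothetical, not the ones learned from the CLEAR development data:

```python
def person_classifier(features):
    """Toy decision tree over blob features: height (cm), weight (number of
    foreground voxels), and bounding box volume (cm^3). Returns True for a
    person-like blob. Thresholds are illustrative assumptions."""
    height = features["height"]
    weight = features["weight"]
    bbox_volume = features["bbox"]
    if height < 80:              # too short for a standing/sitting person
        return False
    if weight < 500:             # too few foreground voxels: likely a shadow
        return False
    return bbox_volume < 4.0e6   # implausibly large blobs: furniture or merges
```

A learned tree would select the split variables and thresholds automatically from the labeled development data, which is what gives the precision/recall figures reported above.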
Another complementary criterion employed in the initiation of new tracks is based on the current state of the tracker: a new track will not be created if its distance to the closest existing target is below a threshold.
2.3.2 Track termination criteria
A target will be deleted if one of the following conditions is fulfilled:
- If two or more tracks fall too close to one another, they might be tracking the same target; hence, only one will be kept alive while the rest will be removed.
- If a tracker's efficiency becomes very low, it might indicate that the target has disappeared and the track should be removed.
- The person/no-person classifier is applied to the set of features extracted from the voxels assigned to a target. If the classifier outputs a no-person verdict for a number of consecutive frames, the target will be considered lost and removed.
3 Voxelbased solutions
The filtering block shown in Figure 1 addresses the problem of keeping consistent trajectories for the tracked objects, resolving crossings among targets and merges with spurious objects (e.g., shadows), and producing an accurate estimation of the centroid of each target based on the input voxel information. Although there are a number of papers addressing multi-camera/multi-person tracking, very few contributions have been based on voxel analysis [20,21].
3.1 PF tracking
PF is an approximation technique for estimation problems where the variables involved do not follow Gaussian uncertainty models and linear dynamics. The current tracking scenario can be tackled by means of this algorithm to estimate the 3D position of a person x_t = (x, y, z)_t at time t, taking as observation the set of colored voxels representing the 3D scene up to time t, denoted as z_{1:t}. For a given target, PF approximates the posterior density p(x_t | z_{1:t}) as a sum of N_p Dirac functions:

p(x_t | z_{1:t}) ≈ Σ_{j=1}^{N_p} w_t^j δ(x_t − x_t^j),

where w_t^j and x_t^j are the weights and positions of the particles.
The Sampling Importance Resampling (SIR) PF avoids the particle degeneracy problem by resampling at every time step. In this case, the weights are set to w_t^j ∝ p(z_t | x_t^j).
Hence, the weights are proportional to the likelihood function that will be computed over the incoming volume z_{t}.
Finally, the best state at time t is estimated as the expectation over the particle set, x̂_t = Σ_{j=1}^{N_p} w_t^j x_t^j.
Essentially, two steps must be defined in the PF operation loop: likelihood evaluation and particle propagation. In the following, we present our proposals for both.
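The generic SIR loop that these two steps plug into can be sketched as follows; the isotropic Gaussian likelihood used in the demonstration and the noise level `sigma` are placeholders for the volume likelihood and propagation model described in the text, not the paper's actual functions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, weights, likelihood, sigma=5.0):
    """One SIR iteration: resample particles proportionally to their weights,
    propagate them with additive Gaussian noise, reweight them with the
    likelihood of the new observation, and return the centroid estimate.
    particles: (N, 3) positions; weights: (N,) normalized; likelihood: maps a
    3D position to a non-negative score."""
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)                    # multinomial resampling
    particles = particles[idx] + rng.normal(0.0, sigma, particles.shape)
    weights = np.array([likelihood(p) for p in particles])
    weights /= weights.sum()                                  # normalize
    estimate = (weights[:, None] * particles).sum(axis=0)     # weighted mean
    return particles, weights, estimate
```

Sections 3.1.1 and 3.1.2 then specify the two problem-dependent pieces: the likelihood evaluated on the voxel volume, and the propagation noise model.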
3.1.1 Likelihood evaluation
The binary and color information contained in z_t will be employed to define the likelihood function of a particle, composed of a foreground (occupancy) term and a color term.
The factor λ controls the influence of each term (foreground and color information) in the overall likelihood function. Empirical tests have shown that λ = 0.8 provides satisfactory results. A more detailed review of the impact of color information on the overall performance of the algorithm is given in Section 5.1.
The likelihood associated with the raw data is defined as the ratio of overlap between the input data z_t and the ellipsoid model of the target placed at the particle's position.
For a given target k, an adaptive reference histogram of its color distribution, H_ref^k, is maintained, and the color likelihood of a particle is derived from the similarity between this reference and the histogram of the volume enclosed by the particle's ellipsoid, where B(·) is the Bhattacharyya distance and H(·) stands for the color histogram extraction operation over the enclosed volume. The reference histogram is updated in a linear manner following the rule

H_ref^k ← (1 − β) H_ref^k + β H(x_t),

where β is an adaptation factor controlling the update rate.
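A minimal sketch of the color term and the reference update, assuming normalized histograms; the exponential mapping with `scale` and the adaptation factor `beta` are illustrative choices, not the paper's tuned values:

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Bhattacharyya distance between two normalized histograms."""
    bc = np.sum(np.sqrt(h1 * h2))          # Bhattacharyya coefficient in [0, 1]
    return np.sqrt(max(0.0, 1.0 - bc))

def color_likelihood(h_ref, h_obs, scale=20.0):
    """Map the histogram distance to a likelihood; `scale` is an assumption."""
    return float(np.exp(-scale * bhattacharyya(h_ref, h_obs) ** 2))

def update_reference(h_ref, h_obs, beta=0.1):
    """Linear update of the adaptive reference histogram (`beta` assumed)."""
    h = (1.0 - beta) * h_ref + beta * h_obs
    return h / h.sum()
```

The slow linear update keeps the reference stable under short occlusions while still following gradual appearance changes.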
3.1.2 Particle propagation
The propagation model has been chosen to be Gaussian noise added to the state of the particles after the resampling step: x_t^j = x_{t−1}^j + n_t, with n_t ∼ N(0, σ²I).
3.1.3 Interaction model
Let us assume that there are N_T independently tracked targets. In practice, they are not fully independent, since each tracker can consider voxels from other targets in both the likelihood evaluation and the 3D resampling step, resulting in target merging or identity mismatches. In order to achieve the most independent set of trackers possible, a blocking method is considered to model interactions. Some blocking proposals can be found in 2D tracking studies [6]; here, an extension to the 3D domain is proposed. Blocking methods rely on penalizing particles whose associated ellipsoid model overlaps with another target's ellipsoid, as shown in Figure 4. Hence, blocking information can be considered when computing the particle weights for the kth target as
Figure 4. Particles from the tracker A (yellow ellipsoid) falling into the exclusion zone of tracker B (green ellipsoid) will be penalized by a multiplicative factor α ∈ [0, 1].
where α ∈ [0, 1] is the multiplicative penalty factor applied to every particle of target k whose associated ellipsoid overlaps the exclusion zone of another tracked target (see Figure 4).
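The 3D blocking idea can be sketched as below; approximating ellipsoid overlap by the normalized distance between centroids, and the value of the penalty `alpha`, are simplifying assumptions of this sketch:

```python
import numpy as np

def blocking_penalty(particle, other_centroids, radii, alpha=0.5):
    """Multiplicative weight penalty for a particle whose ellipsoid (semi-axes
    `radii`) overlaps the exclusion zone of another tracked target.
    Overlap is approximated: two equal ellipsoids intersect roughly when the
    normalized centroid distance is below 1."""
    w = 1.0
    for c in other_centroids:
        d = np.sqrt(np.sum(((particle - c) / (2.0 * radii)) ** 2))
        if d < 1.0:        # inside the exclusion zone: apply the penalty
            w *= alpha
    return w
```

In the full filter this factor multiplies the likelihood-based weight of each particle before normalization, pushing targets apart during crossings.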
3.2 SS tracking
In the presented PF tracking algorithm, the likelihood evaluation can be computationally expensive, thus rendering the approach unsuitable for real-time systems. Moreover, the data are usually noisy and may contain merged blobs corresponding to different targets. A new technique, SS, is proposed as an efficient and flexible alternative to PF.
Assuming a homogeneous 3D object, it can be proved that its centroid can be computed exactly from the surface voxels alone, since the interior voxels do not provide any additional information. This centroid can be estimated through a discrete version of Green's theorem on the surface voxels [35,36], while other approaches obtain an accurate approximation of the centroid using feature points (see [37] for a review). A common assumption of these techniques is the availability of surface data extracted beforehand; hence, a labeling of the voxels in the scene should be available. By assuming that the object under study presents a central symmetry in the xy plane, the centroid can be computed as the average of the positions of the surface voxels:

x̄_t = (1/|S_t|) Σ_{v ∈ S_t} v,

where S_t denotes the set of surface voxels at time t.
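A sketch of this surface-based centroid computation, with surface voxels detected by 6-connectivity (a simplifying assumption of this sketch):

```python
import numpy as np

def centroid_from_surface(voxels, occupancy):
    """Estimate the centroid of a symmetric solid from its surface voxels.
    voxels: (V, 3) integer coordinates of foreground voxels;
    occupancy: set of those coordinates (tuples) for O(1) neighbor lookups.
    A voxel is on the surface if any 6-connected neighbor is background."""
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    surface = [v for v in voxels
               if any(tuple(v + np.array(n)) not in occupancy for n in neighbors)]
    return np.mean(surface, axis=0)
```

For a symmetric solid such as a cube or an ellipsoid the surface average coincides with the volume centroid, which is the property the SS algorithm exploits.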
3.2.1 Degree of mass and degree of surfaceness
Let us model the human body as an ellipsoid, as previously done in the PF approach. In order to test the robustness of the centroid computation of Equation 13 against missing data, we studied the error committed when only a fraction of the input data is employed. A number of voxels (surface or interior voxels in each case) are randomly selected and employed to compute the centroid. The resulting error shows that the surface-based estimation is more sensitive to missing data than the estimation using interior voxels (see Figure 5). Nevertheless, the centroid can still be computed from a reduced number of randomly selected surface voxels with satisfactory accuracy. This idea is the underlying principle of the SS algorithm.
Figure 5. Centroid estimation error when computed with a fraction of surface or interior voxels. The employed ellipsoid had radii s = (30, 30, 100) cm, and voxels with s_v = 2 cm were used.
Let us estimate the centroid of an object by analyzing a randomly selected set of voxels from the whole scene. For each of these voxels, two magnitudes are defined: the degree of mass, which measures the fraction of occupied voxels in its local neighborhood, and the degree of surfaceness, which measures how close the voxel is to the boundary between foreground and background. A voxel deep inside the object attains the maximum degree of mass, while a voxel lying on the object's boundary attains the maximum degree of surfaceness.
3.2.2 Difference with particle filters
There is an obvious similarity between this representation and the formulation of particle filters, but there is a significant difference: while particles in PF represent an instance of the whole body, our samples represent only a small part of its surface.
The presented concepts are applied to define the SS algorithm. Let the sample set be composed of N_s weighted samples placed on voxels of the scene, where N_s is the number of sampling points. When using SS, we are no longer sampling the state space, since each sample no longer represents a whole-body hypothesis; instead, the weighted average of the sample positions produces an estimate of the centroid of the tracked target.
4 SS implementation
In order to define a method to recursively estimate the centroid of the target from this set of samples, the elements of the SS loop are defined in the following: the pseudo-likelihood evaluation, and the sample propagation and resampling steps.
4.1 Pseudolikelihood evaluation
The weight associated with each sample is computed by means of a pseudo-likelihood function combining occupancy and color information. Partial likelihoods are computed on a local domain centered at the position of the sample, that is, a small neighborhood of adjacent voxels. For a given sample, the occupancy term measures the proportion of foreground voxels in this neighborhood. Ideally, when the sample lies exactly on the surface of the target, its neighborhood contains a balanced mixture of foreground and background voxels, and the pseudo-likelihood attains its maximum. Since all computations are restricted to local neighborhoods, no global evaluation over the full target volume is required.

One of the advantages of the SS algorithm is its computational efficiency. The complexity of computing the pseudo-likelihood of a sample is proportional to the size of its local neighborhood, which is orders of magnitude smaller than the ellipsoid evaluation required for each particle in the PF approach.
The parameters defining the neighborhood were set to q = 26 and r = 2, yielding satisfactory results. Larger values of the radius r did not significantly improve the overall algorithm performance but increased its computational complexity.
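A sketch of the occupancy part of the pseudo-likelihood on such a neighborhood; the peak-at-one-half mapping is an illustrative choice consistent with the surface condition described above, not the paper's exact formula:

```python
def surface_pseudo_likelihood(p, occupancy, r=2):
    """Occupancy pseudo-likelihood of a sample lying on a surface: the score
    peaks when the (2r+1)^3 - 1 neighborhood of p is half foreground and half
    background, as expected exactly on a surface. `occupancy` is the set of
    foreground voxel coordinates (tuples)."""
    occupied, total = 0, 0
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dz in range(-r, r + 1):
                if dx == dy == dz == 0:
                    continue
                total += 1
                if (p[0] + dx, p[1] + dy, p[2] + dz) in occupancy:
                    occupied += 1
    ratio = occupied / total
    return 1.0 - 2.0 * abs(ratio - 0.5)   # 1 on an ideal surface, 0 deep inside/outside
```

Note that the cost per sample is fixed by the neighborhood size (26 voxels for r = 1, 124 for r = 2), independent of the target's volume.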
4.2 Sample propagation and 3D discrete resampling
Samples are propagated from frame to frame so that they keep lying on the surface of the tracked target, following its motion.
Given the discrete nature of the 3D voxel space, it is assumed that every sample occupies a single voxel (discrete 3D coordinate) and that no two samples may occupy the same location. Resampling mimics PF: a number of replicas proportional to the normalized weight of each sample are generated. These new samples are then propagated by adding discrete noise to their positions, meaning that the new positions are also constrained to discrete 3D coordinates (see an example in Figure 6). However, two resampled and propagated samples may fall in the same voxel, as shown in Figure 6. In such a case, one of them randomly explores the adjacent voxels until reaching an empty location; if no suitable location is found, the sample is dismissed.
Figure 6. Example of discrete resampling and propagation (in 2D). (a) A sample is resampled and its replicas are randomly placed occupying a single voxel. (b) Two resampled samples fall in the same position (red cell) and one of them (blue) performs a random search through the adjacent voxels to find an empty location.
The choice of sampling the surface voxels of the object instead of its interior voxels is motivated by the fact that propagating samples along the surface rapidly spreads them all around the object, as depicted in Figure 7. Propagating samples on the surface is equivalent to propagating them on a 2D domain; hence, the condition of not placing two samples in the same voxel makes them explore the surface faster (see Figure 6). Interior voxels, on the other hand, propagate in a 3D domain, having more space to explore and therefore spreading around the volume more slowly. Although both (pseudo-)likelihoods should produce a fair estimation of the object's centroid, both sampling sets must fulfill the condition of being randomly spread around the object; otherwise, the centroid estimation will be biased.
Figure 7. Sample positions evolution and centroid estimation. Likelihood based on: (a) interior voxels, or (b) surface voxels.
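The discrete resampling with collision resolution can be sketched as follows; the bounded retry count and the uniform ±1 discrete noise are simplifying assumptions of this sketch:

```python
import random

random.seed(0)

def discrete_resample(samples, weights, occupancy):
    """Replicate samples proportionally to their weights, propagate the
    replicas with discrete +/-1 noise per axis, and resolve collisions: a
    replica landing outside the valid voxel set or on an already placed sample
    retries a bounded number of times, otherwise it is dismissed.
    samples: list of integer (x, y, z) tuples; occupancy: set of valid voxels."""
    n = len(samples)
    replicas = random.choices(samples, weights=weights, k=n)
    placed, out = set(), []
    for v in replicas:
        cand = tuple(c + random.randint(-1, 1) for c in v)   # discrete noise
        tries = 0
        while (cand in placed or cand not in occupancy) and tries < 26:
            cand = tuple(c + random.randint(-1, 1) for c in v)
            tries += 1
        if cand not in placed and cand in occupancy:
            placed.add(cand)                                 # one sample per voxel
            out.append(cand)
    return out
```

The one-sample-per-voxel constraint is what forces the set to spread over the surface rather than collapse onto high-weight voxels.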
4.2.1 Interaction model
The flexibility of a sample-based analysis may sometimes lead to situations where samples spread out too far from the computed centroid. In order to cope with this problem, an intra-target sample interaction model is devised: if a sample is placed in a position farther from the estimated centroid than the target's region of influence allows, its weight is penalized, keeping the sample set compact around the target.
The interaction among targets is modeled in a similar way as in the PF approach. The formulas in Equations 11 and 12 are applied to the samples with the appropriate scaling parameter k.
5 Results and evaluation
In order to assess the performance of the proposed tracking systems, they have been tested on the set of benchmarking image sequences provided by the CLEAR Evaluation Campaign 2007 [22]. Typically, these evaluation sequences involve up to five people moving around in a meeting room. The benchmarking set is formed by two separate datasets, development and evaluation, containing sequences recorded by five of the participating partners. A sample of these data can be seen in Figure 8. The development set consists of 5 sequences of approximately 20 min each, while the evaluation set is formed by 40 sequences of 5 min each, thus adding up to 5 h of data. Each sequence was recorded with four cameras placed in the corners of the room and a zenithal camera placed on the ceiling. All cameras were calibrated and had resolutions ranging from 640 × 480 to 756 × 576 pixels at an average frame rate of f_R = 25 fps. The test environments were 5 × 4 m rooms with occluding elements such as tables and chairs. Images of the empty rooms were also provided to train the background/foreground segmentation algorithms.
Metrics proposed in [4] for multi-person tracking evaluation have been adopted: the Multiple Object Tracking Precision (MOTP), which shows the tracker's ability to estimate precise object positions, and the Multiple Object Tracking Accuracy (MOTA), which expresses its performance at estimating the number of objects and at keeping consistent trajectories. MOTP scores the average metric error when estimating multiple target 3D centroids, while MOTA evaluates the percentage of frames where targets have been missed, wrongly detected, or mismatched.
The aim of a tracking system is to produce a high MOTA and a low MOTP, indicating its ability to correctly track all targets and to estimate their positions accurately. When comparing two algorithms, preference is given to the one with the highest MOTA score.
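Once the per-frame error counts and matched distances have been accumulated over the sequence, the two scores reduce to simple ratios; a minimal sketch following the definitions in [4]:

```python
def mota(misses, false_positives, mismatches, num_ground_truth):
    """MOTA: 1 minus the ratio of tracking errors (misses, false positives,
    identity mismatches) to the total number of ground-truth objects,
    accumulated over all frames."""
    return 1.0 - (misses + false_positives + mismatches) / num_ground_truth

def motp(total_distance, num_matches):
    """MOTP: average metric error over all matched target/hypothesis pairs
    (here in the same units as the accumulated 3D centroid distances)."""
    return total_distance / num_matches
```

Note that MOTA can become negative when the error count exceeds the number of ground-truth objects, which is why it is reported as a percentage rather than bounded to [0, 1].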
5.1 Results
To demonstrate the effectiveness of the proposed multi-person tracking approaches, a set of experiments was conducted over the CLEAR 2007 database. The development part of the dataset was used to train the initiation/termination-of-tracks modules, as described in Section 2.3, and the remaining test part was used for our experiments.
First, the multi-camera data are preprocessed, performing the foreground/background segmentation and the 3D voxel reconstruction. In order to analyze the dependency of the tracker's performance on the resolution of the 3D reconstruction, several voxel sizes were employed, namely s_v = {2, 5, 10, 15} cm.

In both types of filters, SS and PF, three parameters drive the performance of the algorithm: the voxel size s_v, the number of samples/particles, and the use of color information. Their influence on the MOTP and MOTA scores is shown in Figure 9 and discussed in the following.
Figure 9. MOTP and MOTA scores for the SS and the PF techniques using raw and colored voxels. Several voxel sizes sv = {2, 5, 10, 15} cm have been used in the experiments.
- Number of samples/particles: There is a dependency between the MOTP score and the number of particles/samples, especially for the SS algorithm. The contribution of a new sample to the centroid estimation in SS has less impact than the addition of a new particle in PF, hence the slower decay of the MOTP curves for SS than for PF. Regarding the MOTA score, there is no significant dependency on N_s or N_p. Two factors drive the MOTA of an algorithm: the track initiation/termination modules, which mainly contribute to the ratio of misses and false positives, and the filtering step, which has an impact on the mismatch ratio. The low dependency of MOTA on N_s or N_p shows that most of the impact on this score is due to the particle/sample propagation and interaction strategies rather than the number of particles/samples itself. Moreover, the influence on the MOTA score is tightly correlated with the track initiation/termination policy. This assumption was experimentally validated by testing several classification methods (Mixture of Gaussians, PCA, Parzen, and K-Means) in the initiation/termination modules, yielding drops in the MOTA score commensurate with their reduced ability to correctly classify a blob as person/no-person.
- Voxel size: Scenes reconstructed with a large voxel size do not capture all spatial details and may miss some objects, thus decreasing the performance of the system (both for SS and PF). It can be observed that the MOTP and MOTA scores improve as the voxel size decreases.
- Color features: Color information improves the performance of SS and PF in both MOTP and MOTA scores. First, there is an improvement when using color information for a given voxel size, especially for the SS algorithm. Moreover, the smaller the voxel size, the more noticeable the difference between the experiments using raw and color features. This effect is supported by the fact that color characteristics are better captured with small voxel sizes. The performance improvement when using color in the SS algorithm is more noticeable since samples are placed in the regions with a high likelihood of belonging to the target. For instance, this effect is evident in cases where the subject is sitting and the samples concentrate on the upper body, disregarding the chair. In the SS algorithm, the MOTP score benefits from this efficient sample placement. The PF algorithm is constrained to evaluate the color likelihood over the ellipsoid defined in Equation 9, thus not being able to differentiate between parts of the blob that do not belong to the tracked target. Color information used within the filtering loop leads to better distinguishability among blobs, thus reducing the mismatch ratio and slightly improving the MOTA score. Merging of adjacent blobs and complex crossings among targets are also correctly resolved. An example of the impact of color information is shown in Figure 10, where the use of color avoids a mismatch between two targets. This effect is more noticeable when the targets in the scene are dressed in different colors.
Figure 10. Zenithal view of two comparative experiments showing the influence of color in the SS algorithm. The crossover between two targets is correctly tackled when using color information whereas using only raw features leads to a mismatch and, afterwards, a track loss (white ellipsoid) and the initiation of a new one (cyan ellipsoid).
We can compare the results obtained by SS and PF with other algorithms evaluated on the same CLEAR 2007 database, whose scores are reported in Table 2. Most of these methods exploited multi-view information, with the exception of [31], which only used the zenithal camera, facing the associated distortion and perspective problems. PF is the most employed technique due to its suitability to the characteristics of this problem, although the Kalman filtering used by [15] provided fair results when fed with higher semantic features extracted from the input data (in this case, faces). Note the low false-positive score of this system, a consequence of the unlikely event of detecting a face in a spurious object. A 3D voxel reconstruction was used as the input data in [5], together with a simple track management system. The remaining methods [7,31] relied on a fixed human body appearance model similar to the ellipsoidal region of interest used in our PF proposal; the novelty of these methods is their strategies to combine the information coming from the analysis of the different views without performing any 3D reconstruction. Comparing the best-performing of these tracking systems [31]^c with our two approaches, we obtain relative improvements of Δ(MOTP, MOTA)_SS = (7.63, 17.13)% and Δ(MOTP, MOTA)_PF = (5.16, 7.15)%.
In order to visually show the performance of the SS algorithm, some videos corresponding to the most challenging tracking scenarios have been made available at http://www.cristiancanton.org.
5.2 Computational performance
Comparing the obtained metrics among different algorithms gives an idea of their performance in a scenario where computational complexity is not taken into account. An analysis of the operation time of several algorithms under the same conditions, together with the produced MOTP/MOTA metrics, provides a more informative and fairer comparison tool. Although there is no standard procedure to measure the computational performance of a tracking process, we devised a method to assess the computational efficiency of our algorithms and present a comparative study.
The RTF (real-time factor), together with the associated MOTP/MOTA performance measures (on the two vertical axes), of the SS and PF algorithms when dealing with raw and colored input voxels is presented in Figure 11. This factor is a proportional measure of the speed of the algorithm: RTF = 1 corresponds to real-time operation, while RTF > 1 and RTF < 1 indicate faster and slower operation, respectively. Each point of every curve is the result of an experiment conducted over the whole CLEAR data set for a given number of samples/particles of each algorithm.
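The real-time factor described above can be computed directly from the frame throughput of a tracking run. The sketch below assumes a hypothetical helper name and a 25 fps capture rate, which is typical for the PAL cameras used in smart-room recordings but is not stated in the text:

```python
def real_time_factor(frames_processed: int, elapsed_seconds: float,
                     capture_fps: float = 25.0) -> float:
    """Speed of the tracker relative to the capture rate.

    RTF = 1 means real-time operation; RTF > 1 means faster than
    real time and RTF < 1 slower.
    """
    processing_fps = frames_processed / elapsed_seconds
    return processing_fps / capture_fps

# A run that processes 500 frames of 25 fps video in 40 s runs at
# 12.5 fps, i.e. RTF = 0.5 (half of real-time speed).
```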
Figure 11. Computational performance comparison between PF and SS using several voxel sizes
The first noticeable characteristic of these charts is that, given the computational complexity of each algorithm, the RTF of SS is always higher than that of PF under the same operating conditions. Similarly, the computational load is higher when analyzing colored inputs than raw ones, and all the plotted curves attain lower RTF values as the voxel size decreases. These results lead to the conclusion that the SS algorithm is able to produce similar and, in some cases, better results than the PF algorithm at a lower computational cost.
6 Conclusions
In this article, we have presented a number of contributions to the multi-person tracking task in a multi-camera environment. A block representation of the whole tracking process allowed us to identify the performance bottlenecks of the system and to devise efficient solutions for each of them. Real-time performance of the system was a major goal; hence, efficient tracking algorithms have been produced, together with an analysis of their performance.
The performance of these systems has been thoroughly tested over the CLEAR database and quantitatively compared through two scores: MOTP and MOTA. A number of experiments have been conducted to explore the influence of the resolution of the 3D reconstruction and of the color information. Results have been compared with other state-of-the-art algorithms evaluated with the same metrics on the same testing data.
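For reference, the two CLEAR scores follow standard definitions: MOTA penalizes misses, false positives, and identity mismatches relative to the total number of ground-truth objects, while MOTP averages the position error over all matched ground-truth/hypothesis pairs. The helper below is our own minimal sketch of those formulas, not the evaluation tool used for the reported numbers:

```python
def clear_mot_scores(misses: int, false_positives: int, mismatches: int,
                     ground_truth_objects: int,
                     matched_distances: list[float]) -> tuple[float, float]:
    """CLEAR MOT metrics.

    MOTA = 1 - (misses + FPs + mismatches) / total ground-truth objects
    MOTP = mean distance error over all matched object/hypothesis pairs
           (for 3D tracking, a distance in mm: lower is better)
    """
    mota = 1.0 - (misses + false_positives + mismatches) / ground_truth_objects
    motp = sum(matched_distances) / len(matched_distances)
    return motp, mota
```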
The relevance of the initiation and termination of filters has been proven, since these modules have a major impact on the MOTA score. However, most articles in the literature do not specifically address the operation of these modules. We proposed a statistical classifier based on classification trees to discriminate blobs between the person/no-person classes. This classifier was trained on data from the development part of the employed database, and a number of features (namely weight, height, top in z-axis, and bounding box size) were extracted and provided as its input. A further criterion, proximity to already existing tracks, was employed to decide whether to create or destroy a track. The performance scores in Table 2 for the PF and SS systems present the lowest values for the false positive (FP) and missed target (Miss) ratios, thus supporting the relevance of the track initiation and termination modules.
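A trained classification tree amounts to a cascade of threshold tests on the blob features listed above. The sketch below is a hand-written stand-in for such a tree, where each test mirrors one split a CART learner might choose; the threshold values are illustrative assumptions, not the values learned from the development data:

```python
from typing import NamedTuple

class BlobFeatures(NamedTuple):
    weight: int         # number of voxels in the blob
    height: float       # bounding-box height (m)
    top_z: float        # z-coordinate of the highest occupied voxel (m)
    bbox_volume: float  # bounding-box volume (m^3)

def is_person(f: BlobFeatures) -> bool:
    """Illustrative person/no-person decision cascade."""
    if f.weight < 200:                # too few voxels: reconstruction noise
        return False
    if not (1.2 <= f.top_z <= 2.1):   # plausible head-height range
        return False
    if f.height < 0.9:                # people (even seated) exceed this
        return False
    return f.bbox_volume < 1.5        # reject furniture-sized blobs
```

In practice, the splits and thresholds would be produced automatically by a CART-style learner fitted to labeled development blobs rather than set by hand.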
Two proposals for the filtering step of the tracking system have been presented: PF and SS. An independent tracker was assigned to every target, and an interaction model among them was defined. The PF technique proved robust and led to state-of-the-art results, but its computational load was unaffordable for small voxel sizes. As an alternative, the SS algorithm has been presented, achieving similar and, on some occasions, better performance than PF at a lower computational cost. Its sample-based estimation of the centroid allowed better adaptation to noisy data and better discrimination among merged blobs. In both PF and SS, color information provided a useful cue that increased the robustness of the system against track mismatches, thus increasing the MOTA score. In SS, color information also allowed better placement of the samples, distinguishing parts belonging to the tracked object from parts of a merged spurious object and leading to a better MOTP score.
Future research on this topic involves multimodal fusion with audio data, both to improve the precision of the tracker (MOTP) and to avoid mismatches among targets, thus improving the MOTA score.
End notes
^{a}Analogously to the pixel definition (picture element) as the minimum information unit in a discrete image, the voxel (volume element) is defined as the minimum information unit in a 3D discrete representation of a volume.
^{b}For the sake of simplicity in the notation, pseudolikelihood functions will be denoted as p(·) instead of defining a specific notation for it.
^{c}When selecting the best system, the MOTA score is regarded as the most significant value.
Competing interests
The authors declare that they have no competing interests.
References
1. S Park, MM Trivedi, Understanding human interactions with track and body synergies captured from multiple views. Comput Vis Image Understand 111(1), 2–20 (2008)
2. Project CHIL: Computers in the Human Interaction Loop. [http://chil.server.de]
3. I Haritaoglu, D Harwood, LS Davis, W^{4}: real-time surveillance of people and their activities. IEEE Trans Pattern Anal Mach Intell 22(8), 809–830 (2000)
4. K Bernardin, A Elbs, R Stiefelhagen, Multiple object tracking performance metrics and evaluation in a smart room environment. Proceedings of IEEE International Workshop on Visual Surveillance (2006)
5. C Canton-Ferrer, J Salvador, JR Casas, Multi-person tracking strategies based on voxel analysis. Proceedings of Classification of Events, Activities and Relationships Evaluation and Workshop (Lecture Notes in Computer Science, 2007) 4625, pp. 91–103
6. Z Khan, T Balch, F Dellaert, Efficient particle filter-based tracking of multiple interacting targets using an MRF-based motion model. Proceedings of International Conference on Intelligent Robots and Systems 1(1), 254–259 (2003)
7. O Lanz, P Chippendale, R Brunelli, An appearance-based particle filter for visual tracking in smart rooms. Proceedings of Classification of Events, Activities and Relationships Evaluation and Workshop (Lecture Notes in Computer Science, 2007) 4625, pp. 57–69
8. A Yilmaz, O Javed, M Shah, Object tracking: a survey. ACM Comput Surv 38(4), 1–45 (2006)
9. C Canton-Ferrer, JR Casas, M Pardàs, Towards a Bayesian approach to robust finding correspondences in multiple view geometry environments. Proceedings of 4th International Workshop on Computer Graphics and Geometric Modelling (Lecture Notes in Computer Science, 2005) 3515, pp. 281–289
10. O Lanz, Approximate Bayesian multibody tracking. IEEE Trans Pattern Anal Mach Intell 28(9), 1436–1449 (2006)
11. GKM Cheung, T Kanade, JY Bouguet, M Holler, A real time system for robust 3D voxel reconstruction of human motions. IEEE Conference on Computer Vision and Pattern Recognition 2, 714–720 (2000)
12. J Isidoro, S Sclaroff, Stochastic refinement of the visual hull to satisfy photometric and silhouette consistency constraints. Proceedings of IEEE International Conference on Computer Vision 2, 1335–1342 (2003)
13. I Mikič, S Santini, R Jain, Tracking objects in 3D using multiple camera views. Proceedings of Asian Conference on Computer Vision (2000)
14. D Focken, R Stiefelhagen, Towards vision-based 3D people tracking in a smart room. Proceedings of IEEE International Conference on Multimodal Interfaces, 400–405 (2002)
15. N Katsarakis, F Talantzis, A Pnevmatikakis, L Polymenakos, The AIT 3D audiovisual person tracker for CLEAR 2007. Proceedings of Classification of Events, Activities and Relationships Evaluation and Workshop (Lecture Notes in Computer Science, 2007) 4625, pp. 35–46
16. MS Arulampalam, S Maskell, N Gordon, T Clapp, A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans Signal Process 50(2), 174–188 (2002)
17. K Lien, C Huang, Multi-view-based cooperative tracking of multiple human objects. EURASIP J Image Video Process 8(2), 1–13 (2008)
18. T Osawa, X Wu, K Sudo, K Wakabayashi, H Arai, MCMC based multi-body tracking using full 3D model of both target and environment. Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance, 224–229 (2007)
19. J Black, T Ellis, P Rosin, Multi view image surveillance and tracking. Proceedings of Workshop on Motion and Video Computing, 169–174 (2002)
20. A López, C Canton-Ferrer, JR Casas, Multi-person 3D tracking with particle filters on voxels. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 1, 913–916 (2007)
21. C Canton-Ferrer, R Sblendido, JR Casas, M Pardàs, Particle filtering and sparse sampling for multi-person 3D tracking. Proceedings of IEEE International Conference on Image Processing, 2644–2647 (2008)
22. CLEAR: Classification of Events, Activities and Relationships Evaluation and Workshop. [http://www.clearevaluation.org]
23. DL Hall, SAH McMullen, Mathematical Techniques in Multisensor Data Fusion (Artech House, 2004)
24. KN Kutulakos, SM Seitz, A theory of shape by space carving. Int J Comput Vis 38(3), 199–218 (2000)
25. O Faugeras, R Keriven, Variational principles, surface evolution, PDEs, level set methods and the stereo problem. Proceedings of IEEE EMBS International Summer School on Biomedical Imaging (2002)
26. JR Casas, J Salvador, Image-based multi-view scene analysis using conexels. Proceedings of HCSNet Workshop on Use of Vision in Human-Computer Interaction, 19–28 (2006)
27. V Kolmogorov, R Zabih, What energy functions can be minimized via graph cuts? IEEE Trans Pattern Anal Mach Intell 26(2), 147–159 (2004)
28. C Stauffer, W Grimson, Adaptive background mixture models for real-time tracking. Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition, 252–259 (1999)
29. E Maggio, E Piccardo, C Regazzoni, A Cavallaro, Particle PHD filtering for multi-target visual tracking. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 1, 1101–1104 (2007)
30. F Talantzis, A Pnevmatikakis, AG Constantinides, Audio-visual active speaker tracking in cluttered indoors environments. IEEE Trans Syst Man Cybern B 38(3), 799–807 (2008)
31. K Bernardin, T Gehrig, R Stiefelhagen, Multi-level particle filter fusion of features and cues for audio-visual person tracking. Proceedings of Classification of Events, Activities and Relationships Evaluation and Workshop (Lecture Notes in Computer Science, 2007) 4625, pp. 70–81
32. L Breiman, JH Friedman, RA Olshen, CJ Stone, Classification and Regression Trees (Chapman and Hall, 1993)
33. RO Duda, PE Hart, DG Stork, Pattern Classification (Wiley-Interscience, 2000)
34. JJ Crisco, RD McGovern, Efficient calculation of mass moments of inertia for segmented homogeneous three-dimensional objects. J Biomech 31(1), 97–101 (1998)
35. JG Leu, Computing a shape's moments from its boundary. Pattern Recogn 24(10), 116–122 (1991)
36. L Yang, F Albregtsen, Fast and exact computation of Cartesian geometric moments using discrete Green's theorem. Pattern Recogn 29(7), 1061–1073 (1996)