Batch Cov Est
Abstract
The factor graph has become the standard framework for representing a plethora of robotic navigation problems. One primary
reason for this adoption by the community is the fast and efficient inference that can be conducted over the graph when a
unimodal Gaussian noise model is assumed. However, the unimodal Gaussian noise model assumption does not reflect reality in
many situations, particularly measurements that may include gross outliers (e.g. feature tracking between images, place recognition,
or GNSS multipath). To combat this issue, several methodologies have been proposed for conducting robust inference on factor
graphs. These models work by reducing the contribution of constraints that do not adhere to the specified noise model by scaling
the corresponding elements of the information matrix. A unifying assumption shared by the proposed robust graph inference
algorithms is that the measurement noise model is known a priori and that the specified noise model does not vary with time. In
the situation where the measurement model is not fully known, rejecting the outliers can become far more difficult. To overcome this
issue, a novel method is proposed that utilizes a non-parametric soft clustering algorithm to iteratively estimate the measurement
error covariance matrix. The estimated covariance mixture model is then used within the max-mixtures framework to mitigate
the effect of false constraints. The proposed methodology provides robust optimization in the face of faulty measurements where
little or no information is provided about the measurement uncertainty.
I. INTRODUCTION
The study of sensor fusion for navigation applications is a well-researched field. The techniques used to solve this class of
problems can be broadly classified into two categories: those that marginalize prior information to estimate the current state
(i.e., filtering) and those that retain the prior information in an attempt to solve for the entire trajectory (i.e., smoothing).
Problems that fall within the domain of filtering are generally addressed through the application of a variant of the traditional
Kalman filter [1]. Since the seminal paper on the subject, [2], research on smoothing has been dominated by graph based
methodologies.
When [2] was published, graph-based smoothing was not widely utilized due to the computational complexity of solving the
initial formulation. However, quickly thereafter, methods were proposed to greatly reduce this complexity through the utilization
of factor graphs [3]. The √SAM formulation as presented in [4] was particularly influential as it drew connections between
the factor graph formulation and sparse linear algebra. This idea was later extended to an incremental inference framework in
[5] and [6].
Recent advances in efficient graph-based smoothing formulations have been directed towards robust inference over factor
graphs. These methods can be summarized as attempting to mitigate the effect of erroneous constraints by checking the residual
attributed to a constraint against the measurement noise model. If a constraint does not match the model sufficiently well, then
the corresponding elements of the information matrix are scaled according to the proposed algorithm’s methodology. Some
particularly influential robust models include: switchable constraints [7], max-mixtures [8], and dynamic covariance scaling
[9]. These robust models will be described in greater detail later in this paper.
In this paper we provide a novel extension to robust factor graph inference by addressing the inherent assumption made
within all of the robust factor graph inference methodologies discussed above, which is the notion that the true measurement
model is known a priori . The inference method introduced in this paper relaxes that assumption through the utilization of
an iterative soft-clustering algorithm that estimates the measurement covariance model after each optimization iteration. In
addition, this model has the added benefit of an evolving measurement uncertainty model, which could play a pivotal role
in many safety-critical navigation applications [10]. For example, consider a platform that utilizes GNSS observables driving
from a clear-sky environment (i.e., the interstate) into an urban environment. In urban environments, GNSS observables are
known to be degraded; however, the model proposed in this paper could evolve to minimize the effect of erroneous GNSS
constraints due to multipath or poor satellite geometry.
The remainder of this paper is organized in the following manner. In Section II, we first provide a brief overview of the
current state of robust factor graph optimization. Section III presents our developed methodology to handle robust optimization
when little-to-no information is known about the measurement covariance. In Section IV the proposed algorithm is validated
on a simulated pose graph and a GNSS flight data-set. Finally, the article ends with concluding remarks and a discussion on
future research directions.
∗ Ph.D. Student, Department of Mechanical and Aerospace Engineering at West Virginia University, Morgantown, WV
† Senior Research Electronics Engineer, The US Air Force Research Laboratory, Dayton, OH
‡ Research Assistant Professor, Autonomy and Navigation Technology Center at the Air Force Institute of Technology, Dayton, OH
∗∗ Assistant Professor, Department of Mechanical and Aerospace Engineering at West Virginia University, Morgantown, WV
A. Factor Graph
The factor graph, as proposed in [3], provides a convenient way to factorize a function of several variables into the product
of simplified 1 functions, as represented in Eq. 1.
g(X_1, …, X_n) = Π_{i∈I} f_i(X_i)    (1)
This factorization can be represented graphically as a bipartite graph, G = (X , F, E). In this factorization model, there are
two types of vertices: the state vertices, X , which represent the quantities to be estimated, and the factor vertices, F, which
represent constraints on the state vertices. An edge, Ei , only exists between a state vertex, Xi , and a factor vertex, Fi , if factor
vertex, Fi , constrains the state estimate, Xi .
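As a toy illustration, the factorization of Eq. 1 can be evaluated directly; the two factors below (a prior and an odometry-style constraint) are hypothetical stand-ins for real navigation factors, not factors taken from this paper:

```python
import numpy as np

# Illustrative local factors, each touching only a subset of the state vector.
def prior_factor(x):
    """Constrains x[0] to be near 0."""
    return np.exp(-0.5 * x[0] ** 2)

def odometry_factor(x):
    """Constrains the difference x[1] - x[0] to be near 1."""
    return np.exp(-0.5 * (x[1] - x[0] - 1.0) ** 2)

factors = [prior_factor, odometry_factor]

def g(x):
    """Global function of Eq. 1: g(X1, ..., Xn) = prod_i f_i(X_i)."""
    out = 1.0
    for f in factors:
        out *= f(x)
    return out

print(g(np.array([0.0, 1.0])))  # both factors satisfied exactly -> 1.0
```

In a real graph library the bipartite structure (state vertices, factor vertices, and edges) is maintained explicitly so that the sparsity of the product can be exploited during inference.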
where fi is the system dynamics model, hk is the measurement Jacobian, and Σi , Λj , Ξk are noise models characterizing the
prior state uncertainty, the system dynamics uncertainty, and the measurement update uncertainty, respectively. When inference
over a factor graph simplifies to a least-squares optimization problem, there are several methods of optimization available, such
as Levenberg-Marquardt [11], dog-leg [12], or RISE [13].
C. Robust Methods
Many situations arise where measurements become degraded and the unimodal Gaussian noise model assumption is not valid
due to corrupting erroneous constraints. One of the most commonly used methods to handle this scenario is the M-estimator,
which was proposed in [14] and later extended into the seminal text on the subject [15]. This class of techniques attempts
to make the optimization process more robust to erroneous constraints by replacing the L2 cost function with a modified cost
function that minimizes the effect of measurements whose residuals fall outside of the user-defined kernel width.
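For concreteness, the classic Huber kernel from [14] can be sketched as follows; the kernel width k = 1.345 is a conventional default, not a value taken from this paper:

```python
import numpy as np

def huber_cost(r, k=1.345):
    """Huber M-estimator cost: quadratic for residuals inside the kernel
    width k, linear outside, which bounds the influence of gross outliers
    relative to the plain L2 cost 0.5*r**2."""
    r = np.abs(r)
    return np.where(r <= k, 0.5 * r ** 2, k * (r - 0.5 * k))

print(huber_cost(np.array([0.5, 10.0])))  # small residual: quadratic; large: linear
```

Beyond the kernel width, each additional unit of residual adds only a constant k to the cost, so a single gross outlier can no longer dominate the objective.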
Our discussion now shifts to more recent advances in the field of robust optimization. We will begin the discussion with the
concept of a switchable constraint, which was proposed in [7] as a method to reject false loop-closure constraints. Conceptually,
switchable constraints can be understood as allowing the optimization process to modify the topology of the factor graph through
the concurrent optimization of observation weights and the state estimates. The modified cost function is shown in Eq. 4,
X̂, Ŝ = argmin_{X,S} Σ_{i=1}^{I} ||x_0 − x_i||²_Σ + Σ_{j=1}^{J} ||x_j − f(x_{j−1})||²_Λ + Σ_{k=1}^{K} ||ψ(s_k)(h_k(x_k) − z_k)||²_Ξ + Σ_{l=1}^{L} ||γ_l − s_l||²_Ξ    (4)
where s_k is the switch variable, γ_l is the initial estimate for the switch variable, and ψ(·) is a real-valued function such
that ψ : ℝ → [0, 1].
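A scalar sketch of how a single switched measurement term in Eq. 4 behaves; choosing ψ as a clamp to [0, 1] and folding the information weights into one parameter are both illustrative assumptions:

```python
import numpy as np

def switch_cost(residual, s, gamma=1.0, xi=1.0):
    """Cost contribution of one switched scalar constraint: the measurement
    residual is scaled by psi(s), and a prior term anchors the switch
    variable s at its initial value gamma (both weighted by 1/xi here)."""
    psi = np.clip(s, 0.0, 1.0)          # psi: R -> [0, 1]
    measurement_term = (psi * residual) ** 2 / xi
    switch_prior_term = (gamma - s) ** 2 / xi
    return measurement_term + switch_prior_term

# For an outlier residual, the optimizer can drive s toward 0: it pays the
# small switch-prior penalty to remove the large measurement penalty.
print(switch_cost(10.0, 1.0) > switch_cost(10.0, 0.0))  # -> True
```

This is the mechanism by which the optimizer effectively edits the graph topology: a switch driven to zero disconnects its constraint.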
1 Simplified in this context refers to the domain reduction of the individual functions.
3
One obvious pitfall of the switchable constraints method is the increased size of the search space due to the inclusion of
an additional latent variable for each erroneous constraint, which can increase the time required for an optimizer to converge
[16]. To directly combat this issue, a closed-form approximate solution to the switchable constraints method was proposed [9].
This method is known as dynamic covariance scaling, and it can be implemented like a traditional M-estimator, as shown in
Eq. 5,
s = min(1, 2Φ / (Φ + χ²)),    (5)
where Φ is the inverse of the initial uncertainty on the constraint and χ is the initial factor residual.
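Eq. 5 is simple enough to state directly in code:

```python
def dcs_scale(phi, chi2):
    """Dynamic covariance scaling factor of Eq. 5: s = min(1, 2*phi/(phi + chi2)).
    Residuals consistent with the noise model (chi2 <= phi) are left
    unscaled (s = 1); larger residuals are smoothly down-weighted."""
    return min(1.0, 2.0 * phi / (phi + chi2))

print(dcs_scale(1.0, 1.0))  # consistent residual -> 1.0
print(dcs_scale(1.0, 9.0))  # gross outlier -> 0.2
```

Because s is a closed-form function of the residual, no extra latent variables enter the optimization, which is exactly the advantage over switchable constraints noted above.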
The three methods of robust factor graph optimization discussed above are confined to a unimodal noise model, which in
many cases does not capture the true complexity of the measurement covariance. One common way to extend a unimodal
Gaussian model is through a linear combination of multiple Gaussian distributions; however, this greatly complicates the MAP
estimation as the logarithm cannot be “pushed” inside of the Gaussian summation operation. One method of addressing this
issue was proposed in [8], where it is shown that the summation operator can easily be approximated, as shown in Eq. 6,
P(Y | X) = Σ_i ω_i N(µ_i, Σ_i) ≈ max_i ω_i N(µ_i, Σ_i)    (6)
where the max operator is utilized as a Gaussian component selector to approximate the true Gaussian summation. This
approximation framework is utilized heavily in the algorithm described in the next section.
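A minimal scalar sketch of the max-mixtures component selection in Eq. 6, assuming zero-mean components and illustrative weights (a broad, low-weight "outlier" component next to a tight inlier component):

```python
import numpy as np

def max_mixture_nll(r, weights, sigmas):
    """Max-mixtures approximation of Eq. 6 for a scalar residual r: rather
    than evaluating the log of a Gaussian sum, select the component with the
    highest weighted likelihood and return its negative log-likelihood and
    index. Components are assumed zero-mean here for simplicity."""
    lik = [w * np.exp(-0.5 * (r / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
           for w, s in zip(weights, sigmas)]
    k = int(np.argmax(lik))
    return -np.log(lik[k]), k

# Inlier component (sigma=1, w=0.9) next to a broad outlier component
# (sigma=10, w=0.1): a small residual selects the inlier component.
nll, k = max_mixture_nll(0.5, [0.9, 0.1], [1.0, 10.0])
print(k)  # -> 0
```

Once a component is selected, the problem is locally a standard Gaussian least-squares term again, which is why the max operator keeps inference tractable.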
While all of the previously mentioned methods confront the issue of robust estimation — with varying results — no method
addresses the issue of a poor a priori measurement covariance. Specifically, no proposed method addresses the issue of robust
optimization in a data-degraded environment where the provided measurement covariance does not accurately characterize the
true measurement covariance distribution.
A. Model Overview
The data model utilized in this study, as depicted in Fig. 1, assumes that there is a set of measurements {y_i}_{i=1}^N, where
the measurements can be partitioned into sets of similar observables. Each partition of similar observations can then be fully
characterized by a single Gaussian distribution. Together, the Gaussian distributions used to characterize the partitions of
measurements form a Gaussian mixture model (GMM) that is utilized within the max-mixtures algorithm.
Fig. 1: Graphical representation of the data model with a Dirichlet process.
Within this model, the prior probability that an individual measurement belongs to a Gaussian mixture component is given
by π_k = p(z_i = k). The component weights, π_k, are distributed according to a Dirichlet process, where the Dirichlet process is an
infinite-dimensional² extension of the Dirichlet distribution. The Dirichlet distribution is defined as
π ∼ Dir(α)   if   p(π | α) = ( Γ(Σ_{i=1}^{n} α_i) / Π_{i=1}^{n} Γ(α_i) ) Π_{i=1}^{n} π_i^{α_i − 1} I(π ∈ S),    (7)
where S denotes the probability simplex.
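The infinite-dimensional Dirichlet process can be approximated in code via the standard stick-breaking construction; the truncation level and concentration value below are illustrative assumptions, not parameters taken from the proposed method:

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(beta, n_max=50):
    """Sample mixture weights pi_k from a truncated Dirichlet process via
    stick-breaking: v_k ~ Beta(1, beta) and pi_k = v_k * prod_{j<k}(1 - v_j).
    Truncating at n_max components is a practical approximation of the
    infinite-dimensional process."""
    v = rng.beta(1.0, beta, size=n_max)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

pi = stick_breaking(beta=1.0)
print(pi.sum())  # close to, but below, 1 because of the truncation
```

Smaller concentration values put most of the mass on a few components, which matches the intent here: the number of active mixture components is inferred rather than fixed in advance.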
B. Algorithm Overview
Now, we can proceed to describe the proposed algorithm developed to address the issue of robust optimization when
confronted with little-to-no information about the measurement uncertainty. To aid in this discussion, a graphical depiction of
the proposed method is provided in Fig. 2.
As depicted in Fig. 2, the initialization of the algorithm is composed of two steps. The first step is the construction
of the pose graph using the given initial state estimate and the set of constraints. This can easily be done through the
utilization of one of the commonly used software libraries (e.g., Georgia Tech Smoothing And Mapping (GTSAM) [21] or
general graph optimization (G2O) [22]). The second step is an initial iteration of L2 optimization.
Using the residuals from the initial iteration of optimization, the collapsed Gibbs sampling algorithm described in Algorithm
1 is used to estimate a mixture model that characterizes the true measurement covariance. With the estimated mixture model that
characterizes the current iteration's residuals, the max-mixtures framework is then used to minimize the effect of
erroneous constraints through the scaling of the covariance and Jacobian matrices. This process is continued in an iterative
fashion with the most recent residuals until an exit criterion is achieved by the optimizer.
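To illustrate the idea behind this loop, the sketch below re-estimates an inlier covariance from residuals using a two-component EM iteration; EM is a stand-in here for the collapsed Gibbs sampler of Algorithm 1, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scalar residuals: a tight inlier cluster plus a batch of outliers.
residuals = np.concatenate([rng.normal(0.0, 0.1, 900),   # inliers
                            rng.normal(5.0, 1.0, 100)])  # outliers

# Soft clustering of the residuals: responsibilities (E-step) followed by
# re-estimation of each component's mean, spread, and weight (M-step).
mu = np.array([0.0, 5.0])
sigma = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(20):
    lik = w * np.exp(-0.5 * ((residuals[:, None] - mu) / sigma) ** 2) / sigma
    resp = lik / lik.sum(axis=1, keepdims=True)
    nk = resp.sum(axis=0)
    mu = (resp * residuals[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (residuals[:, None] - mu) ** 2).sum(axis=0) / nk)
    w = nk / len(residuals)

print(sigma[0])  # recovered inlier spread, close to the true 0.1
```

In the proposed method this re-estimation happens after each optimization pass, so the covariance model tracks the most recent residuals rather than a fixed a priori guess.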
This framework is not only robust to erroneous constraints but also to poor estimates of the initial measurement covariance.
The robustness to erroneous constraints is due to the max-mixtures methodology. The additional robustness to a poor initial
estimate of the measurement covariance is gained through the utilization of Gibbs sampling to estimate the true measurement
covariance in an iterative manner.
² By selecting the infinite-dimensional extension of the Dirichlet distribution, there is no need to specify the number of mixture components.
Fig. 2: Flow chart of the proposed method: generate graph → L2 optimization → calculate residuals → soft clustering → max-mixtures optimization → exit condition (if not met, recompute residuals and repeat) → write results.
Fig. 3: The Manhattan 3500 data set. The pose graph on the left is composed of only accurate state constraints. The pose
graph on the right is corrupted by several erroneous constraints, where the erroneous constraints are represented in red.
In conjunction with the proposed methodology, both the switch constraint and the (unmodified) max-mixture approaches
were utilized. As described in the previous sections, both switch constraints and max-mixtures have user-defined parameters.
For the switchable constraints approach, the parameters γ and Ξ, the prior and covariance of the switch constraint, respectively,
were both set to 1, as provided in Table I. For the max-mixtures approach, the distribution weighting and scaling parameters are also
defined in Table I. The user-defined parameters are chosen to be consistent with the recommended values in the literature [24].
TABLE I: Robust Optimization Parameter Definition
Methodology Parameter Value
Max-Mix weighting 0.01
Max-Mix scale factor 1e − 12
Switch Constraint Ξ 1
Switch Constraint γ 1
2) Results:
a) Robustness to Erroneous Constraints: Utilizing the corrupted Manhattan dataset, the proposed methodology is evaluated
alongside two commonly used approaches. To quantify the accuracy of the optimizer, the median of the residual sum of squares
(RSOS) of the X–Y positioning error is reported, as in Fig. 4.
From Fig. 4, it can be seen that the max-mixtures approach, with a pre-defined mixture model, performs considerably worse
than the switchable constraints and clustering techniques as the number of erroneous constraints increases. Additionally, it should
be noted that both switchable constraints and the clustering technique remain relatively constant, with respect to the median RSOS
error, as the number of false constraints is increased; however, the clustering optimization technique provides a smaller bias.
Fig. 4: Median RSOS positioning error as a function of the number of erroneous constraints included in the pose graph.
The proposed clustering approach provides the lowest median RSOS error, compared to the Switchable Constraints [7] or the
unmodified Max-Mixtures [8] approaches, regardless of the number of false constraints.
Having shown that the proposed method is as robust as other state-of-the-art optimization techniques, the discussion can proceed
to the principal benefit of the proposed approach, which is that accurate knowledge of the a priori measurement covariance is
not required. To expand upon this idea, the pose graph corrupted by 1500 erroneous constraints is examined in greater detail.
First, we can extract the residuals from the optimized graph and visually evaluate the performance of the estimated covariance.
A scatter plot of the residuals is provided in the left-hand side of Fig. 5, where the black cluster represents the inlier distribution
and the red scatter represents the residuals of the erroneous constraints. On the right-hand side of Fig. 5, we can see our
estimated inlier covariance encapsulating the inlier residuals. Additionally, the true and estimated measurement covariance
models are provided in Eq. 9, where it can be seen that the estimated distribution closely approximates the true model.
Fig. 5: Estimated measurement inlier distribution for the Manhattan 3500 data set corrupted by 1500 erroneous constraints.
An identity a priori measurement covariance was provided.
P_T = diag(0.02, 0.02, 0.01)    P_E = diag(0.017, 0.015, 0.013)    (9)
We can extend the analysis of the optimizer’s ability to accurately estimate the measurement covariance model by evaluating
the performance as the number of erroneous constraints varies. To quantify the accuracy of the estimated covariance model,
the Frobenius norm, as provided in Eq. 10, of the difference between the true covariance matrix and the estimated matrix
is utilized, as depicted in Fig. 6. From Fig. 6, it should be noted that the error in the estimated covariance distribution is
essentially constant with respect to the number of erroneous constraints.
||C||_F = √( Σ_{m=1}^{M} Σ_{n=1}^{N} |c_{m,n}|² )    (10)
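Using the matrices reported in Eq. 9, the metric of Eq. 10 can be computed directly:

```python
import numpy as np

# True and estimated measurement covariance matrices from Eq. 9.
P_true = np.diag([0.02, 0.02, 0.01])
P_est = np.diag([0.017, 0.015, 0.013])

# Eq. 10 applied to the difference between the two matrices.
err = np.linalg.norm(P_true - P_est, ord='fro')
print(round(err, 4))  # -> 0.0066
```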
Fig. 6: Accuracy of the estimated measurement covariance model, with respect to the Frobenius norm, as a function of the
number of erroneous constraints.
(a) Optimization conducted with the switchable constraints approach. (b) Optimization conducted with the unmodified max-mixtures approach. (c) Optimization conducted with the clustering-based max-mixtures approach.
Fig. 7: Optimization of the Manhattan 3500 data-set when provided with a poor a priori measurement error covariance estimate.
The black line represents the true solution and the blue line represents the specified optimizer's solution.
b) Robustness to Poor Initial Measurement Covariance: We can also test the optimization framework’s ability to robustly
optimize when provided a poor a priori measurement error covariance. To conduct this evaluation, we will utilize the unmodified
(i.e., no false constraints added, and no additional noise added to the graph) Manhattan 3500 dataset. To test the sensitivity
of the optimization routine to the initial measurement error covariance, we provide the optimizer with a measurement error
covariance that is substantially smaller (i.e., scaled by a factor of 1e−6 ) than the true distribution from which the errors are
sampled. This will provide an additional metric to evaluate the robustness of the optimization framework.
To begin the evaluation, we will test the switchable constraint methodology on the modified data-set. The solution generated
by this technique is presented in Fig. 7a, where it can be seen that the solution is not consistent with the true set of
poses. The collapse of the solution generated by the switchable constraint method is to be expected in this scenario because
the observations are not consistent with the provided measurement error covariance model; thus, the optimizer attempts to
de-weight every observation.
Next, the unmodified max-mixtures framework is tested on the modified data-set, where an inaccurate estimate of the a
priori measurement error covariance is provided. The solution generated by the max-mixtures technique is presented in Fig.
7b. This technique, like switchable constraints, provides a poor solution in this scenario. This is to be expected because
the observations are not consistent with the provided measurement error covariance model. The inconsistency between the
observations and the provided covariance forces the max-mixtures approach to de-weight most observations.
Finally, we can evaluate how the non-parametric clustering extension to max-mixtures performs. From Fig. 7c, we can see
that the specified optimization routine is considerably more robust to the poor initial measurement error covariance than the
previous techniques. This is to be expected, because we are concurrently learning the measurement error covariance distribution
while optimizing.
(a) The Phastball UAV flight platform [25], which is equipped with a dual-frequency GNSS receiver. This image originally appeared in [26]. (b) The flight profile of the Phastball UAV, where the black line indicates the 3-D flight profile and the grey line indicates the ground trace.
Fig. 8: The UAV platform and flight profile for the utilized GNSS data-set.
In addition to the kinematic data-set collected on-board the UAV, a data-set was collected concurrently at a static reference
station near the runway. This static data-set provides dual-frequency, 1 Hz GNSS observations. Utilizing these observations, an
accurate reference solution can be generated by a Carrier-Phase Differential GPS (CP-DGPS) based Kalman filter/smoother.
The CP-DGPS solution is generated with the open-source RTKLIB [27] software.
Finally, to enable the evaluation of the robust optimization techniques on this data-set, synthetic faults are added to the
observations. The faults added to the pseudorange observations are sampled from the distribution depicted in Fig. 9, which
is the sum of two Gaussians with means at ±50 meters and a shared variance of 15 meters.
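A sampler for these synthetic faults might look as follows; the paper states a shared variance of 15 meters, and a 15 m standard deviation is assumed here for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_faults(n):
    """Draw n pseudorange faults from an equal-weight mixture of two
    Gaussians centered at +50 m and -50 m with a shared 15 m spread
    (assumed to be a standard deviation)."""
    signs = rng.choice([-1.0, 1.0], size=n)
    return signs * 50.0 + rng.normal(0.0, 15.0, size=n)

faults = sample_faults(10000)
print(np.abs(faults).mean())  # fault magnitudes concentrate around 50 m
```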
Fig. 9: Distribution from which the faults added to the pseudorange observations were generated.
b) GNSS Factor Graph: For this evaluation, a 5-state GNSS factor graph is utilized, where the 5 states to be estimated
are presented in Eq. 11, where δP is the platform's position, T_{z,w} is the residual zenith troposphere bias, and C_b is the receiver
clock bias. To enable the estimation of these states, the dual-frequency pseudorange observations are utilized. For a more
thorough discussion on the construction and optimization of GNSS factor graphs, the reader is referred to [16], [26].
X = [ δP  T_{z,w}  C_b ]^T    (11)
2) Results: Utilizing the data-set discussed in the previous section, we can evaluate the robust optimization techniques
on a GNSS data-set. This evaluation begins with Fig. 10, which depicts the median RSOS positioning error as the number of
erroneous observations increases. From this figure, we can see that all three of the robust optimization techniques provide
similar positioning performance when a small number of erroneous observations are present. However, the RSOS positioning
error grows at a considerably smaller rate for the proposed clustering approach as the number of erroneous constraints is
increased.
Fig. 10: Median RSOS positioning error for the robust optimization techniques as the number of erroneous constraints is varied.
As with the pose graph validation, the proposed clustering approach provides the lowest median RSOS error regardless of the
number of false constraints.
Finally, to provide a reference for the robustness of the positioning performance of the proposed technique, a comparison is
provided in Fig. 11, which shows the east-north positioning solution for both the L2 and clustering-based optimization
methodologies when 10% of the observations contain faults. From this figure, we can see how poorly the L2 optimizer performs
when only 10% of the observations contain faults.
(a) The true East-North ground trace obtained by a Carrier-Phase Differential GPS (CP-DGPS) based Kalman filter/smoother. (b) East-North ground trace obtained by L2 optimization when 10% of the GNSS observations contain faults. (c) East-North ground trace obtained by the proposed clustering algorithm when 10% of the GNSS observations contain faults.
Fig. 11: East-North positioning comparison between L2 and clustering based max-mixture optimization when 10% of the
observations contain faults.
V. CONCLUSION
Within this paper, a method of robust non-parametric graph optimization is proposed. The developed methodology relies
upon 'collapsed' Gibbs sampling of a Dirichlet process to characterize the measurement covariance model through a collection
of Gaussian distributions. This mixture model is fed into the commonly used max-mixtures [8] framework to mitigate the
effect of erroneous constraints. One of the principal advantages of the developed methodology is that the requirement of an
accurate a priori measurement covariance model is relaxed.
To evaluate the performance of the proposed algorithm, two independent data-sets with randomly added erroneous constraints
are utilized. The evaluation shows that the proposed methodology is as robust to erroneous constraints as other state-of-the-art
approaches. Additionally, we show that the estimated covariance model accurately characterizes the true distribution.
Currently, it is believed that this work can be extended on two fronts: incremental updates and decreased run-time. To
make the optimization technique incremental, we are currently looking into the applicability of the concurrent filtering and
smoothing framework [28]. To make the approach faster, we hope to leverage recent advances in efficient variational inference [17].
ACKNOWLEDGMENT
This work was conducted as part of an internship at the Air Force Research Laboratory, through a sub-contract with
MacAulay-Brown, Inc.
REFERENCES
[1] R. E. Kalman, “A new approach to linear filtering and prediction problems,” Journal of basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.
[2] F. Lu and E. Milios, “Globally consistent range scan alignment for environment mapping,” Autonomous robots, vol. 4, no. 4, pp. 333–349, 1997.
[3] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Transactions on information theory, vol. 47,
no. 2, pp. 498–519, 2001.
[4] F. Dellaert and M. Kaess, “Square root sam: Simultaneous localization and mapping via square root information smoothing,” The International Journal
of Robotics Research, vol. 25, no. 12, pp. 1181–1203, 2006.
[5] M. Kaess, A. Ranganathan, and F. Dellaert, “iSAM: Fast incremental smoothing and mapping with efficient data association,” in Robotics and Automation,
2007 IEEE International Conference on. IEEE, 2007, pp. 1670–1677.
[6] M. Kaess, H. Johannsson, R. Roberts, V. Ila, J. Leonard, and F. Dellaert, “iSAM2: Incremental Smoothing and Mapping Using the Bayes Tree,” The
International Journal of Robotics Research, vol. 31, no. 2, 2012.
[7] N. Sunderhauf and P. Protzel, “Switchable Constraints for Robust Pose Graph SLAM,” in Intelligent Robots and Systems, 2012.
[8] E. Olson and P. Agarwal, “Inference on Networks of Mixtures for Robust Robot Mapping,” 2012.
[9] P. Agarwal, G. Tipaldi, L. Spinello, C. Stachniss, and W. Burgard, “Robust Map Optimization Using Dynamic Covariance Scaling,” in International
Conference on Robotics and Automation, 2013.
[10] S. Bedrich and X. Gu, “Gnss-based sensor fusion for safety-critical applications in rail traffic,” Galileo and EGNOS Information Catalogue, p. 8, 2004.
[11] J. J. Moré, “The levenberg-marquardt algorithm: implementation and theory,” in Numerical analysis. Springer, 1978, pp. 105–116.
[12] M. J. Powell, “A new algorithm for unconstrained optimization,” Nonlinear programming, pp. 31–65, 1970.
[13] D. M. Rosen, M. Kaess, and J. J. Leonard, “An incremental trust-region method for robust online sparse least-squares estimation,” in Robotics and
Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012, pp. 1262–1269.
[14] P. J. Huber et al., “Robust estimation of a location parameter,” The Annals of Mathematical Statistics, vol. 35, no. 1, pp. 73–101, 1964.
[15] P. J. Huber, Robust Statistics. Wiley New York, 1981.
[16] R. M. Watson and J. N. Gross, “Robust Navigation In GNSS Degraded Environment Using Graph Optimization,” in ION GNSS+ 2017. the Institute
of Navigation, 2017.
[17] D. M. Blei, M. I. Jordan et al., “Variational inference for dirichlet process mixtures,” Bayesian analysis, vol. 1, no. 1, pp. 121–143, 2006.
[18] H. O. Hartley, “Maximum likelihood estimation from incomplete data,” Biometrics, vol. 14, no. 2, pp. 174–194, 1958.
[19] C. Andrieu, N. De Freitas, A. Doucet, and M. I. Jordan, “An introduction to mcmc for machine learning,” Machine learning, vol. 50, no. 1-2, pp. 5–43,
2003.
[20] P. Resnik and E. Hardisty, “Gibbs sampling for the uninitiated,” University of Maryland Institute for Advanced Computer Studies., Tech. Rep., 2010.
[21] F. Dellaert, “Factor graphs and GTSAM: A hands-on introduction,” Georgia Institute of Technology, Tech. Rep., 2012.
[22] R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, “g2o: A general framework for graph optimization,” in Robotics and Automation
(ICRA), 2011 IEEE International Conference on. IEEE, 2011, pp. 3607–3613.
[23] E. Olson, J. Leonard, and S. Teller, “Fast iterative alignment of pose graphs with poor initial estimates,” in Robotics and Automation, 2006. ICRA 2006
IEEE International Conference on. IEEE, 2006, pp. 2262–2269.
[24] N. Sünderhauf and P. Protzel, “Switchable constraints vs. max-mixture models vs. rrr-a comparison of three approaches to robust pose graph slam,” in
Robotics and Automation (ICRA), 2013 IEEE International Conference on. IEEE, 2013, pp. 5198–5203.
[25] Y. Gu, J. Gross, F. Barchesky, H. Chao, and M. Napolitano, “Avionics design for a sub-scale fault-tolerant flight control test-bed,” in Recent Advances
in Aircraft Technology. InTech, 2012.
[26] R. M. Watson and J. N. Gross, “Evaluation of kinematic precise point positioning convergence with an incremental graph optimizer,” in Position, Location
and Navigation Symposium (PLANS), 2018 IEEE/ION. IEEE, 2018, pp. 589–596.
[27] T. Takasu, “Rtklib: An open source program package for gnss positioning,” 2011.
[28] S. Williams, V. Indelman, M. Kaess, R. Roberts, J. Leonard, and F. Dellaert, “Concurrent filtering and smoothing: A parallel architecture for real-time
navigation and full smoothing,” The International Journal of Robotics Research, no. 12, pp. 1544–1568, 2014.