
Computing "g" during integration


michaelvignos

Newbie

Posts: 7

Joined: Fri Sep 04, 2015 12:51 pm

Posted: Thu Nov 03, 2016 2:19 pm

Computing "g" during integration

In the lecture from 10/17, on slide 26, the equation for g multiplies Phi^P and Phi by a scaling term of 1/(beta0^2*h^2). Where does this scaling term come from?

The reason I ask is that this scaling term seems to be hurting the convergence of my dynamics analysis. As I decrease h, it drives up the value of g and thus the value of my correction factor. If I remove the scaling factor, my solution gets closer to converging, but it still does not fully converge (i.e., the iteration stops because it hits the maximum number of iterations, not because it meets the required tolerance).
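To be concrete, my reading of the slide (reconstructing from my notes, so the exact symbols may be off) is that the constraint rows of g are the scaled constraint equations

\[
\frac{1}{\beta_0^2 h^2}\,\Phi^P(q) = 0,
\qquad
\frac{1}{\beta_0^2 h^2}\,\Phi(q,t) = 0 ,
\]

so as h shrinks, that 1/(beta0^2*h^2) prefactor blows up, which matches what I am seeing in my residual.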

Thanks!

Mike

danielpiombino

Newbie

Posts: 14

Joined: Wed Sep 14, 2016 11:45 pm

Posted: Thu Nov 03, 2016 4:26 pm

Re: Computing "g" during integration

My understanding is that it comes from the two BDF integrations. Phi is a function of q, but q depends on the acceleration through a beta0^2*h^2*(q-double-dot) term, so Phi ends up being a function of beta0^2*h^2*(q-double-dot) as well. I'm having trouble with the specifics too, but I would think that if the scaling weren't there, g would end up a function of h and beta0, which in theory it shouldn't be.
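Writing out what I mean (this is just how I remember the BDF recovery formulas, so double-check against the lecture notes):

\[
\dot{q}_{n+1} = C_{\dot q} + \beta_0 h\,\ddot{q}_{n+1},
\qquad
q_{n+1} = C_q + \beta_0^2 h^2\,\ddot{q}_{n+1},
\]

where C_q and C_{\dot q} collect the history terms from previous steps. So when you view Phi(q_{n+1}) as a function of the unknown acceleration, every sensitivity picks up a factor of beta0^2*h^2.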

What step size are you using that's blowing up?

-Dan

Dan Negrut

Global Moderator

Posts: 833

Joined: Wed Sep 03, 2008 12:24 pm

Posted: Thu Nov 03, 2016 5:36 pm

Re: Computing "g" during integration

That term is there to help avoid ill-conditioning of the Jacobian matrix. It comes into play when you apply the chain rule to compute the sensitivity of your kinematic constraints with respect to the accelerations. Apply the chain rule and you'll see that the scaling cancels out, so the Jacobian is h-free. Granted, you have to divide the residual by that beta0^2*h^2 term, but that's not that bad since the constraint residual is usually very small to start with.
If you don't do this scaling, a swath of your Jacobian matrix (a bunch of rows at the bottom of the Jacobian) will be close to zero, since those entries come multiplied by beta0^2*h^2. That would make your matrix close to singular, and that ends up hurting convergence.
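Here is the chain-rule step spelled out, as a sketch in the notation of the question (not copied verbatim from the slides):

\[
\frac{\partial}{\partial \ddot{q}}\left[\frac{1}{\beta_0^2 h^2}\,\Phi\big(q(\ddot{q})\big)\right]
= \frac{1}{\beta_0^2 h^2}\,\Phi_q\,\frac{\partial q}{\partial \ddot{q}}
= \frac{1}{\beta_0^2 h^2}\,\Phi_q\,\beta_0^2 h^2
= \Phi_q .
\]

Without the 1/(beta0^2*h^2) scaling, those rows of the Jacobian would instead be beta0^2*h^2*Phi_q, and they shrink toward zero as you reduce h; that is the near-singularity I described above.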

I hope this helps a bit.
Dan
