Is it safe to assume that it is the optimized compilation giving incorrect values?
The code can be found at http://directory.fsf.org/science/biology/CTSim.html
I am using the "pjrec" program (in the tools directory), which calls this method in backprojectors.cpp (in the libctsim directory):
void BackprojectTrig::BackprojectView (const double* const filteredProj, const double view_angle)
{
  double theta = view_angle;

  CubicPolyInterpolator* pCubicInterp = NULL;
  if (interpType == Backprojector::INTERP_CUBIC)
    pCubicInterp = new CubicPolyInterpolator (filteredProj, nDet);

  double x = xMin + xInc / 2;   // Rectang coords of center of pixel
  for (int ix = 0; ix < nx; x += xInc, ix++) {
    double y = yMin + yInc / 2;
    for (int iy = 0; iy < ny; y += yInc, iy++) {
      double r = sqrt (x * x + y * y);    // distance of cell from center
      double phi = atan2 (y, x);          // angle of cell from center
      double L = r * cos (theta - phi);   // position on detector

      if (interpType == Backprojector::INTERP_NEAREST) {
        int iDetPos = iDetCenter + nearest<int> (L / detInc);  // calc'd index in the filter raysum array
        if (iDetPos >= 0 && iDetPos < nDet)
          v[ix][iy] += rotScale * filteredProj[iDetPos];
      } else if (interpType == Backprojector::INTERP_LINEAR) {
        double p = L / detInc;        // position along detector
        double pFloor = floor (p);
        int iDetPos = iDetCenter + static_cast<int>(pFloor);
        double frac = p - pFloor;     // fraction distance from det
        if (iDetPos >= 0 && iDetPos < nDet - 1)
          v[ix][iy] += rotScale * ((1 - frac) * filteredProj[iDetPos] + frac * filteredProj[iDetPos + 1]);
      } else if (interpType == Backprojector::INTERP_CUBIC) {
        double p = iDetCenter + (L / detInc);   // position along detector
        if (p >= 0 && p < nDet)
          v[ix][iy] += rotScale * pCubicInterp->interpolate (p);
      }
    }
  }

  if (interpType == Backprojector::INTERP_CUBIC)
    delete pCubicInterp;
}
The resulting values in the image array 'v' vary with the optimization level.
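The behavior looks consistent with x87 excess precision rather than an outright optimizer bug: without optimization, intermediates such as L / detInc tend to be spilled to 64-bit memory, while at -O/-O2 they can stay in 80-bit registers, and a value that lands near a detector-index boundary can then floor or round to a different index. A minimal standalone sketch of how a last-bit difference flips the index (the numbers are illustrative, not taken from CTSim):

#include <cmath>
#include <cstdio>

int main ()
{
  // Two hypothetical values of p = L / detInc that differ only in the
  // last bit, e.g. because one was computed with 80-bit intermediates
  // and the other with 64-bit intermediates.
  double pLow  = std::nextafter (3.0, 0.0);  // largest double below 3.0
  double pHigh = 3.0;                        // exactly 3.0

  // floor() and the int conversion pick different detector indices,
  // so a different filteredProj[] sample would be accumulated into v.
  std::printf ("floor(pLow)  = %d\n", static_cast<int> (std::floor (pLow)));   // prints 2
  std::printf ("floor(pHigh) = %d\n", static_cast<int> (std::floor (pHigh)));  // prints 3
  return 0;
}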
Sean Cody wrote:
I need more details in order to help you out. The -O flags can break some code but it is pretty rare. The GCC man page does a good job explaining the optimizations done for each of the -O levels.
This depends entirely on the errors you are seeing. A random guess would be to turn off loop unrolling, as the optimizer might not be detecting the loop dependencies properly. I've had this bite me in the past, but without details it's a random guess from center field.
On 4-Sep-06, at 1:04 PM, Dan Martin wrote:
I am modifying an open source C++ program. I found that my output was not the same as the original's, so I spent the last 3 days searching for the errors in my program, then making extensive back-modifications to make my program's output identical to the original. I finally went back to the original and made only minimal changes to allow compilation with my makefile.
The errors persisted. Reasoning that the original was probably compiled with optimization, I tried that, and sure enough the differences disappeared (some with -O, and the rest with -O2).
Calculations done mainly with integer math were fine to begin with. Using -O, linear interpolation involving doubles yielded the same results, and all results, including more complex interpolation and double-to-int conversions, were the same using -O2.
I cannot verify my output other than by using the original program.
Which results are correct? Am I to assume that optimization does something bad? [i.e., my program is correct, and the original compiled with optimizations is incorrect].
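For what it is worth, the usual way to compare two floating-point reconstructions is a tolerance test rather than exact equality, since either build can be "correct" to within rounding. A rough sketch of such a check (the function name, array arguments, and default tolerances are illustrative, not part of CTSim):

#include <algorithm>
#include <cmath>

// Compare two image arrays element by element using a mixed
// absolute/relative tolerance instead of requiring bit-exact equality.
bool imagesAgree (double** v1, double** v2, int nx, int ny,
                  double absTol = 1e-12, double relTol = 1e-9)
{
  for (int ix = 0; ix < nx; ix++)
    for (int iy = 0; iy < ny; iy++) {
      double a = v1[ix][iy];
      double b = v2[ix][iy];
      double diff  = std::fabs (a - b);
      double scale = std::max (std::fabs (a), std::fabs (b));
      if (diff > absTol && diff > relTol * scale)
        return false;   // pixels differ beyond rounding noise
    }
  return true;
}

Note that when a rounding difference flips a nearest-neighbour detector index, the affected pixels pick up a genuinely different filteredProj[] sample and can differ by more than rounding noise, so a handful of outliers does not by itself mean either build is wrong.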