For some reason we have a stack of DVDs that must have been burned in some strange format. When I put one in my DVD drive the drive refuses to recognize it. When I try to mount it, it simply says "no media".
What I would like to do is take a "raw" image off the disk, so I can try to recover it.
Something similar to what would be done with "dd" on a hard disk.
dd if=/dev/hdc of=/tmp/file
But again, it just says "no media". Is there a way to "force" a mount or some other technique so I can read the disk?
Or any other suggestions as to how to recover the data?
John
On 1 Sep, John Lange wrote:
For some reason we have a stack of DVDs that must have been burned in some strange format. When I put one in my DVD drive the drive refuses to recognize it. When I try to mount it, it simply says "no media".
My guess is you're screwed. The problem is probably not that the OS can't understand the format, it's that the drive itself can't figure it out. It was most likely a failed burn, dying burner, not fixated or something like that.
I do a full read-verify of every burn immediately after burning. Good habit to get into.
Also, high-end Verbatim (or other noteworthy brand like Imation) data-life-plus media is recommended for anything important. It's not too much more money for peace of mind. There really is a difference in quality. I can order them in at good prices if anyone's interested.
Trevor is quite correct. (BTW: John, I'm not talking about your situation specifically here - just taking the opportunity to jump on a soapbox...)
Various studies have shown that standard CD-R media does *not* in fact have a "nearly infinite" shelf life, but that the shelf life of the cheap, generic, typically long-strategy / AZO / cyanine stuff is actually closer to 12 MONTHS. That's one year. Most floppies last longer than that.
The top-of-the-line "gold" stuff actually has real gold in the chemical mixture used to make the burnable layer, which apparently dramatically improves reliability and longevity - up to about 5 or 6 years so far. And the quality of recordable media is steadily getting *worse*, not better. Most major media OEMs now have special "Archival" media that costs significantly more (approx $3 to $5 per CD) but is "guaranteed" for rather long periods - like 10 or 25 or 50 years. Keep in mind that the "guarantee" says they'll replace the media for free if it fails, they aren't insuring you against data loss!
Most -RW media is now considered to have a longer data life than the cheap -R media.
As to DVD media, the chemical mix required is quite different from CDs, although CD-RW and DVD-RW are somewhat similar. I'm not aware of any experimental or epidemiological studies specifically on DVD media, although various engineering articles have theorized that DVD lifetime will be approximately 2/3 (66%) as long as CD media of equivalent quality.
The claims of "infinite" lifetime all arose from projected lifespans of factory-pressed CDs, not recordable CDs. A correctly pressed CD (stored correctly) should still last several hundreds, if not thousands, of years. Ditto for a correctly pressed DVD. That's assuming they aren't handled, and don't have any radial stresses placed upon them. If you laid a stack of CDs sideways so that they were resting on their edge, those CDs (even factory-pressed CDs) would start to delaminate beyond the point of readability within 3 or 4 years. Obviously some are more resistant to radial stresses than others, YMMV.
Bottom line: don't rely on CD-R, CD-RW, DVD-R, DVD+R, DVD-RAM, DVD-RW, or DVD+RW media for long-term archival. The only known way to ensure long-term data archival is to A) use archival-quality media, B) use archival-quality burners, and C) copy the data to a new generation of media well within the predicted minimum lifespan for your archival media. Generally speaking, that means shelling out lots of $$$ for expensive WORM drives, even more $$$ for the expensive media, and yet more $$$ for the labour involved to re-copy the data every 5-10 years. The timespan involved varies greatly depending on how you store the media.
Applying those same principles to readily-available CD and DVD burners, you spend about triple the normal price to get a top-of-the-line burner (generally Plextor, Pioneer, Panasonic, or Sony but it's almost a guessing game now), then you spend about 10x the normal price to get archival-grade media (or at least the "Gold" stuff from Verbatim / TDK / Imation / etc. - essentially, get a well-known brand name's top-tier media), then you store it correctly (laying flat, with pressure evenly distributed across the surface, no more than about 10 disks in a stack), then you re-copy it to new media about every 2-3 years.
If you think this is way too much cost and trouble to keep your data safe indefinitely, you are probably right. The question is, how much is your data worth to you? If you're in a federally-regulated industry (financial, health-care, military, etc.) the fines alone for not being able to retrieve data could exceed the cost of buying good equipment and media. If you're running a more "normal" business, you probably still have financial & taxation records that must be kept for 8 years. And if you're storing stuff like engineering designs, CAD work, or really any kind of intellectual property, how much do you stand to lose if you can't prove, for example, prior art in a patent defense lawsuit? Or if you can't prove you own the copyright to a piece of work someone else is stealing?
The good news is that, thanks to the way most of us now store data, none of this is all that relevant. A lot of CD burning nowadays is single-use or very short-term only and the media is discarded long before it is in any danger of becoming unreadable. However, for those of you that think burning CDs and DVDs is a great way to save your data "forever", think again.
-Adam
Trevor Cordes wrote:
Also, high-end Verbatim (or other noteworthy brand like Imation) data-life-plus media is recommended for anything important.
Roundtable mailing list Roundtable@muug.mb.ca http://www.muug.mb.ca/mailman/listinfo/roundtable
To add my anecdotal 2 cents:
I've been burning CDs for over 10 years. I've been using them as my exclusive backup system for 6 years. I switched to DVDs for backups about 2 years ago.
My oldest burns, 10 years old, are audio CDs and for the most part they still play fine. However, those were on the original, expensive, and better quality $5-$7 each media. A few songs glitch for no discernible reason (no scratches, no dirt), so I must assume there is some deterioration. Being simply audio, it's not the end of the world.
As for data backups, I have burned around 500 optical discs. I had to do a major recovery last year and I found that almost all the CDs (and the few DVDs), most on generic media, read OK. There were two or three CD-Rs that would not read on my newest LG DVD-RW drive, but read fine on a CD-RW drive. I could not discern any logical reason for this (not brand, type, colour, etc. dependent). Maybe one file out of the hundreds of discs was completely unrecoverable, so my mileage was pretty good.
Keep in mind, my self-programmed backup system does immediate file-level verifies after each write. With this I catch about 1 disc in 50 that for whatever reason didn't burn 100% perfectly, and I simply reburn it. I attribute this to flaws in the media where the burn succeeds but a chunk of data is unreadable. I would never trust a burn without an immediate verify.
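(A file-level verify like the one described can be as simple as byte-comparing each source file against its copy on the freshly mounted disc. A minimal sketch of that idea — the function name and paths are illustrative, not from Trevor's actual program:)

```cpp
#include <fstream>
#include <iterator>

// Byte-compare two files; returns true only if both open successfully
// and contain exactly the same bytes.
bool identical(const char* a, const char* b) {
    std::ifstream fa(a, std::ios::binary), fb(b, std::ios::binary);
    if (!fa || !fb) return false;
    std::istreambuf_iterator<char> ia(fa), ib(fb), end;
    while (ia != end && ib != end)
        if (*ia++ != *ib++) return false;
    return ia == end && ib == end;   // both must end at the same point
}
```

A backup program would call this for every file right after the burn, and queue the disc for reburning on the first mismatch.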
Because of the strange reading issues described above, I immediately implemented a policy of reburning any burn that is 3 years old, which was actually quite easy to add to my program and requires no extra effort. It also staggers the reburns by file, so there isn't a sudden overwhelming mass of burning I must do all at once.
I have since switched to Verbatim Data Life Plus "archival" "lifetime" grade media. The look and feel is discernibly better quality than generic, and Verbatim has excellent web pages describing the advantages (mostly to do with better sealing of the dye layer and better plastic).
The media I use: http://www.verbatim.com/products/product_detail.cfm?product_id=3CE81E3E-674F... My price is $26 for a 50-spindle, delivered to meetings. Around 50c a DVD is not bad for protecting your data. The 15-25c premium over generic isn't much money in the big scheme of things.
They make a zillion different types, so check them out at verbatim.com. Most of them are available to me at good prices.
We'll see in another 10 years how well these held up!
Trevor Cordes wrote:
http://www.verbatim.com/products/product_detail.cfm?product_id=3CE81E3E-674F... My price is $26 for a 50-spindle, delivered to meetings. Around 50c a
What is the price of the UltraLife™ Gold Archival Grade DVD?
http://www.verbatim.com/products/product_detail.cfm?product_id=6A0D4031-1143...
Interesting discussion. I found yours and Adam's comments very informative.
-- Bill
On 2 Sep, Bill Reid wrote:
What is the price of the UltraLife™ Gold Archival Grade DVD?
Hmmm, they seem to be unavailable at my distributor. There are always a few obscure SKUs they don't carry. I'll shoot them an email about it on Tuesday, but based on past experience it's probably just not carried in Canada at all.
My guess is the UltraLife would be very expensive! Looks cool though!
I tried finding an Imation or Philips alternative, but there doesn't appear to be anything in that class.
I am modifying an open source C++ program. I found that my output was not the same as the original, so I spent the last 3 days searching for the errors in my program, then making extensive back-modifications to make my program's output identical to the original. I finally went back to the original, and made only minimal changes to allow compilation with my makefile.
The errors persisted. Reasoning that the original was probably compiled with optimization, I tried that and sure enough the differences disappeared (some with -O and the rest disappeared using -O2).
Calculations done mainly with integer math were fine to begin with. Using -O, linear interpolation involving doubles yielded the same results, and all results including more complex interpolation and double to int conversions were the same using -O2.
I cannot verify my output other than by using the original program.
Which results are correct? Am I to assume that optimization does something bad? [my program correct, the original compiled with optimizations incorrect].
I need more details in order to help you out. The -O flags can break some code but it is pretty rare. The GCC man page does a good job explaining the optimizations done for each of the -O levels.
This is totally dependent on the errors you are seeing. A random guess would be to turn off loop unrolling, as the optimizer might not be detecting the loop dependencies properly. I've had this bite me in the past, but without details it's a random guess from centre field.
On 4-Sep-06, at 1:04 PM, Dan Martin wrote:
Which results are correct? Am I to assume that optimization does something bad?
Is it safe to assume that it is the optimized compilation giving incorrect values?
The code can be found at http://directory.fsf.org/science/biology/CTSim.html
I am using the "pjrec" program (tools directory) which calls this method in backprojectors.cpp (libctsim directory):
void BackprojectTrig::BackprojectView (const double* const filteredProj, const double view_angle)
{
  double theta = view_angle;

  CubicPolyInterpolator* pCubicInterp = NULL;
  if (interpType == Backprojector::INTERP_CUBIC)
    pCubicInterp = new CubicPolyInterpolator (filteredProj, nDet);

  double x = xMin + xInc / 2;      // Rectang coords of center of pixel
  for (int ix = 0; ix < nx; x += xInc, ix++) {
    double y = yMin + yInc / 2;
    for (int iy = 0; iy < ny; y += yInc, iy++) {
      double r = sqrt (x * x + y * y);    // distance of cell from center
      double phi = atan2 (y, x);          // angle of cell from center
      double L = r * cos (theta - phi);   // position on detector

      if (interpType == Backprojector::INTERP_NEAREST) {
        int iDetPos = iDetCenter + nearest<int> (L / detInc); // calc'd index in the filter raysum array
        if (iDetPos >= 0 && iDetPos < nDet)
          v[ix][iy] += rotScale * filteredProj[iDetPos];
      } else if (interpType == Backprojector::INTERP_LINEAR) {
        double p = L / detInc;        // position along detector
        double pFloor = floor (p);
        int iDetPos = iDetCenter + static_cast<int>(pFloor);
        double frac = p - pFloor;     // fraction distance from det
        if (iDetPos >= 0 && iDetPos < nDet - 1)
          v[ix][iy] += rotScale * ((1-frac) * filteredProj[iDetPos] + frac * filteredProj[iDetPos+1]);
      } else if (interpType == Backprojector::INTERP_CUBIC) {
        double p = iDetCenter + (L / detInc);   // position along detector
        if (p >= 0 && p < nDet)
          v[ix][iy] += rotScale * pCubicInterp->interpolate (p);
      }
    }
  }

  if (interpType == Backprojector::INTERP_CUBIC)
    delete pCubicInterp;
}
The resulting values in image array 'v' will vary with the level of optimization.
Sean Cody wrote:
A random guess would be to turn off loop unrolling, as the optimizer might not be detecting the loop dependencies properly.
From the gcc man page...

Important notes: -ffast-math results in code that is not necessarily IEEE-compliant. -fstrict-aliasing is highly likely to break non-standard-compliant programs. -malign-natural only works properly if the entire program is compiled with it, and none of the standard headers/libraries contain any code that changes alignment when this option is used.
If you are having precision issues with the doubles, kill the above flags in your optimizations (e.g. -fno-fast-math). I'm going to assume that v is global, so it's probably not a scoping problem; from your description it _sounds_ more like precision issues. If it is precision issues, start selectively turning off the options implied by the optimization flags and find out which one is mucking up the results.

I wouldn't assume it is the generated code until you can prove it by finding a particular optimization strategy that produces the issue. In integer-only cases I would argue the compiler is outputting correct code, but with reals the size/space optimizations can do a _whole lot_ of damage.

If the values are within a valid range I would suggest liberal application of assertions. It is hard to say what you mean by correct values, but if the numbers are off by a certain amount, or are rounded up or down from what you expect, then I would put money on precision issues.
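(One way to apply that advice when comparing an optimized build against an unoptimized one: assert with a tolerance rather than exact equality, since last-bit floating-point differences between builds are expected. A sketch — the function name and tolerance values are illustrative, not from CTSim:)

```cpp
#include <algorithm>
#include <cmath>

// True if a and b agree to within an absolute tolerance (for values
// near zero) or a relative tolerance (for everything else).
bool close_enough(double a, double b,
                  double rel_tol = 1e-9, double abs_tol = 1e-12) {
    double diff = std::fabs(a - b);
    return diff <= abs_tol
        || diff <= rel_tol * std::max(std::fabs(a), std::fabs(b));
}
```

Asserting close_enough(mine[i], original[i]) over the image array separates harmless rounding drift from a genuine algorithmic difference.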
On 4-Sep-06, at 2:23 PM, Dan Martin wrote:
Is it safe to assume that it is the optimized compilation giving incorrect values? [backprojectors.cpp code snipped] The resulting values in image array 'v' will vary with the level of optimization.
Thanks for the advice, Sean.
I suspect it's an issue of double precision, but I cannot find what specific flags are involved.
Image column from the original program, unknown compiler optimizations:
0:  9.40411e-05
1:  0.000374236
2:  0.000421015
3:  0.000158423
4:  0.000421016
5: -6.64318e-05
6:  0.000225337

Image column from my program, no compiler optimizations:
0:  9.40411e-05
1:  0.000388825
2:  0.000421015
3:  0.000158423
4:  0.000421016
5:  0.000374236
6:  0.000356633

Difference:
0:  0
1: -1.45885e-05
2:  0
3:  0
4:  0
5: -0.000440668
6: -0.000131296
Percentage-wise, these differences can be quite significant. If I compile a single file from my version of the program (backprojectors.cpp) with the -O2 option, the differences disappear. I combined all of the options that the gcc man page says are included in -O1 and -O2, and the differences did NOT disappear, so I still don't know what specific options are making the difference in the original program.
Sean Cody wrote:
If it is precision issues, start selectively turning off the options implied by the optimization flags and find out which one is mucking up the results.