Discussion:
Using Windows to make Android smoother
Andrews
2024-10-23 21:54:38 UTC
Permalink
I need a bigger sd card - I've spent years preparing for this moment by using
Windows to make Android sdcard swaps smoother and less dramatic for everyone.

Some questions...

Does a Windows "Quick Format" work as well as the slow format?
Does the format type matter when the sd card is to be used in a phone?
Do you change the volume label when you format on Windows for Android?
<https://i.postimg.cc/dVtqQ9dX/sd01.jpg>

Some background...

In May of 2021 I received from T-Mobile a new free Samsung Galaxy A32-5G
with 64GB of internal storage, which I broke twice under warranty so
T-Mobile replaced it twice within the first two years where all I had to do
was swap out the old 64GB sd card and put it into the new phone each time.

Everything came over seamlessly without the Internet involved, which, if
you know me, is something I strive to avoid when copying personal data.

But this is different than swapping out the phone. This is swapping out the
sd card which has been in use since 2021 with all my data sitting on it.
<https://i.postimg.cc/fWX7wzcg/filesys.jpg>

Of course, I did two things years ago to plan for this type of event:
1. I formatted all my sd cards on Windows to the same volume name, and,
2. I put all my data under a single folder on the external sd card.

Those two simple things, I hope, will make this sd swap uneventful.
a. Format the sd card on Windows to "0000-0001"
b. on Windows, create a top-level directory of "0001"
Note: The names don't matter as long as they're consistent.
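
Before any swap, that convention can be sanity-checked with a few lines of script. This is just a sketch: the mount path is whatever your card reader presents, and "0001" is the (arbitrary but consistent) folder name from above.

```python
import os

def card_layout_ok(mount_path, data_dir="0001"):
    """Return True if the card at mount_path follows the
    'one top-level data folder' convention described above."""
    return os.path.isdir(os.path.join(mount_path, data_dir))

if __name__ == "__main__":
    # Simulate a freshly formatted card with a temporary directory.
    import tempfile
    with tempfile.TemporaryDirectory() as card:
        print(card_layout_ok(card))   # False: no data folder yet
        os.mkdir(os.path.join(card, "0001"))
        print(card_layout_ok(card))   # True: layout matches
```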

I just bought from Amazon a 128GB three-pack with reader at about $10 each.
And, while a quick format on Windows takes a couple of seconds, a slow
format takes quite a bit longer - it's still running on my old desktop.

Did I need that slow format?
Is the default of exFAT (128 KB allocation unit) an OK setting for phones?
Is the type of card I bought OK for phones?

The (slow) format is still running, but when it's done, I'm going to copy
my "0001" data over from the old sdcard in the phone, to the new sdcard.
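
That copy step can be scripted so it is repeatable. A minimal sketch, assuming both cards are mounted in a reader; the drive paths in the comment are placeholders:

```python
import os
import shutil

def copy_card_data(old_root, new_root, data_dir="0001"):
    """Copy the single top-level data folder from the old card to the
    new one. dirs_exist_ok=True (Python 3.8+) lets a re-run resume
    over a partial earlier copy instead of failing."""
    src = os.path.join(old_root, data_dir)
    dst = os.path.join(new_root, data_dir)
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return dst

# e.g. copy_card_data("D:\\", "E:\\") when D: is the old card, E: the new one
```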

And then I'm going to swap out the old sdcard for the new one for, what I
hope to be a seamless experience using Windows to make Android smoother.

Wish me luck!
Paul
2024-10-24 07:20:49 UTC
Permalink
Post by Andrews
I need a bigger sd card - I spent years for this moment by using Windows to
make Android sdcard swaps smoother and drama reduced experiences for all.
Some questions...
Does a Windows "Quick Format" work as well as the slow format?
Does the format type matter when the sd card is to be used in a phone?
Do you change the volume label when you format on Windows for Android?
<https://i.postimg.cc/dVtqQ9dX/sd01.jpg>
Some background...
In May of 2021 I received from T-Mobile a new free Samsung Galaxy A32-5G
with 64GB of internal storage, which I broke twice under warranty so
T-Mobile replaced it twice within the first two years where all I had to do
was swap out the old 64GB sd card and put it into the new phone each time.
Everything came over seamlessly without the Internet involved, which, if
you know me, is something I strive to avoid when copying personal data.
But this is different than swapping out the phone. This is swapping out the
sd card which has been in use since 2021 with all my data sitting on it.
<https://i.postimg.cc/fWX7wzcg/filesys.jpg>
1. I formatted all my sd cards on Windows to the same volume name, and,
2. I put all my data under a single folder on the external sd card.
Those two simple things, I hope, will make this sd swap, uneventful.
a. Format the sd card on Windows to "0000-0001"
b. on Windows, create a top-level directory of "0001"
  Note: The names don't matter as long as they're consistent.
I just bought from Amazon a 128GB three-pack with reader at about $10 each.
And, while a quick format on Windows takes a couple of seconds, a slow
format takes quite a bit longer - it's still running on my old desktop.
Did I need that slow format?
Is the default of XFat (128kbyte allocation unit) an OK setting for phones?
Is the type of card I bought OK for phones?
The (slow) format is still running, but when it's done, I'm going to copy
my "0001" data over from the old sdcard in the phone, to the new sdcard.
And then I'm going to swap out the old sdcard for the new one for, what I
hope to be a seamless experience using Windows to make Android smoother.
Wish me luck!
You would buy an SD card with static and dynamic wear leveling.

A quick format is good enough. It writes a FAT or $MFT onto the partition.

The slow format does the same, except it includes a read-verify of the surface.

To "erase" a storage device, diskpart "clean all" will write the entire
surface with zeros. Or dd.exe can write the surface with zeros.
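
Underneath, both of those commands are a plain chunked overwrite. Here is a hedged sketch of that loop, demonstrated on a file-backed image rather than a real device (writing to an actual disk device is destructive and needs admin rights):

```python
import os

def zero_fill(path, chunk=8192):
    """Overwrite an existing file (standing in for a device) with
    zeros, chunk by chunk - the same pattern 'clean all' or dd uses.
    Returns the number of bytes written."""
    total = os.path.getsize(path)
    zeros = bytes(chunk)
    written = 0
    with open(path, "r+b") as f:
        while written < total:
            n = min(chunk, total - written)
            f.write(zeros[:n])
            written += n
    return written
```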

It's hard to get good info about SD, like the wear leveling scheme.
If it has both static and dynamic wear leveling, it could last longer
because then you can't really burn a hole in it as easily.

The difference from a USB flash stick could be the binning of the flash.
Maybe the flash is a bit better. USB sticks could be pretty low quality:
a 16GB USB key could actually be a 32GB chip with half of it
pinned off because it didn't pass.

SD has limits on both write speed and read speed. The physical interface
is pretty "thin", and that's why it can't read faster than it does.
There is a promise that coming SD cards will have one-lane PCI Express
interfaces, which should make them read better. But when that is done,
the writing won't be faster, just as the worst USB3 flash sticks might be
100MB/sec read and 10MB/sec write. My Rally2 can write at about 16MB/sec,
but it doesn't read in a big hurry, whereas USB3 sticks can have larger
differences between read and write. And some USB sticks have "uneven" behavior:
I've got a stick here now that stalls for a while, does a tiny bit of
writing, then stalls some more. My SD doesn't do that.
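
That "stall, write a little, stall again" pattern is easy to make visible: time each chunk of a sequential write and look at the spread. A rough sketch (the sizes are placeholders; on a stalling stick the per-chunk times will be wildly uneven):

```python
import os
import time

def write_timings(path, n_chunks=32, chunk_size=1 << 20):
    """Write n_chunks of chunk_size bytes, fsync'ing each one, and
    return the seconds each chunk took. Flat timings suggest a
    well-behaved device; big spikes mean stalls."""
    data = bytes(chunk_size)
    timings = []
    with open(path, "wb") as f:
        for _ in range(n_chunks):
            t0 = time.perf_counter()
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
            timings.append(time.perf_counter() - t0)
    return timings
```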

Paul
Carlos E.R.
2024-10-24 14:12:04 UTC
Permalink
Post by Paul
Post by Andrews
I need a bigger sd card - I spent years for this moment by using Windows to
make Android sdcard swaps smoother and drama reduced experiences for all.
Some questions...
Does a Windows "Quick Format" work as well as the slow format?
Does the format type matter when the sd card is to be used in a phone?
Do you change the volume label when you format on Windows for Android?
<https://i.postimg.cc/dVtqQ9dX/sd01.jpg>
Some background...
...
Post by Paul
Post by Andrews
Did I need that slow format?
Is the default of XFat (128kbyte allocation unit) an OK setting for phones?
Is the type of card I bought OK for phones?
The (slow) format is still running, but when it's done, I'm going to copy
my "0001" data over from the old sdcard in the phone, to the new sdcard.
And then I'm going to swap out the old sdcard for the new one for, what I
hope to be a seamless experience using Windows to make Android smoother.
Wish me luck!
You would buy an SD card with static and dynamic wear leveling.
A quick format is good enough. It writes and puts a FAT or $MFT on the partition.
The slow format does the same, except it includes a read-verify of the surface.
Doesn't it fill everything with zeroes as well? Or erase all sectors?
That reduces the life of the card or stick.
--
Cheers, Carlos.
...winston
2024-10-24 17:20:23 UTC
Permalink
Post by Carlos E.R.
Post by Paul
Post by Andrews
I need a bigger sd card - I spent years for this moment by using Windows to
make Android sdcard swaps smoother and drama reduced experiences for all.
Some questions...
Does a Windows "Quick Format" work as well as the slow format?
Does the format type matter when the sd card is to be used in a phone?
Do you change the volume label when you format on Windows for Android?
<https://i.postimg.cc/dVtqQ9dX/sd01.jpg>
Some background...
...
Post by Paul
Post by Andrews
Did I need that slow format?
Is the default of XFat (128kbyte allocation unit) an OK setting for phones?
Is the type of card I bought OK for phones?
The (slow) format is still running, but when it's done, I'm going to copy
my "0001" data over from the old sdcard in the phone, to the new sdcard.
And then I'm going to swap out the old sdcard for the new one for, what I
hope to be a seamless experience using Windows to make Android smoother.
Wish me luck!
You would buy an SD card with static and dynamic wear leveling.
A quick format is good enough. It writes and puts a FAT or $MFT on the partition.
The slow format does the same, except it includes a read-verify of the surface.
Doesn't it fill everything with zeroes as well? Or erase all sectors?
That reduces the life of the card or stick.
Writing zeros for full format option was introduced with Vista.
Applicable also to all later Windows o/s.
Full format
- files erased
- writes zeros to the whole disk
- drive scanned for bad sectors(does not fix bad sectors)
- new root directory and file system
--
...w¡ñ§±¤ñ
Harry S Robins
2024-10-24 18:00:44 UTC
Permalink
Post by ...winston
Post by Carlos E.R.
Post by Paul
You would buy an SD card with static and dynamic wear leveling.
A quick format is good enough. It writes and puts a FAT or $MFT on the partition.
The slow format does the same, except it includes a read-verify of the surface.
Doesn't it fill everything with zeroes as well? Or erase all sectors?
That reduces the life of the card or stick.
Writing zeros for full format option was introduced with Vista.
Applicable also to all later Windows o/s.
Full format
- files erased
- writes zeros to the whole disk
- drive scanned for bad sectors(does not fix bad sectors)
- new root directory and file system
If a Windows full format doesn't fix bad sectors, what does it do to them?

I always thought a format put a "jumper" so that bad sectors were ignored.

Is that "jumper" (or whatever it's really called) considered a fix?
Or is a fix something else?
...winston
2024-10-24 18:12:25 UTC
Permalink
Post by Harry S Robins
Post by ...winston
Post by Carlos E.R.
Post by Paul
You would buy an SD card with static and dynamic wear leveling.
A quick format is good enough. It writes and puts a FAT or $MFT on the partition.
The slow format does the same, except it includes a read-verify of the surface.
Doesn't it fill everything with zeroes as well? Or erase all sectors?
That reduces the life of the card or stick.
Writing zeros for full format option was introduced with Vista.
Applicable also to all later Windows o/s.
Full format
  - files erased
  - writes zeros to the whole disk
  - drive scanned for bad sectors(does not fix bad sectors)
  - new root directory and file system
If a Windows full format doesn't fix bad sectors, what does it do to them?
I always thought a format put a "jumper" so that bad sectors were ignored.
Is that "jumper" (or whatever it's really called) considered a fix?
Or is a fix something else?
It marks them as bad, isolating them from future use.
- marking a sector as bad and ignoring it in the future is not a 'fixing bad
sector' condition.
=> If it's bad, it remains bad and is excluded from future use.

Chkdsk, another tool, doesn't necessarily fix bad sectors either. Ymmv
with use.
- the most likely end result is recovering any readable data found in
a bad sector, moving it to a known good sector, then marking the bad
sector off from future use.
--
...w¡ñ§±¤ñ
Arno Welzel
2024-10-26 10:10:05 UTC
Permalink
Harry S Robins, 2024-10-24 20:00:

[...]
Post by Harry S Robins
If a Windows full format doesn't fix bad sectors, what does it do to them?
It will verify every data block, so "bad" blocks can get recorded so
they don't get used any longer.
Post by Harry S Robins
I always thought a format put a "jumper" so that bad sectors were ignored.
Yes - but this depends on the medium used. Flash storage media like SSDs
have their own controller, which does bad-block mapping on its own and
uses spare blocks (usually a few percent of the capacity are reserved for
this) as substitutes for defective ones. However, SD cards are quite "dumb"
and 100% of the capacity is used for data - so bad blocks need to be
recorded as part of the filesystem during "long" formatting.
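
That SSD-style sparing can be pictured with a toy model: a remap table plus a small spare pool. This is an illustrative sketch, not any real controller's algorithm:

```python
class ToyFlashController:
    """Toy model of controller-level bad-block management: each logical
    block maps to a physical block, and a small spare pool substitutes
    for blocks that go bad."""

    def __init__(self, n_blocks, n_spares):
        self.map = {lba: lba for lba in range(n_blocks)}  # logical -> physical
        self.spares = list(range(n_blocks, n_blocks + n_spares))
        self.retired = set()

    def report_bad(self, lba):
        """Remap a failing logical block onto a spare. Returns False
        when the spare pool is exhausted - the point at which errors
        start surfacing to the host."""
        if not self.spares:
            return False
        self.retired.add(self.map[lba])
        self.map[lba] = self.spares.pop(0)
        return True
```

In this picture an SD card is a controller with (nearly) no spare pool, which is why the filesystem has to keep the bad-block list instead.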
--
Arno Welzel
https://arnowelzel.de
Carlos E.R.
2024-10-26 13:56:16 UTC
Permalink
Post by Arno Welzel
[...]
Post by Harry S Robins
If a Windows full format doesn't fix bad sectors, what does it do to them?
It will verify every data block, so "bad" blocks can get recorded so
they don't get used any longer.
Post by Harry S Robins
I always thought a format put a "jumper" so that bad sectors were ignored.
Yes - but this depends on the medium used. Flash storage media like SSDs
have their own controller which does bad block mapping on their own and
use spare blocks (usually a few percent of the capacity are reserved for
this) as substitute for defect ones. However SD cards are quite "dumb"
and 100% of the capacity is used for data - so bad blocks need to be
recorded as part of the filesystem during "long" formatting.
I seem to recall that the old MS-DOS chkdsk (not sure of the name) could
mark bad sectors on floppies and such.
--
Cheers, Carlos.
Paul
2024-10-26 18:09:00 UTC
Permalink
Post by Arno Welzel
[...]
Post by Harry S Robins
If a Windows full format doesn't fix bad sectors, what does it do to them?
It will verify every data block, so "bad" blocks can get recorded so
they don't get used any longer.
Post by Harry S Robins
I always thought a format put a "jumper" so that bad sectors were ignored.
Yes - but this depends on the medium used. Flash storage media like SSDs
have their own controller which does bad block mapping on their own and
use spare blocks (usually a few percent of the capacity are reserved for
this) as substitute for defect ones. However SD cards are quite "dumb"
and 100% of the capacity is used for data - so bad blocks need to be
recorded as part of the filesystem during "long" formatting.
$BADCLUS can be updated at two times. By doing the "long format",
you can update $BADCLUS before files are put on a drive. By using the
scanning option in CHKDSK, you can mark off clusters via $BADCLUS
(presumably trashing a file at the same time, a file that was already
trashed so no big deal).

But with automatic sparing at the hard drive level, the need to
scan and add clusters to $BADCLUS is mostly removed. The only
time a disk gets a hard CRC error, is when that area of the
disk runs out of spares for repairs.

I had four CRC errors on a WD Blue in my Optiplex Refurb.
Re-writing the entire surface of the disk, flushed the errors.
There was an opportunity for the bad blocks to be spared out.
If I had done a $BADCLUS scan when the immediate problem
occurred, four clusters would be marked off as unusable,
and then there would be no need for the hard drive to
spare out the sector. But then, I have slightly less
space on the drive as a result.

You then have to ask yourself, what is the best practice for the
thing. Nothing springs to mind, except to say I would start
with an HDTune bad block scan, which would show the four bad sectors,
but without the side effect of using $BADCLUS. Then I would be in
a position to decide whether "washing and rinsing" the surface
was enough, or whether "digging divots" in the surface was needed.

When I checked, two of the CRC errors were in white space,
and two of the CRC errors were real files. By knowing what
the real files were, I could replace them. The white space
errors, it would not matter what happened there, so I had
only half the work to do (just dig up two files).

To work out what the files are, I use a copy of nfi.exe which
has LBA numbers. and then I can map an address to a file. Then
I can take a hex editor, open the "file" and notice the first
sector has stale data in it, the second sector has a CRC error.
This is the pattern you see on a "high fly" error. The head was
so far above the surface on the first sector, the mag field from
the write couldn't even couple into the surface.
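
The lookup that nfi.exe enables - mapping a bad LBA back to the file that owns it - amounts to a search over per-file extent lists. A sketch with hypothetical extents (the file names and LBA numbers are made up for illustration):

```python
def file_for_lba(extents, lba):
    """Given (start_lba, length, filename) extents - the kind of
    information nfi.exe reports - return the file owning a given LBA,
    or None if the LBA falls in free space."""
    for start, length, name in extents:
        if start <= lba < start + length:
            return name
    return None

# Hypothetical layout: two files, everything else free space.
extents = [(1000, 8, "photo.jpg"), (2000, 16, "notes.txt")]
print(file_for_lba(extents, 2004))  # notes.txt
print(file_for_lba(extents, 500))   # None
```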

You can push the entire partition into $BADCLUS. It has
the capacity to do that. If there is a pathological problem
via the controller, you could cause every file on a partition
to disappear.

In the past, a pathological situation for CHKDSK occurs
when you pull the IDE ribbon cable halfway off a drive. I
actually managed to do that one day, but I could feel that
I had bumped into something, and spotted it in time.
If you notice the disk has problems on the next boot,
you may be tempted to do a CHKDSK. It tries to rewrite stuff.
It modifies a hundred thousand things. Your disk is... trashed.
(None of the writes done, are good, because the cable is half off.)
It is for this sort of failing (an I/O issue), that CHKDSK
is the wrong tool. Other situations, the Microsoft declaration
that "stopping a CHKDSK run, causes no more damage than was there in
the first place", that statement can be true as long as the
I/O is in perfect working condition.

Summary: Firstly, a backup is your friend.
HDTune can do a bad block scan, indicating your trouble situation.
(I use more tools than this, this is just a canary run to prove trouble awaits.)
ddrescue (gddrescue package in Ubuntu) can back up a damaged disk.
[Normally you would use Macrium if the disk was undamaged, no CRCs]
Since you have your backup now, you have my permission to run CHKDSK :-)
Let the chainsaw at it.

If $BADCLUS has a use in modern times, I don't really know where it
fits. It was invented for a time when disks had less internal automation.
Maybe it covers floppies well (our floppies at work used a similar philosophy).
But generally speaking, it is my opinion that when the signs are
that a sick disk drive is involved, you should make it personal and
work at a lower level, to reduce the damage level. $BADCLUS, scanning
disks for bad clusters, is just a bad economy. If your disk really
sucks this badly, replace it. There are yoyos who will move a partition
away from a CRC damaged area, but then those same individuals don't believe
in backups, and they will be wearing that sad face when it dies entirely.

The WD Blue 250GB in this story, got replaced. Cost $65 at the time for a WD Black 1TB.

Paul
Arno Welzel
2024-10-28 15:53:31 UTC
Permalink
Post by Paul
Post by Arno Welzel
[...]
Post by Harry S Robins
If a Windows full format doesn't fix bad sectors, what does it do to them?
It will verify every data block, so "bad" blocks can get recorded so
they don't get used any longer.
Post by Harry S Robins
I always thought a format put a "jumper" so that bad sectors were ignored.
Yes - but this depends on the medium used. Flash storage media like SSDs
have their own controller which does bad block mapping on their own and
use spare blocks (usually a few percent of the capacity are reserved for
this) as substitute for defect ones. However SD cards are quite "dumb"
and 100% of the capacity is used for data - so bad blocks need to be
recorded as part of the filesystem during "long" formatting.
$BADCLUS can be updated at two times. By doing the "long format",
you can update $BADCLUS before files are put on a drive. By using the
scanning option in CHKDSK, you can mark off clusters via $BADCLUS
(presumably trashing a file at the same time, a file that was already
trashed so no big deal).
JFTR: $BADCLUS is specific for NTFS. However SD cards are usually not
formatted using NTFS. But FAT and ExFAT also have mechanisms to record
bad blocks which should not be used any longer.
--
Arno Welzel
https://arnowelzel.de
Carlos E.R.
2024-11-01 08:19:44 UTC
Permalink
Post by Arno Welzel
Post by Paul
Post by Arno Welzel
[...]
Post by Harry S Robins
If a Windows full format doesn't fix bad sectors, what does it do to them?
It will verify every data block, so "bad" blocks can get recorded so
they don't get used any longer.
Post by Harry S Robins
I always thought a format put a "jumper" so that bad sectors were ignored.
Yes - but this depends on the medium used. Flash storage media like SSDs
have their own controller which does bad block mapping on their own and
use spare blocks (usually a few percent of the capacity are reserved for
this) as substitute for defect ones. However SD cards are quite "dumb"
and 100% of the capacity is used for data - so bad blocks need to be
recorded as part of the filesystem during "long" formatting.
$BADCLUS can be updated at two times. By doing the "long format",
you can update $BADCLUS before files are put on a drive. By using the
scanning option in CHKDSK, you can mark off clusters via $BADCLUS
(presumably trashing a file at the same time, a file that was already
trashed so no big deal).
JFTR: $BADCLUS is specific for NTFS. However SD cards are usually not
formatted using NTFS. But FAT and ExFAT also have mechanisms to record
bad blocks which should not be used any longer.
If a flash medium (USB stick, card) does have a bad sector, I don't see
why not mark it as bad in the filesystem. Of course, you can instead
decide to scrap the thing straight away. Those devices do not have
bad-sector management in firmware.
--
Cheers, Carlos.
Paul
2024-10-24 18:33:04 UTC
Permalink
Post by ...winston
Post by Paul
Post by Andrews
I need a bigger sd card - I spent years for this moment by using Windows to
make Android sdcard swaps smoother and drama reduced experiences for all.
Some questions...
Does a Windows "Quick Format" work as well as the slow format?
Does the format type matter when the sd card is to be used in a phone?
Do you change the volume label when you format on Windows for Android?
<https://i.postimg.cc/dVtqQ9dX/sd01.jpg>
Some background...
...
Post by Paul
Post by Andrews
Did I need that slow format?
Is the default of XFat (128kbyte allocation unit) an OK setting for phones?
Is the type of card I bought OK for phones?
The (slow) format is still running, but when it's done, I'm going to copy
my "0001" data over from the old sdcard in the phone, to the new sdcard.
And then I'm going to swap out the old sdcard for the new one for, what I
hope to be a seamless experience using Windows to make Android smoother.
Wish me luck!
You would buy an SD card with static and dynamic wear leveling.
A quick format is good enough. It writes and puts a FAT or $MFT on the partition.
The slow format does the same, except it includes a read-verify of the surface.
Doesn't it fill everything with zeroes as well? Or erase all sectors? That reduces the life of the card or stick.
Writing zeros for full format option was introduced with Vista.
Applicable also to all later Windows o/s.
Full format
 - files erased
 - writes zeros to the whole disk
 - drive scanned for bad sectors(does not fix bad sectors)
 - new root directory and file system
I recommend using Process Monitor, to track what a tool does to a device.

USB is a bit of a problem, as it is treated differently.

Not everything is logged on the system with the same skill and dexterity.
You won't know what's going to happen until you get there.

*******

A full format was a concept of a long time ago. It might have been
called a Low Level Format.

Devices could be soft sectored. There was an index mark on the drive.
As the drive would rotate, your LLF would lay down a track. This
includes writing header section, write splice, and payload area.
If you stopped the low level format, the drive was bricked.
And seeing as the drives cost $1500 back then, you looked
like a right dope when this happened. (At least once, the power
went off at work while I was doing that.)

Well, modern drives are no longer soft sectored. We no longer redefine
the interleave pattern by rewriting the entire surface of the platter.

Modern drives have servo wedges, and both headers and servo wedges are permanent.
The header is the address.

The only thing you can do to a modern drive, is write the payload section
of a sector.

Now, on flash drives, the address is implicit. The address is hard-wired.
Via a MAP table, there can be a mapping between external virtual LBA
and internal storage LBA. The written part of the flash contains
the 512-byte sector, plus a 50-byte syndrome with a Reed-Solomon code in
it. I have never seen any information which points to there being
a verification LBA value in the sector too. But the size of the sectors
does not have to be a precise power of two. Nothing inside modern flash
has to be like that any more. They use all sorts of weird numbers for
stuff, and I don't know the rule set for that.

When you format a drive, the activity does the bare minimum.
You rewrite the metadata tables. Most of them, placing a
minimal length table is sufficient (the table will grow with time).
The master file table, it will be small, and it will have all file
entries removed.

None of the remaining clusters need be written. The OS is not
the maintainer of a forensic situation. The user must know where the
information-leakage points are, and use the right tool for the job
when the highest security is required. I have tested this by injecting
test patterns on drives, and finding later that two hundred "shards"
of info were left behind. Naturally, this leaves me just a bit concerned
about my control of things.

diskpart clean # Removes the partition table; hardly cleans a damn thing, takes one second
     clean all # Takes hours on a slow drive; overwrites the surface with zeros

dd.exe         # dd if=/dev/zero of=/dev/sda bs=8192
               # That writes zeros over the storage device SDA.
               # Hard drive sizes are divisible by 8192, so the entire surface is written.
               # Consult the help to determine the real name string of the drive.
               # The name shown is just an illustration.

You can then partition and format, in the knowledge that any "format"
command you're using, which does not do a lot, all the other clusters
are already zero because you dd'ed them and you don't have to worry
about information leakage.
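
Whether a zero pass really covered everything is cheap to confirm: scan the image and check every chunk. A minimal sketch, run here against a file rather than a raw device:

```python
def all_zero(path, chunk=1 << 16):
    """Return True if every byte of the file (or device image) at
    path is zero, scanning chunk by chunk to keep memory use flat."""
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                return True
            if block.count(0) != len(block):
                return False
```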

Any time I do research on disk layouts, I establish a background pattern
by zeroing the entire drive. This takes a few hours on the size of drive
I would normally use. Next, I partition the drive. Then, take a hex editor,
and "scroll" down near the end of the disk. The GPT secondary table stands
out like a sore thumb, down at the end of the drive. You can't miss it then,
because most of the drive is still zeroed.

Similarly, for RAID research, I zero the drive on an alternate-brand
computer, do the RAID setup on the branded RAID I want to test, then
bring the drive back to the alternate-brand computer and "scroll" with
my Hex editor. Then I can see the 64KB or smaller RAID metadata table.

By knowing how these things work, you can be in control.

Using tools like Process Monitor, you can observe things being done,
if need be.

Paul
...winston
2024-10-25 01:13:06 UTC
Permalink
Post by Paul
Post by ...winston
Full format
 - files erased
 - writes zeros to the whole disk
 - drive scanned for bad sectors(does not fix bad sectors)
 - new root directory and file system
I recommend using Process Monitor, to track what a tool does to a device.
A full format was a concept of a long time ago. It might have been
called a Low Level Format.
Paul
Iirc, a true Low Level Format required 3rd-party tools.
The last time I did a low-level format was years ago, on a Conner
(Seagate-manufactured) drive (small drive, ~500 MB) installed in an HP tower.
Don't even remember the tool; obtained it from an IT admin on a 3.5" floppy.
- a few months later, the drive died. Track 0, un-repairable.

<https://www.easeus.com/partition-master/high-level-format-vs-low-level-format.html?>

<https://www.easeus.com/computer-instruction/low-level-format-vs-standard-format.html?>

<https://www.minitool.com/partition-disk/high-level-format-vs-low-level.html>
--
...w¡ñ§±¤ñ
Paul
2024-10-25 05:35:29 UTC
Permalink
Post by ...winston
Post by Paul
Post by ...winston
Full format
  - files erased
  - writes zeros to the whole disk
  - drive scanned for bad sectors(does not fix bad sectors)
  - new root directory and file system
I recommend using Process Monitor, to track what a tool does to a device.
A full format was a concept of a long time ago. It might have been
called a Low Level Format.
 
    Paul
Iirc, a true Low Level format required 3rdparty tools.
 The last time I did a low level was years ago a Connor(Seagate manufactured drive(small drive ~500 MB) installed in a HP tower.
Don't even remember the tool, obtained it from an IT admin on a 3.5" floppy.
 - a few months later, the drive died. Track O, un-repairable.
<https://www.easeus.com/partition-master/high-level-format-vs-low-level-format.html?>
<https://www.easeus.com/computer-instruction/low-level-format-vs-standard-format.html?>
<https://www.minitool.com/partition-disk/high-level-format-vs-low-level.html>
I was doing them on 5MB and 10MB (full height) drives coming into the department.
It's possible a SASI controller was part of the solution at the time.
And you had to look out the window and check the weather before
you started (some of our power drops were weather-related). And you had
to have the operation run to completion, or the drive would be bricked.

One of the reasons for doing an LLF, was the change from interleave 3 to
interleave 1. Something was "going fast enough" on our hardware setup,
we could change the interleave. (It might have even involved a different
brand SASI card.) While that should have made the drive feel faster, considering
how creaky everything was back then, you would hardly notice.

1 4 7 2 5 8 3 6 9 Interleave 3 (takes three rotations to read nine sectors)
1 2 3 4 5 6 7 8 9 Interleave 1 (one rotation to read nine sectors)
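
The classic placement rule behind those patterns is: step "interleave" slots per sector, sliding forward past slots already taken. A sketch that reproduces both layouts above:

```python
def interleave_layout(n_sectors, factor):
    """Lay logical sectors 1..n_sectors around a track with the given
    interleave factor, skipping forward past already-occupied slots."""
    slots = [None] * n_sectors
    pos = 0
    for sector in range(1, n_sectors + 1):
        while slots[pos] is not None:
            pos = (pos + 1) % n_sectors
        slots[pos] = sector
        pos = (pos + factor) % n_sectors
    return slots

print(interleave_layout(9, 3))  # [1, 4, 7, 2, 5, 8, 3, 6, 9]
print(interleave_layout(9, 1))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```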

The data rates were awful (less than 1MB/sec) ... but nobody benched anything back then.
We didn't want to know :-) When the alternative was a floppy drive at 75KB/sec,
you were not complaining, no matter what the number was.

Before hard drives came along, a "dream machine" was one with two floppy drives.
While this isn't ours, it illustrates what people used to fight over. Nobody
wanted the machines that had only the one floppy (your *OS* was on that floppy).
Any time the OS floppy comes out of the machine, your screen would say
"Hey, dumbass, put my floppy back in" :-) Being asked to copy a data floppy,
would always bring a scowl to the face of a single floppy drive computer user.
I had one of those for a while. There was so little RAM in the computer,
you couldn't buffer a full floppy in there! Nightmare stuff.
"Insert floppy 1" "Insert floppy 2" "Insert floppy 1" ... "Hey dumbass..."

https://www.hpmuseum.net/images/9895A-35.jpg

Paul
Frank Slootweg
2024-10-25 10:15:01 UTC
Permalink
Paul <***@needed.invalid> wrote:
[...]
Post by Paul
Before hard drives came along, a "dream machine" was one with two floppy drives.
While this isn't ours, it illustrates what people used to fight over.
Nobody wanted the machines that had only the one floppy (your *OS* was
on that floppy).
Any time the OS floppy comes out of the machine, your screen would say
"Hey, dumbass, put my floppy back in" :-) Being asked to copy a data
floppy, would always bring a scowl to the face of a single floppy
drive computer user. I had one of those for a while. There was so
little RAM in the computer, you couldn't buffer a full floppy in
there! Nightmare stuff. "Insert floppy 1" "Insert floppy 2" "Insert
floppy 1" ... "Hey dumbass..."
https://www.hpmuseum.net/images/9895A-35.jpg
Two 8" floppy drives for a mere $5830! A steal!

<https://www.hpmuseum.net/display_item.php?hw=262>

Thanks for the memory! I enjoyed the 'Collector's Notes' and the
reference to <https://www.hp9845.net/9845/projects/hpdrive/>. The things
people do to keep old stuff working. Amazing!
Carlos E.R.
2024-10-25 13:11:05 UTC
Permalink
Post by Paul
Before hard drives came along, a "dream machine" was one with two floppy drives.
Yes. I could not afford a hard disk (didn't sufficiently know what it
was, anyway), but I knew I needed two floppy drives on my first machine.
--
Cheers, Carlos.
Paul
2024-10-25 15:02:42 UTC
Permalink
Post by Paul
Before hard drives came along, a "dream machine" was one with two floppy drives.
Yes. I could not afford a hard disk (didn't sufficiently know what it was, anyway), but I knew I needed two floppy drives on my first machine.
We originally started with hard drives, for departmental server level.
The cost could be spread over more desks that way.

Then the 3-inch-high "chunks" of hard drives appeared (and the opportunity
to put one on each desktop had finally arrived). They
could be smaller than other equipment we'd worked on or evaluated.
But still the things struck you as "not very elegant" and
little better than "a floppy with a rigid platter". Just the
heads moving radially, and using a stepper motor to move in and
out, that wasn't elegant, when twenty feet away was equipment
using voice coil. It's not like the consumer technology at the time was
aggressive.

And that stuff could be flaky. My initial reaction was not to take
one home with me :-) It would be a small bundle of trouble. If
you didn't have one, that's a good thing. The shock or vibration spec
was only about 2G's or so. Jumping on the floor could crash the heads.

On the departmental server, the heads retracted out of the disc pack. The
heads didn't touch the platter when the drive was not in use. Whereas the
Seagate full-height drive was CSS (contact start stop). And I don't think
there was any retraction attempt -- if the power went off, the heads
would just drop onto the platter where they were.

Not really all that attractive, for $1500 . Plus the cost of the controller card.
The drive itself was as dumb as a floppy. No SMART. SMART did not exist then.
And Enhanced Secure Erase, consisted of dropping the drive on the floor.
No, we didn't drop any. But the first year, there might have been four failures.

And there was no email on there. Our email was on mainframes. Developer source
was stored on the departmental server. This meant, at least initially, the
risk factors on the hard drive were minimal. I filled mine up with files,
but it took two years of usage, to fill a 10MB drive. Our file systems guy had
collected statistics, and the average file size back then was 2KB. And that's partly
because there were no graphics. Yet, we still did desktop publishing. The user
manuals were two feet thick.

Paul
Kerr-Mudd, John
2024-10-25 16:35:27 UTC
Permalink
On Fri, 25 Oct 2024 11:02:42 -0400
Post by Paul
Post by Paul
Before hard drives came along, a "dream machine" was one with two floppy drives.
Yes. I could not afford a hard disk (didn't sufficiently know what it was, anyway), but I knew I needed two floppy drives on my first machine.
We originally started with hard drives, for departmental server level.
The cost could be spread over more desks that way.
Then the 3 inch high "chunks" of hard drives arrived (and the opportunity
to put one on each desktop had finally arrived). They
could be smaller than other equipment we'd worked on or evaluated.
But still the things struck you as "not very elegant" and
little better than "a floppy with a rigid platter". Just the
heads moving radially, and using a stepper motor to move in and
out, that wasn't elegant, when twenty feet away was equipment
using voice coil. It's not like the consumer technology at the time was
aggressive.
And that stuff could be flaky. My initial reaction was not to take
one home with me :-) It would be a small bundle of trouble. If
you didn't have one, that's a good thing. The shock or vibration spec
was only about 2G's or so. Jumping on the floor could crash the heads.
The departmental server, the heads retracted out of the disc pack. The
heads didn't touch the platter when the drive was not in usage. Whereas the
Seagate full height drive was CSS (contact start stop). And I don't think
there was any retraction attempt -- if the power goes off, the heads
would just drop onto the platter where they were.
Not really all that attractive, for $1500 . Plus the cost of the controller card.
The drive itself was as dumb as a floppy. No SMART. SMART did not exist then.
And Enhanced Secure Erase, consisted of dropping the drive on the floor.
No, we didn't drop any. But the first year, there might have been four failures.
And there was no email on there. Our email was on mainframes. Developer source
was stored on the departmental server. This meant, at least initially, the
risk factors on the hard drive were minimal. I filled mine up with files,
but it took two years of usage, to fill a 10MB drive. Our file systems guy had
collected statistics, and the average file size back then was 2KB. And that's partly
because there were no graphics. Yet, we still did desktop publishing. The user
manuals were two feet thick.
Paul
A great place for this nostalgia is afc; note the xpost.
I imagine followup replies would be best directed there, i.e. dropping the
xposts back to acow-10 and acmw.
--
Bah, and indeed Humbug.
Paul
2024-10-24 18:42:09 UTC
Permalink
Post by Paul
You would buy an SD card with static and dynamic wear leveling.
Hi Paul,
Thanks for that advice. I don't even know what that means. On Amazon, they don't tell you that in the product information.
My choices were Lexar or Sandisk. I chose the (cheaper) Lexar.
But should I have, given that I might need "wear leveling" for that card?
Putting my glasses on & my magnifying glass to the package, I still can
barely read the print - and it's too wide for the macro lens to snap a
single photo of, so I'll create a mosaic with the macro so you can see all
the fine print and funny-looking sdcard-specific-emoji (sdoji?).
The information is hard to get.

The tech support at the SD card companies is mostly clueless.

When an SD card has both static and dynamic wear leveling, it
then has the endurance of a SATA SSD drive. You get 600 writes
to every location on the device. The wear is evened out.

A USB flash stick, some seem to have nothing. A few too many
writes to low addresses, and you burn right through it. This
means at USB flash stick death, location 0 has 600 writes
while location 0xFFFFFFFF has zero writes. As does 0xFFFFFFFE.
Much of the upper media section of the device is still
waiting for the first write. And in reality, as near as I
can determine, you're not even getting 600 writes. It's a
pathetically lower number. This happens with TLC or QLC flash,
while the MLC or SLC we can no longer get lasts a lot longer.
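The difference is easy to see in a toy model. The sketch below is purely
illustrative (no real controller works this simply, and the block count and
the 600-cycle endurance figure are just the numbers from above): without
leveling, a write-heavy logical address burns through its physical block
while most of the media sits untouched; with leveling, the same workload
is spread evenly.

```python
# Toy model of flash wear. Each physical block tolerates a limited
# number of program/erase cycles (600 in the figure above). The
# workload hammers the same ten logical addresses, as writes to a
# FAT area would. Illustrative sketch only, not a real controller.

def simulate(blocks, writes, leveled):
    wear = [0] * blocks
    for i in range(writes):
        if leveled:
            # dynamic wear leveling: steer each write to the
            # least-worn block, evening out the erase counts
            target = wear.index(min(wear))
        else:
            # no leveling: logical addresses 0..9 map straight
            # to physical blocks 0..9 and burn through them
            target = i % 10
        wear[target] += 1
    return wear

hammered = simulate(blocks=100, writes=6000, leveled=False)
evened = simulate(blocks=100, writes=6000, leveled=True)

print(max(hammered))  # 600 cycles on the hot blocks: at the limit,
                      # while 90% of the media never saw a write
print(max(evened))    # 60 cycles, spread across all 100 blocks
```

Same number of writes in both runs; only the placement differs, which is
the whole point of the feature.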

For any device, we would like that feature. There is a Patriot
USB stick (and there are other brands) that has an SSD inside
and a USB-to-SATA adapter chip. Those have both features
and support TRIM.

While some discussion threads attempt to pin down which
model numbers of SD devices have the feature, there is
no one to verify the info and tell us what we're buying.

This is why I quick format, plug in, and carry on with life,
because I'm not going to find anyone with the answer.

Do backups every once in a while.

Paul
Andrews
2024-10-24 17:35:37 UTC
Permalink
Post by Andrews
I need a bigger sd card - I spent years for this moment by using Windows to
make Android sdcard swaps smoother and drama reduced experiences for all.
Some questions...
Does a Windows "Quick Format" work as well as the slow format?
Does the format type matter when the sd card is to be used in a phone?
Quick format doesn't check for bad sectors; full format does (takes
longer, mostly due to the scanning for bad sectors). Full format (iirc)
removes prior stored files.
- your choice. If you're concerned about the card (probably sourced and
made west of the Pacific Ocean, usually China, South Korea, the
Philippines, etc.) having bad sectors, choose full. If not, choose Quick.
Thanks for that advice, where I formatted the 128GB Lexar sd card on
Windows, which took a long time (maybe 45 minutes or so?) but it seems from
what you wrote that it might not be a bad idea depending on the provenance.

Luckily, the Windows format of the Android sdcard is only done once.

It's formatted on Windows mostly to set the volume label so that the phone
still thinks the old 64GB sdcard is in the phone, when it's a 128GB sdcard.
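For anyone who quick formats but still wants some confidence in a freshly
bought card, a rough self-check is to write a known pattern to a large file
on the card and read it back -- the idea behind tools like h2testw,
sketched minimally below. The drive letter is a placeholder, and this only
exercises the space the test file occupies, not every sector.

```python
# Minimal write-then-verify check for removable media.
# Sketch only: TEST_PATH below is a placeholder for a file on the
# mounted card, and a pass proves only that the space this file
# landed on reads back correctly.
import os

def write_and_verify(path, megabytes=16, chunk=1024 * 1024):
    pattern = bytes(range(256)) * (chunk // 256)  # repeating 0..255
    with open(path, "wb") as f:
        for _ in range(megabytes):
            f.write(pattern)
    with open(path, "rb") as f:
        for _ in range(megabytes):
            if f.read(chunk) != pattern:
                return False  # read-back mismatch: suspect media
    os.remove(path)  # clean up the test file on success
    return True

# Example usage (placeholder path):
# ok = write_and_verify(r"E:\verify.bin", megabytes=64)
# print("verified" if ok else "possible bad sectors")
```

It is deliberately crude: a real surface scan (or a full format) covers the
whole device, and this does not.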
Post by Andrews
I just bought from Amazon a 128GB three-pack with reader at about $10 each.
And, while a quick format on Windows takes a couple of seconds, a slow
format takes quite a bit longer - it's still running on my old desktop
Did I need that slow format?
Is the default of XFat (128kbyte allocation unit) an OK setting for phones?
Your phone supports both exFAT and FAT32.
exFAT is usually more efficient in storage and file transfer for
large-capacity cards.
exFAT is almost always recommended for cards greater than 32GB.
For a 128GB SDXC card (phone or PC [laptop, tablet]) I would never consider
formatting as FAT32 if also used in a phone.
Thanks. I took the default of exFAT for the 128GB sdcard format.
I'll remember that for the future Windows formats of Android sdcards.
As a test on Win10 Pro (1 TB SSD main drive) - I took a spare 3-yr-old
128 GB Samsung EVO Plus Class 10 U3 SDXC card and formatted full - took
39 minutes for a full format. I also full formatted a 5-yr-old 64 GB
SanDisk Class 10 U1 (previously only used in a digital camera, no longer
used, pretty much a dust collector in the spare parts box) - took 32
minutes for the full format.
I didn't measure the time, but mine was similar in that my 128GB sdcard
took something like 45 minutes to complete.

Just copying the top-level folder over to the new sdcard took similar time.

So the price of portable storage (meaning you can move the sdcard from one
phone to another, or you can swap out sdcards and everything still works)
is about two hours of waiting for things to finish doing their thing.

Of course those two hours on Windows save some number of hours on Android
if you have your Android apps store their data (such as maps) on the sd
card.

The phone still "thinks" it's the same card because the volume label didn't
change (does a volume label need to be 4 characters, a dash, & four more?).
- Not sure if it compares to whatever you bought (a quick look on
Amazon didn't show any Samsung [my preferred SDXC card] 3 packs), which
seems to mean the Samsung test may or may not apply to full formatting
a different brand of SDXC card, but the 64 GB at approx. a half hour
might indicate up to an hour for a full format on a 128 GB card (twice as
many sectors to scan).
Here's what I bought on Amazon <https://www.amazon.com/dp/B0CB11S919>.
I didn't know what to look for, so I simply went by the $28 price.
That's about $10 per 128GB of portable storage since it was 3 cards.

In case the links change over time, here's what the description says.

Lexar E-Series 128GB Micro SD Card 3 Pack, microSDXC UHS-I Flash Memory
Card with Adapter, 100MB/s, C10, U3, A1, V30, Full HD, 4K UHD, High Speed
TF Card

Wide Compatibility:
Ideal for your smartphones, tablets, Drones, action cameras and Gopro.
Premium memory solution for smartphones, tablets, or action cameras.

4K Ultra UHD:
Quickly captures, plays back, and transfers media files,
including 1080p Full-HD, 3D, and 4K UHD video.

High Speed Memory Card:
Leverages UHS-I technology for a transfer speed up to 100MB/s.
Loads apps faster with A1-rated performance. (Based on internal
test environment of Lexar, so the actual speed may vary with
different host devices and environments. For devices that don't
support UHS-I, the transmission speed will be different due to
interface limitations.)

Multi Capacity:
Available in capacities ranging from 32GB to 512GB.
The 128GB micro sd card can support up to 6 hours 4K video,
or up to 20 hours 1080P video, 37,600 photos, or 19,440 songs.
(Due to different capacity algorithms and partial capacity are
used for system files, management and performance optimization,
so the actual available capacity may be less than the
identifying capacity.)

Ultra Durable: Waterproof, temperature-proof, shockproof, magnetic-proof.

Would you have purchased those three cards (for about $10 each) if you knew
that the major use would be to go into a smartphone such as the A32-5G?
...winston
2024-10-24 18:35:36 UTC
Permalink
Post by Andrews
Would you have purchased those three cards (for about $10 each) if you knew
that the major use would be to go into a smartphone such as the A32-5G?
:)
That's an easy answer for me.
The only SDXC cards that I've had fail were Lexar, SanDisk, and
Verbatim (and some time ago)...since then I've used Samsung cards in
pcs (laptops, tablets in the SDXC slot), cameras, and in a desktop (with
a Sabrent USB/SDXC card adapter).

Note: for the latter(desktop with adapter) I've had two USB/SDXC
adapters, both were Sabrent.
The first purchased adapter had intermittent connection problems in
the desktop's USB2/USB3/USB-C ports (tried all 8 of them, same
issue). Contacted Sabrent; they provided (without charge) a replacement
and didn't want the initial one returned. The replacement has worked fine
since.

A while back, I tried the older adapter on my laptop and Surface tablet
- same intermittent issue on both devices, with one exception. If I plug
the adapter into the laptop USB-C port (using a USB 3 to C cable) it
works fine. The replacement adapter works without issue on the laptop
and tablet.
- likewise, the Samsung sdxc cards without the Sabrent adapter work
when inserted into the media card reader slot on all devices.
--
...w¡ñ§±¤ñ
Andrews
2024-10-25 18:41:13 UTC
Permalink
Another question I have is that I've chosen those 8 (actually 9) characters
because every sd card seems to come with that kind of a volume label (e.g.,
BF3A-D4C2); but I wonder if I can format it to one character?
I guess I'll try that since I doubt anyone knows if a volume label of a
single character will work on Android - so I'll run that test separately on
the remaining sd cards as I've already copied over 0001 data to this one.
Would formatting a micro-sd volume label to just "SD" for example, work?
I tested it on the remaining sdcard and it works just fine to set the
volume label to a single character instead of the 4-dash-4 characters that
are typical.

I popped the single-character volume label into my Android and it "seems"
to have worked also, which would simplify the path as LOTS of programs
require you to type a filespec (such as Android webdav servers).

So now, instead of a filespec of "/storage/0000-0001/0001/webdav", I can
type "/storage/1/0001/webdav", and to be consistent, that can be
shortened to "/storage/1/1/webdav", where the first "1" is the card volume
label and the second "1" is a top-level directory designed to show up
first in the list (which is typically in alphabetical order).

Unfortunately, I have so many phones already set up as
/storage/0000-0001/0001 for the data storage, that I'm kind of stuck with
the longer convention.

But the good news, for those of you who are just starting to become
efficient: here are my recommendations for using Windows to make Android
smoother.

1. Format all your sdcards on Windows as exFat with a single-character
volume label - which can be the number "1" (or anything you like).

2. Create a top-level directory which shows up first in a sort
(again, anything you like, but the number "1" shows up early on).

3. Then, forevermore, you can set apps to store data in that directory.
"/storage/1/1/."

If you do this for EVERY sdcard, then ANY sdcard can be popped into
another phone, and everything will work because the phone thinks it's the
same card.
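The reason this works is that Android exposes the card under
/storage/<VOLUME-LABEL>/, so an app setting that stores an absolute path
survives a card swap only if the label never changes. A small sketch of
the convention (the label and folder names are just the ones from this
thread, not anything Android mandates):

```python
# Build the stable per-app data paths that survive a card swap.
# VOLUME_LABEL and TOP_DIR are the poster's convention, set once
# when each card is formatted on Windows.
from pathlib import PurePosixPath

VOLUME_LABEL = "0000-0001"  # identical on every replacement card
TOP_DIR = "0001"            # the single user-owned top-level folder

def app_data_path(*parts):
    """Path an app should be pointed at; constant across card swaps."""
    return str(PurePosixPath("/storage", VOLUME_LABEL, TOP_DIR, *parts))

print(app_data_path("webdav"))
# /storage/0000-0001/0001/webdav
print(app_data_path("maps", "tiles"))
# /storage/0000-0001/0001/maps/tiles
```

Change VOLUME_LABEL to "1" and TOP_DIR to "1" and you get the short
"/storage/1/1/..." form discussed above.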

That's what I mean by "portable" storage (which I've been doing for years).

Given it's easier to manage Android from a Windows PC than from the phone,
in the end, the combination of Windows + Android makes both smoother.
Andrews
2024-11-01 06:26:03 UTC
Permalink
Post by Andrews
Those two simple things, I hope, will make this sd swap, uneventful.
a. Format the sd card on Windows to "0000-0001"
b. on Windows, create a top-level directory of "0001"
Note: The names don't matter as long as they're consistent.
UPDATE:

It's November 1st, and if there was going to be a glitch in swapping out
the old 64GB sdcard (which had been in daily use on the Galaxy A32-5G
since April of 2021) for the new 128GB Lexar microSDXC UHS-I sdcard, it
would have shown up by now.

Everything has been smooth. This shows that planning ahead, years before
you swap out the portable memory, pays off. Doing two things on Windows
works:

1. Format the sdcard on Windows to a known static volume name, and,
2. Put *everything* you store on that card into a single top-level
directory (which of course can have as many subdirs as you see fit).

Let the Android operating system pollute every other top-level directory
on that sdcard - Android won't touch your own top-level hierarchy.

It doesn't get smoother than this.

A. On Windows, format the new sdcard (I did a full, slow format)
B. Connect Android to the Windows PC via USB (or Wi-Fi)
C. Copy over the one directory on the old sdcard to the new card

Then...
a. Shut Android off & swap the old sdcard for the new sdcard
b. Boot Android and test the applications
c. Everything should work fine
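Step C can be done with any file manager, but for what it's worth, here is
a minimal Python sketch of copying the single top-level folder from the
old card to the new one. The drive letters in the usage comment are
placeholders for wherever Windows mounted the two cards in a reader.

```python
# Copy the one user-owned top-level folder between two mounted cards.
# Sketch with placeholder drive letters; everything else on the old
# card is Android clutter that does not need to come along.
import shutil
from pathlib import Path

def migrate(old_card, new_card, top_dir="0001"):
    src = Path(old_card) / top_dir
    dst = Path(new_card) / top_dir
    # dirs_exist_ok (Python 3.8+) lets you re-run after an
    # interrupted copy without deleting what already arrived
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return dst

# Example usage (placeholder drive letters):
# migrate(r"E:\\", r"F:\\")   # old 64GB card -> new 128GB card
```

copytree preserves the directory shape, which is all the scheme depends
on: same volume label, same top-level folder, same paths inside.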

Android has no idea that you swapped out the 64GB card for a 128GB sdcard!
It just works.
Andrews
2024-11-01 13:56:56 UTC
Permalink
Post by Andrews
1. Format the sdcard on Windows to a known static volume name, and,
2. Put *everything* you store on that card into a single top-level
directory (which of course can have as many subdirs as you see fit).
Let the Android operating system pollute every other top-level directory on
that sdcard - but Android won't touch your entire top-level hierarchy.
It doesn't get smoother than this.
BELOW IS SAGE ADVICE BORN OF THE WISDOM OF BEING OVER 80 YEARS OLD:

Given how smooth it was to think ahead by a few years and put everything
you cared about in one place on Android, it should be the same when
replacing a Windows HDD.

I've thought ahead by decades to the point I can wipe out almost all of
Windows and it wouldn't cause me more than a moment to recover from that.

At least a decade ago I put everything I cared about on Windows into only
two top-level hierarchies, neither of which is a Windows default
hierarchy.

That way, Windows never pollutes those two hierarchies:
C:\data (this contains everything that I care about on Windows)
C:\apps (this is simply the installed apps - which can be re-installed)

Notice even C:\apps isn't important because it's where I install software.
All software can be re-installed - so only C:\data matters. Nothing else.


Having put everything I care about in a single hierarchy, one question in
the quest for understanding operating systems is WHY those other folders
exist at all.

On Windows I don't touch the garbage "C:\Program Files" directory.
Nothing I care about ever goes there on purpose.

On Windows I (almost) never touch the C:\Users directory either.
Nothing I care about goes there on purpose (not even my menus).

Likewise, on Windows I "rarely" touch the "C:\Windows" hierarchy.

The result is I could (almost) wipe out my entire hard drive save for the
"C:\data" directory and I wouldn't lose anything that I cared about.

I could (almost) swap out the entire hard drive and just copy over C:\data
and it would be (almost) smooth - although there might be a few gotchas.

What's wonderful about thinking ahead with operating systems is that I
don't have to back up the entire hard drive. I just back up "C:\data".

Nothing else matters. Almost.

What would I lose, if I lost everything but C:\data on a Windows HDD, that
I couldn't trivially recover from? Nothing: I can always re-install my
apps.

Note: I have a separate HDD containing the installers which uses the *same*
hierarchy as C:\apps (and C:\data\apps for appdata) which makes it smooth.

For example, thinking decades ahead, gVim is installed over here:
C:\apps\editors\text\gvim
Which means the gVim installers are archived in exactly the same way:
I:\installers\editors\text\gvim

Another example: IrfanView is installed the same well-organized way:
C:\apps\editors\pic\irfanview
Which means the IrfanView installers are archived in the same structure:
I:\installers\editors\pic\irfanview

Same with web browsers (and the same with all well-behaved addon
programs):
C:\apps\browsers\{firefox,epic,opera,brave,iron,seamonkey,etc}
I:\installers\browsers\{firefox,epic,opera,brave,iron,seamonkey,etc}

Even the taskbar-pinned menus are NOT stored in the Windows hierarchies:
C:\menus\editors\text\gvim
C:\menus\editors\pic\irfanview
C:\menus\browsers\{firefox,epic,opera,brave,iron,seamonkey,etc}
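Because the three trees share the same shape below the root, any install
path can be mapped mechanically to its installer archive or its menu
entry just by swapping the root. A sketch of that mapping, using the
example paths above (the function name is mine, not anything Windows
provides):

```python
# Map a path in one mirrored hierarchy to the matching path in
# another, by replacing the root and keeping the tail identical.
from pathlib import PureWindowsPath

def mirror(install_path, new_root):
    p = PureWindowsPath(install_path)
    # parts[0] is the drive ("C:\\"), parts[1] is the old root
    # ("apps"); graft everything below that onto the new root
    return str(PureWindowsPath(new_root).joinpath(*p.parts[2:]))

gvim = r"C:\apps\editors\text\gvim"
print(mirror(gvim, r"I:\installers"))  # I:\installers\editors\text\gvim
print(mirror(gvim, r"C:\menus"))       # C:\menus\editors\text\gvim
```

The discipline lives entirely in the naming convention; the code just
shows how little machinery the convention needs.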

Notice I could (almost) wipe out C:\Windows & C:\Program Files and C:\Users
and there would barely be a hiccup (almost) in recovering from that.

Every day I sleep well knowing I only need to back up C:\data and
everything I care about is safe because the rest is easy to restore.

To recover from a loss of the C:\ partition would (almost) be trivial.

My advice to others is to think ahead by a decade.
a. Store everything you care about in one top-level hierarchy
(that is NOT one of the Windows default directories!)
b. The reason is those default directories are polluted by the OS
c. Back up only that one directory and sleep well every day of your life!
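One bonus of backing up a single hierarchy is that verifying the backup is
cheap too. Here is a sketch (placeholder paths) that walks the live tree
against the backup with Python's standard filecmp module and reports
anything missing or changed:

```python
# Compare a live data tree against its backup and list differences.
# Sketch with placeholder paths; dircmp's default comparison is
# shallow (file size + timestamp), which is fine for spotting
# forgotten files and obviously stale copies.
import filecmp

def backup_differences(live, backup):
    """Return human-readable notes on paths that don't match."""
    diffs = []
    def walk(cmp, prefix=""):
        for name in cmp.left_only:
            diffs.append(prefix + name + " (not backed up)")
        for name in cmp.right_only:
            diffs.append(prefix + name + " (only in backup)")
        for name in cmp.diff_files:
            diffs.append(prefix + name + " (changed)")
        for name, sub in cmp.subdirs.items():
            walk(sub, prefix + name + "/")
    walk(filecmp.dircmp(live, backup))
    return diffs

# Example usage (placeholder paths):
# for line in backup_differences(r"C:\data", r"I:\backup\data"):
#     print(line)
```

An empty list means the backup mirrors the live tree, which is the whole
"back up one directory, sleep well" promise made checkable.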
--
I never use plurals in the most important directory names; but I show them
above in the key directory names simply for easier concept understanding.