US RE43,032 E

(19) United States
(12) Reissued Patent          (10) Patent Number:            US RE43,032 E
     Gaertner et al.          (45) Date of Reissued Patent:  Dec. 13, 2011

(54) SYNCHRONIZED MIRRORED DATA IN A DATA STORAGE DEVICE

(75) Inventors: Mark A. Gaertner, Vadnais Heights, MN (US); Luke W. Friendshuh, Elko, MN (US); Stephen R. Cornaby, Yukon, OK (US)

(73) Assignee: Seagate Technology LLC, Scotts Valley, CA (US)

(21) Appl. No.: 12/684,232

(22) Filed: Jan. 8, 2010

Related U.S. Patent Documents

(64) Reissue of:
     Patent No.: 7,318,121
     Issued: Jan. 8, 2008
     Appl. No.: 11/283,273
     Filed: Nov. 18, 2005

U.S. Applications:

(62) Division of application No. 10/185,063, filed on Jun. 28, 2002, now Pat. No. 6,996,668.

(60) Provisional application No. 60/310,368, filed on Aug. 6, 2001.

(51) Int. Cl.
     G06F 12/16 (2006.01)

(52) U.S. Cl. 711/114; 711/162; 707/610

(58) Field of Classification Search: None. See application file for complete search history.

Primary Examiner: Kevin Ellis
Assistant Examiner: Alan Otto
(74) Attorney, Agent, or Firm: Crawford Maunu PLLC

(57) ABSTRACT

A data storage device mirrors data on a data storage medium. The multiple instances of data are synchronized in order to optimize performance of the reading and writing, and the integrity of the data. Preferably, a data storage device is allowed to defer writing multiple copies of data until a more advantageous time.

14 Claims, 10 Drawing Sheets

FIG. 1 (Sheet 1): block diagram of system 100, showing data storage medium 110, mirrored data 120 and 125, processor 130, signal lines 140, synchronization means 150, write command 160, and read command 170.

FIG. 2 (Sheet 2, also the representative front-page drawing): flowchart of method 200: receive read command for mirrored data (210); is data synchronized? (220); option 1: synchronize data (260); option 2: find copies with most recent data (250); insert read sub-requests into scheduling pool (270); schedule sub-request (280); remove remaining sub-request(s) from the scheduling pool (290).

FIG. 3 (Sheet 3): flowchart of method 300: receive a write command (310); caching enabled? (320); volatile caching? (327); for each of the disabled-caching path (330-339), the volatile-cache path (340-349), and the nonvolatile-cache path (350-359): transfer write data to the cache, determine OCW restrictions, insert write sub-requests into a scheduling pool, schedule at least one sub-request, remove remaining sub-request(s) from the scheduling pool, transfer write data to the recording media, and add remaining sub-request(s) to the OCW list.

FIG. 4 (Sheet 4): flowchart of method 400: copy an OCW list to nonvolatile memory (410); copy corresponding instance data to nonvolatile memory (420).

FIG. 5 (Sheet 4): flowchart of method 500: restore OCW list to operating memory (510); flush write sub-requests from OCW list (520).

FIG. 6 (Sheet 5): flowchart of method 600: determine mass storage device idle (610); determine minimal performance effect of write to the mass storage device (620); determine read frequency of unsynchronized data location (630); determine quantity of OCW list entries >= a predetermined quantity (640).

FIG. 7 (Sheet 5): flowchart of method 700: write timestamp to media for every written sector (710); determine most recent data (720); write most recent data to other instances (730).

FIG. 8 (Sheet 6): flowchart of method 800: address range sequential, overlapping, or close to an existing entry? (810); merge with existing entry (820); an unused entry in the OCW list? (830); create new list entry in OCW (840); merge with an existing entry (850); merge with the entry that has the closest address range (860); logically remove any older entries that overlap with the address range of the new entry (870).

FIG. 9 (Sheet 7): block diagram of apparatus 900: read-request receiver 902, read request 909, cache 980, synchronization determiner 925, synchronizer 944, OCW list 946, inserter 950, sub-requests A 952 and B 954, scheduling pool 956, scheduler 958 with sorter 960, selected sub-request 962, remover 964, and storage medium 930 holding initial instance 923 and subsequent instance 926.

FIG. 10 (Sheet 8): block diagram of apparatus 1000: receiver 1010, write request 1015, cache determiner 1020, volatile cache determiner 1030, synchronizer 1040, inserter 1045, sub-requests A 1050 and B 1052, write data 1055, scheduler 1060 with sorter 1062, scheduling pool 1065, remaining sub-request(s) 1070, adder 1080, OCW list 1082, selected sub-request(s) 1085, writer 1090, and mass storage medium 1095.

FIG. 11 (Sheet 9): plan view of disc drive 1100.

FIG. 12 (Sheet 10): functional block diagram of the disc drive 1100.

SYNCHRONIZED MIRRORED DATA IN A DATA STORAGE DEVICE

Matter enclosed in heavy brackets [ ] appears in the original patent but forms no part of this reissue specification; matter printed in italics indicates the additions made by reissue.

RELATED APPLICATION

This present application is a divisional of and claims priority of U.S. patent application Ser. No. 10/185,063, filed Jun. 28, 2002, now U.S. Pat. No. 6,996,668, which claims the benefit of U.S. Provisional Application Ser. No. 60/310,368, filed Aug. 6, 2001, both of which are hereby incorporated by reference in their entirety. This application is related to U.S. Pat. No. 6,295,577, issued Sep. 25, 2001, entitled "Disc storage system having a non-volatile cache to store write data in the event of a power failure."

FIELD OF THE INVENTION

This invention relates generally to data storage devices, and more particularly to data mirroring in data storage devices.

BACKGROUND OF THE INVENTION

In order to increase fault tolerance in a group of data storage devices, data mirroring is often implemented. Previously, data mirroring has been implemented in redundant arrays of inexpensive discs (RAID). Data mirroring may be implemented in RAID devices by writing duplicate instances of data to each of two or more disc drives in the RAID device. Conventional RAID mirroring schemes use controlling technology outside the disc drive to manage the duplicated data, such as in a RAID controller. The RAID controller, either in the form of software or hardware, manages which instance of data to read. The data has to be written at least twice to synchronize the mirrored data every time there is a write request. This can cause a performance problem because the number of write requests has increased and correspondingly the time to complete them has increased.

On the other hand, data mirroring has also been performed in single disc drives. Data mirroring in a single drive poses the same problems as data mirroring in a RAID system, as well as additional problems. For instance, the average latency of an unqueued read request in a non-mirrored disc drive is one-half revolution because the one instance of data that is read is, on average, one-half of a revolution away from the head when the read request is initiated. To improve the inherent latency problems in a disc drive, data mirroring has been incorporated into disc drives. By placing mirrored data on the same disc drive at different angular positions, the latency problems associated with a read request have been substantially lowered. This is accomplished because the multiple copies of data, usually placed 180 degrees opposed from each other, reduce the average latency of a read request to one-quarter of a revolution. However, the benefit of the performance gain for read requests is offset by the performance loss of the write requests.

The problem is that a write request must either write all copies immediately or deal with the complications of having different data on the copies. If all copies are written immediately, then the average latency of an unqueued write request is three-quarters of a revolution: one-quarter for the first copy to be written and one-half for the second copy to be written. What is needed is a system, method and/or apparatus that manages the reading and writing of mirrored data in a manner that minimizes the performance degradation during writes, yet provides the ability to read the most recently written data.

SUMMARY OF THE INVENTION

The above-mentioned shortcomings, disadvantages and problems are addressed by the present invention, which will be understood by reading and studying the following specification.

In the present invention, the above mentioned problems are solved by allowing a data storage device to defer writing the copies of the data until a more advantageous time. One embodiment of the present invention provides a method to select the most effective copy to write. Subsequently, the method may defer the writing of the other copies until a more advantageous time. Another embodiment includes a method that allows a drive to be self-aware as to whether the copies contain the same data. Subsequently, the method insures that the most recent copy is returned on a read request. In addition, the present invention also can be implemented as a data storage device or as a data storage system.

These and various other features as well as advantages which characterize the present invention will be apparent upon reading of the following detailed description and review of the associated drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram that provides a system level overview of the operation of embodiments of the present invention.

FIG. 2 is a flowchart of a method to manage retrieval of data according to an embodiment of the invention.

FIG. 3 is a flowchart of a method to manage writing of data according to an embodiment of the invention.

FIG. 4 is a flowchart of a method to manage synchronized mirrored data according to an embodiment of the invention.

FIG. 5 is a flowchart of a method performed by a data storage device, according to an embodiment of the invention.

FIG. 6 is a flowchart of a method to identify an opportune time to flush at least one write sub-request from an OCW list according to an embodiment of the invention.

FIG. 7 is a flowchart of a method to manage synchronized mirrored data according to an embodiment of the invention.

FIG. 8 is a flowchart of a method to add a new entry to an OCW list, according to an embodiment of the invention.

FIG. 9 is a block diagram of an apparatus to manage retrieval of data, according to an embodiment of the invention.

FIG. 10 is a block diagram of an apparatus to manage writing of data, according to an embodiment of the invention.

FIG. 11 illustrates a plan view of a disc drive incorporating a preferred embodiment of the present invention showing the primary internal components.

FIG. 12 illustrates a simplified functional block diagram of the disc drive shown in FIG. 11.

DETAILED DESCRIPTION OF THE INVENTION

The invention described in this application is useful for all types of data storage devices, including hard-disc drives, optical drives (such as CDROMs), ZIP drives, floppy-disc drives, and many other types of data storage devices.

Referring now to FIG. 1, a system level overview 100 of the operation of embodiments of the present invention is shown. Embodiments of the invention operate in a data storage device, such as a magnetic disc drive [1300] 1100 in FIG. [13] 11.

System 100 manages mirrored data 120 and 125 stored on a data storage medium 110. System 100 also includes a processor 130 that is operably coupled to the data storage medium 110 through signal lines 140. System 100 also includes synchronization means 150 operative on the processor 130 to synchronize the mirrored data 120 and 125 on the data storage medium 110. In some embodiments, synchronizing accounts for, and/or is performed in reference to, various caching techniques, such as caching in a non-volatile medium, caching in a volatile medium, or no caching. In varying examples of the synchronization means 150, mirrored data 120 and 125 is associated with a write command 160 or a read command 170. In some examples, the system 100 is incorporated in a data storage device, such as a disc drive, and the data storage medium 110 is a recording medium, such as a disc assembly. System 100 provides the advantage of synchronizing mirrored data within a data storage device.

In general, there are two manners to perform a write command of synchronized data; either all mirrored instances of the write data are written immediately to the data storage medium, or the instances are written at different times and must be managed on the data storage medium in a manner that maintains the ability to retrieve the most current instance. The instances are also known as "copies." A request to write an instance is referred to as a write sub-request.

Writing all instances immediately is a solution that is easy to implement, but is generally lacking in performance. In a rotational storage device, such as disc drive [1300] 1100, when all instances of data are written immediately, the average latency of an unqueued write sub-request is three quarters of a revolution: one quarter of a revolution for the first write and one half of a revolution for a second write, where the copies are written 180 degrees opposed to each other. Writing all instances of synchronized mirrored data is a relatively simple solution to implement in the firmware of the data storage device. However, this approach can cause temporary peaks in the activity of the disk drive. The temporary peaks in activity are caused by multiple write sub-requests of mirrored data to the data storage medium of the disk drive that are performed for each singular write command received by the disk drive. Furthermore, a read request for synchronized mirrored data is performed generally faster than a read request for unsynchronized non-mirrored data, as discussed below in conjunction with method 200 in FIG. 2.
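These latency figures follow directly from the geometry of evenly staggered copies. As an illustrative aside, not drawn from the original specification, the following Python sketch reproduces the arithmetic for both the read and the immediate-write cases:

    # Expected rotational latency, in revolutions, for n copies evenly
    # staggered around the disc (two copies sit 180 degrees apart).

    def read_latency(n_copies: int) -> float:
        # The head is, on average, halfway to the nearest of n copies
        # spaced 1/n of a revolution apart: 1/(2n) of a revolution.
        return 1.0 / (2 * n_copies)

    def immediate_write_latency(n_copies: int) -> float:
        # 1/(2n) of a revolution to reach the first copy, then 1/n of a
        # revolution before each of the remaining n-1 copies arrives.
        return 1.0 / (2 * n_copies) + (n_copies - 1) / n_copies

    print(read_latency(1))             # 0.5  rev: non-mirrored read
    print(read_latency(2))             # 0.25 rev: mirrored read
    print(immediate_write_latency(2))  # 0.75 rev: write both copies at once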

Managing synchronized mirrored data on a data storage medium in a manner that maintains the ability to retrieve a current instance yields better performance than conventional systems. In this manner, a first instance of the mirrored data is written. After the first instance is written, all other instances of the mirrored write data are not written; instead the write sub-requests are deferred until a more advantageous or opportune time. Thus, the initial instance of synchronized mirrored data is written in the same amount of time as one instance of unsynchronized non-mirrored data in conventional solutions. Deferring the writing of the other instances until a time when the drive is idle yields a substantial performance improvement over a conventional data mirroring system and also yields no substantial performance degradation in comparison to writing non-mirrored data systems.

However, writing the first instance of synchronized mirrored data and the other instances of synchronized mirrored data at different times creates complexity that must be managed. Read requests to a location in which instances do not contain the same data must read the most recent data. Due to the deferring of the writing of an nth instance, the most recent data may only reside in one of the locations or in cache. Power loss, reset conditions, and any other event that destroys volatile memory contents could prevent the deferred instances from ever being written.

FIG. 2 is a flowchart of a method 200 to manage retrieval of data from a data storage device that has mirrored data according to an embodiment of the invention. In general, read performance is improved by selecting the most effective instance to read. Method 200 includes receiving 210 a read command associated with mirrored data that is stored on a data storage medium in the data storage device. Mirrored data usually includes at least one initial instance of data and at least one copy of the initial instance of data. Further, method 200 may include determining 220 whether the read command is associated with synchronized or unsynchronized data. Preferably, in some embodiments, the determination is made from data obtained from an outstanding-copy-write (OCW) list.

An OCW list is a data structure, such as a table or a queue, that indicates write sub-requests that have not been performed to a data storage medium. When a write sub-request is performed, it is removed from the OCW list. The OCW list is discussed in further detail below in relation to write commands. In some embodiments, the determination is based on an indicator, such as a bit flag.
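To make the OCW list concrete, the following Python sketch models one workable shape for it; the field names and the synchronization test are illustrative assumptions, and the specification prescribes only the information each entry carries (address, length, instance, and data location, as detailed later):

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class OCWEntry:
        # One outstanding copy write: a write sub-request not yet performed.
        lba: int                 # starting logical block address
        length: int              # number of blocks in the range
        instance: Optional[int]  # which mirror copy; None = "all copies"
        cache_location: Optional[int] = None  # where the write data is cached

        def overlaps(self, lba: int, length: int) -> bool:
            return self.lba < lba + length and lba < self.lba + self.length

    class OCWList:
        # Table of write sub-requests that have not been performed.
        def __init__(self) -> None:
            self.entries: List[OCWEntry] = []

        def is_synchronized(self, lba: int, length: int) -> bool:
            # A range is synchronized iff no outstanding copy write touches
            # it (the determining 220 test of method 200).
            return not any(e.overlaps(lba, length) for e in self.entries)

        def remove_performed(self, entry: OCWEntry) -> None:
            # An entry is removed once its write sub-request is performed.
            self.entries.remove(entry)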

When the determining 220 indicates that the data is synchronized, steps 270, 280, and 290 may be performed. When the determining 220 indicates that the data is unsynchronized, the method 200 has two options enabling it to retrieve the most recent data. The first option includes synchronizing 260 the data by completing the writes related to the OCW entries. Once this is done, the method may proceed to inserting 270 the read sub-requests into the scheduling pool. As an alternative to option 1, the method 200 may perform option 2. Option 2 includes finding 250 the copies with the most recent data. This may require breaking the read request into pieces because the most recent data may not exist on one copy. The method 200 may be designed to include only one of the options or both of the options.

When the read command does not specify the most recently written data, the read data is synchronized 260. In one embodiment of the synchronizing 260, the synchronizing includes completing write command entries in an OCW list associated with the read command. The OCW list may include read and write commands for instances of mirrored data. Synchronizing 260 the data is the primary distinction between a synchronized read and an unsynchronized read.

Method 200 continues with inserting 270 one or more, or all, read sub-requests into a scheduling pool. In some embodiments, the read request received 210 is divided or partitioned into Y sub-requests, where Y is the number of synchronized instances. Thereafter, method 200 includes scheduling 280 one of the read sub-requests. In some embodiments, the scheduling pool is implemented with a rotational position sorting (RPS) queue. Scheduling 280 may include scheduling sub-requests in reference to predetermined parameters. For example, in a disc drive [1300] 1100, the scheduling may be done in reference to a seek time and latency of the read sub-request. A scheduling algorithm as disclosed in U.S. Pat. No. 5,570,332 may be used to accomplish this.

Method 200 also includes removing 290 remaining sub-requests from the scheduling pool. The remaining sub-requests are the sub-requests that were not scheduled 280.

Method 200 can enhance the performance of a data storage device because the method 200 allows the data storage device to determine the instance that can be read most quickly. In a preferred embodiment of the present invention, method 200 is completely internal to the data storage device; thus the data storage device is much more capable than a conventional external controller of optimizing mirrored read performance. For example, where the data storage device is a disc drive, the disc drive uses information that is not readily available externally to the data storage device, such as the geometry of the disc drive and the current position of the read/write head, to schedule the optimal command.
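Pulling steps 210 through 290 together, the read path might be sketched as follows; this is an illustrative rendering only, and the drive object, scheduling pool, and helper names are invented for the example rather than taken from the specification:

    def handle_read(drive, lba, length, use_option_1=True):
        # Receiving 210: a read command for mirrored data.
        if not drive.ocw_list.is_synchronized(lba, length):  # determining 220
            if use_option_1:
                # Option 1 (synchronizing 260): complete the outstanding
                # copy writes so that any copy may satisfy the read.
                drive.flush_ocw_entries(lba, length)
                copies = drive.all_copies(lba, length)
            else:
                # Option 2 (finding 250): read only copies holding the most
                # recent data; this may split the read into pieces.
                copies = drive.copies_with_most_recent_data(lba, length)
        else:
            copies = drive.all_copies(lba, length)

        # Inserting 270: one read sub-request per usable copy.
        subs = [drive.make_read_sub_request(c, lba, length) for c in copies]
        drive.scheduling_pool.extend(subs)

        # Scheduling 280: pick the sub-request that can be serviced first,
        # e.g. by seek time plus rotational latency.
        chosen = min(subs, key=drive.estimated_access_time)
        for s in subs:                                       # removing 290
            if s is not chosen:
                drive.scheduling_pool.remove(s)
        return drive.execute(chosen)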

FIG. 3 is a flowchart of a method 300 to manage writing of data on a data storage device that has synchronized mirrored data, according to one embodiment of the invention. A write command may either write all instances immediately to the data storage medium or manage instances on the data storage medium having various versions of the data. If all instances are written immediately, then the average latency of an unqueued write command is three quarters of a revolution. This is often not a desirable solution because this is slower than merely performing the first write sub-request.

One solution is to defer the writing of the copies of the data to a more advantageous time. If writing copies is performed at a time when the data storage medium is idle, there is no performance degradation from writing the copies. In contrast, writing copies when the data storage medium is not idle can degrade performance of the data storage device. The most recent data may only reside in some of the instance locations or cache. Thus, read sub-requests to a location must read the most recent data. Power loss, reset conditions, and any other event that destroys memory contents could possibly prevent the cached instances from ever being written.

Method 300 includes receiving 310 a write command that specifies whether the data requested is associated with mirrored data. Thereafter, method 300 includes determining 320 whether the write command is associated with enabled or disabled caching. In some embodiments, the determining 320 yields an indication that the write command specifies disabled caching. Subsequently, the writing 330 of one or more instances of the data may be performed. The writing 330 can include inserting 332 one or more write sub-requests into a scheduling pool, scheduling 334 one or more of the write sub-requests, removing 336 remaining sub-requests from the scheduling pool, transferring 338 data associated with the write command from host to the recording media, and adding 339 remaining write sub-requests to an OCW list. To enable the OCW list to be accurate in the event of a power loss or similar condition, once the transferring 338 has started, the adding 339 of remaining write sub-requests must be completed. Thus, functions 338 and 339 must be done in an atomic, indivisible manner. In other words, if transferring 338 is started, adding 339 must be guaranteed to complete successfully.

fully.

55

When the Write command speci?es enabled caching, a caching type determination 327 as to Whether or not the caching is in a volatile medium or in a non-volatile medium.

When volatile caching is used, a method 340 is performed that synchronizes the instances of the data related to the Write

60

request. Synchronization of data is performed by transferring 341

to an OCW list is completed. Thereafter, all Write sub-re quests are inserted 344 into a scheduling pool. Write com

the Write sub -requests may be performed rather than stored in the OCW for later performance. Speci?cally, if adding to the OCW list is not possible Without creating overlapping LBA ranges for different instances, the corresponding copies can

be synchronized by ?ushing Write sub-requests from the OCW list prior to Writing any neW data. Thereafter, method

data associated With the Write command to a volatile cache and transmitting an indication of completed status to a host

(not shoWn). Next, determining 342 restrictions in reference

Next, determining 352 restrictions in reference to an OCW list may be completed. In some embodiments, some of the Write sub-requests are not added to the OCW list in order to ensure integrity of the OCW list. Also in some embodiments,

350 includes inserting 356 all possible Write sub-requests into 65

a scheduling pool. Some Write sub-requests may not be added to the OCW list to ensure OCW list integrity; therefore, these Write sub-requests must be Written to the storage medium.

Subsequently, method 350 includes scheduling 357 at least one of the write sub-requests. The write sub-request may be selected from among the plurality of write sub-requests in accordance with a rotational position-sorting (RPS) algorithm. The RPS algorithm may select the sub-request that is associated with the instance of data that will yield predetermined throughput and/or response time characteristics. At least two write sub-requests are scheduled when the determination of OCW restrictions indicates that adding entries for the sub-requests will cause OCW list entries to logically overlap with one another.

Method 350 also includes removing 358 remaining sub-requests from the scheduling pool. The remaining write sub-requests are writes that were not scheduled. The remaining sub-requests may include all of the corresponding sub-requests that would have been indicated due to the all copies designation, minus the write sub-request corresponding to the copy that was written. After the remaining sub-requests are removed 358, transferring 360 data associated with the write command from host to the recording media is completed.

To enable the OCW list to be accurate in the event of a power loss or similar condition, once the transferring 360 has started, the adding 359 of remaining write sub-requests must be completed. Thus, functions 360 and 359 must be done in an atomic, indivisible manner. In other words, if transferring 360 is started, adding 359 must be guaranteed to complete successfully.

Finally, method 350 includes removing 359 write sub-requests for the write command from the OCW list after the write sub-requests are performed. The removing 359 may include removing the OCW list entry with the data pointer that designates all copies need to be written; then adding all write sub-requests corresponding to unwritten copies to the OCW list. Remaining OCW entries for the original all copies designated entry no longer need to maintain a cache data pointer.
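Building on the OCWEntry sketch above, the nonvolatile-cache path could be rendered roughly as follows; the all-copies entry, the atomic sections, and every helper name are illustrative guesses at one possible firmware shape, not the patent's implementation:

    def write_nonvolatile_cached(drive, lba, length, host_data):
        # Transferring 351 and inserting 354 are paired atomically: once
        # the data reaches the nonvolatile cache, the all-copies entry
        # (holding a pointer to that data instead of an instance number)
        # must land in the OCW list.
        with drive.atomic_section():
            loc = drive.nv_cache.store(lba, host_data)      # transferring 351
            all_copies = OCWEntry(lba, length, instance=None,
                                  cache_location=loc)       # inserting 354
            drive.ocw_list.entries.append(all_copies)

        drive.ocw_list.check_restrictions(lba, length)      # determining 352
        subs = drive.make_write_sub_requests(lba, length)
        drive.scheduling_pool.extend(subs)                  # inserting 356
        chosen = drive.schedule_one(subs)                   # scheduling 357
        for s in subs:                                      # removing 358
            if s is not chosen:
                drive.scheduling_pool.remove(s)

        # Transferring 360 and the OCW update are again atomic: replace the
        # all-copies pointer entry with per-instance entries for the copies
        # that remain unwritten (removing/adding 359).
        with drive.atomic_section():
            drive.write_to_media([chosen])                  # transferring 360
            drive.ocw_list.entries.remove(all_copies)
            for s in subs:
                if s is not chosen:
                    drive.ocw_list.entries.append(
                        OCWEntry(lba, length, instance=s.instance))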

All write commands of synchronized data involve the use of an OCW list that represents write sub-requests to the data storage medium that have not been performed. The OCW list includes one or more entries. In one embodiment, each entry includes a representation of a logical block address and a representation of which instance of the write request is the sub-request. For example, an entry may represent the first write sub-request of two write sub-requests that were generated from a write request. Each entry may also include a representation of the length of the write-data associated with the sub-request, such as the number of bytes of data of the write sub-request. Further, each entry may also include a representation of a location of the write-data, such as cache or nonvolatile cache.

In method 300, ranges of logical block addresses (LBAs) of the OCW list entries are not allowed to logically overlap with one another. When adding to the OCW list, entries can be merged to avoid logically overlapping entries. If all write sub-requests can be added to the OCW list without creating overlapping entries, then there may be no restrictions. If only some of the write sub-requests can be added, then only non-overlapping write sub-requests are inserted into the OCW list. Allowing only non-overlapping write sub-requests requires restricting the scheduling of overlapping instance write(s). If none of the write sub-requests can be added to the OCW list while maintaining no overlapping entries, then write sub-requests must be flushed from the OCW list to allow entries to be added to the OCW list.

FIG. 4 is a flowchart of a method 400 to manage synchronized mirrored data in a data storage device according to an embodiment of the invention. Method 400 may be used when there is a power loss condition or other condition that destroys the OCW resident memory. Method 400 includes copying 410 the OCW list to nonvolatile memory. Whether or not the OCW list is copied may depend on whether the OCW list is already present in the nonvolatile memory.

Method 400 also includes copying 420 corresponding instance data to nonvolatile memory, if nonvolatile write caching is supported. In some embodiments, the copying 420 is performed using back electromotive force (EMF) generated by a rotating member, such as a disc assembly, of the data storage device. Method 400 enables a data storage device to protect OCW list entries, thus allowing for data synchronization to be completed at a later time.

FIG. 5 is a flowchart of a method 500 to manage synchronized mirrored data in a data storage device, in which a valid OCW list is present in non-volatile memory. Method 500 may be used in a power-up condition or for any other condition that requires the OCW list to be restored into operating memory. Method 500 includes restoring 510 the OCW list to operating memory and flushing 520 writes from the OCW list during opportune times. In some embodiments, the restoring 510 does not need to happen if the OCW list is run-time maintained in nonvolatile memory.
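The save/restore pair of methods 400 and 500 might be sketched like this; nv_store and the drive hooks are stand-ins invented for the example:

    def on_power_loss(drive, nv_store):
        # Copying 410: preserve the OCW list itself (the text suggests
        # flash programmed with energy from back EMF of the spinning discs).
        nv_store.save("ocw_list", drive.ocw_list.entries)
        if drive.nonvolatile_write_caching:
            # Copying 420: preserve the cached write data the entries
            # point at, so every copy can still be written later.
            nv_store.save("ocw_data", drive.cache_data_for(drive.ocw_list))

    def on_power_up(drive, nv_store):
        entries = nv_store.load("ocw_list")                # restoring 510
        if entries is not None:
            drive.ocw_list.entries = entries
            # Flushing 520: complete the deferred copy writes during
            # opportune times rather than all at once.
            drive.defer(drive.flush_ocw_at_opportune_times)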

FIG. 6 is a flowchart of a method [1800] to identify an opportune time to flush at least one write sub-request from an OCW list, according to one embodiment of the invention. An opportune time may be identified in one of many ways, such as determining 610 that the data storage device is idle. Another opportune time may be identified by determining 620 that performing a write to the data storage device from the OCW list will have an effect on the performance of other functions of the data storage device. Usually, a predetermined threshold will be set to monitor the performance effect of the write to the data storage device. Other ways to determine an opportune time to flush writes from an OCW list include determining 630 that data is being read, with frequency that is at least a predetermined threshold, from an unsynchronized location of the data storage device. Still further, an opportune time may be identified by determining 640 that the quantity of entries in the OCW list is at least equal to a predetermined quantity.
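One plausible reading of these four tests as a single predicate, with thresholds chosen arbitrarily for illustration:

    def is_opportune_to_flush(drive,
                              perf_impact_threshold=0.05,
                              read_freq_threshold=10,
                              max_ocw_entries=64) -> bool:
        if drive.is_idle():                                  # determining 610
            return True
        if drive.estimated_flush_impact() < perf_impact_threshold:  # 620
            return True
        if drive.unsynchronized_read_frequency() >= read_freq_threshold:  # 630
            return True
        if len(drive.ocw_list.entries) >= max_ocw_entries:   # determining 640
            return True
        return False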

FIG. 7 is a flowchart of a method 700 to manage synchronized mirrored data in a data storage device, according to an embodiment of the invention. Method 700 may be used to initialize a data storage device having an invalid OCW list in nonvolatile memory. Method 700 may be used as an alternative to methods 400 and 500 when there is a power loss condition or when recovering from a corrupted OCW list. Method 700 includes writing 710 a timestamp to the storage media for every sector that is written, determining 720 the most recent data, and, if the data is not synchronized, writing 730 the most recent data to the other copies. In some embodiments, the determining includes synchronizing data by scanning every copy of every sector to determine the most recent data.
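A hedged sketch of that timestamp-driven recovery scan; the per-sector copy accessors are invented for the example:

    def recover_without_ocw_list(drive):
        for lba in drive.all_lbas():
            # Determining 720: compare the timestamps written alongside
            # every copy of the sector (writing 710) to find the newest.
            copies = drive.read_all_copies(lba)
            newest = max(copies, key=lambda c: c.timestamp)
            for c in copies:
                if c.timestamp != newest.timestamp:
                    # Writing 730: propagate the most recent data.
                    drive.write_copy(lba, c.instance, newest.data,
                                     newest.timestamp)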

FIG. 8 illustrates a method 800 to add a new entry to an OCW list, according to an embodiment of the invention. Method 800 includes determining 810 whether or not an address associated with the new entry is sequential, overlapping or close to an address associated with an existing entry in the OCW list. If the address associated with the new entry is sequential, overlapping or close to an address associated with an existing entry in the outstanding-copy-write list, the new entry may be merged 820 with the existing entry in the OCW list.

A determination 830 is made if an unused entry exists in the OCW list. If an unused entry exists, a new entry is created 840 in the OCW list. If an unused entry does not exist in the OCW list, the new entry is merged 850 with an existing entry. Method 800 may include merging 860 the new entry with an existing entry that has the closest LBA range. Additionally, method 800 may also include logically removing 870 any older entries that overlap with the address range of the new entry.
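Taken together, steps 810 through 870 might look like the following sketch, which reuses the OCWEntry shape from above; the capacity attribute and the proximity threshold are assumptions:

    def add_ocw_entry(ocw, new, proximity=8):
        def near(a, b):
            # Sequential, overlapping, or close: gap between ranges is small.
            gap = max(a.lba, b.lba) - min(a.lba + a.length, b.lba + b.length)
            return gap <= proximity

        # Determining 810 / merging 820: fold into a neighboring entry.
        for e in ocw.entries:
            if e.instance == new.instance and near(e, new):
                lo = min(e.lba, new.lba)
                hi = max(e.lba + e.length, new.lba + new.length)
                e.lba, e.length = lo, hi - lo
                merged = e
                break
        else:
            if len(ocw.entries) < ocw.capacity:     # determination 830
                ocw.entries.append(new)             # creating 840
                merged = new
            else:
                # Merging 850/860: no free slot, so merge with the entry
                # whose LBA range is closest.
                merged = min(ocw.entries, key=lambda e: abs(e.lba - new.lba))
                lo = min(merged.lba, new.lba)
                hi = max(merged.lba + merged.length, new.lba + new.length)
                merged.lba, merged.length = lo, hi - lo

        # Logically removing 870: drop older entries that the merged
        # range now overlaps.
        ocw.entries = [e for e in ocw.entries
                       if e is merged
                       or not e.overlaps(merged.lba, merged.length)]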

FIG. 9 is a block diagram of an apparatus 900 to manage retrieval of data from a data storage device that has mirrored data, according to an embodiment of the invention. Apparatus 900 includes a receiver 902 of a read request 909. The read request 909 may identify data to be read. The read request 909 may be received from a source other than the data storage device, such as a host computer.

The read request 909 may be associated with mirrored data 920. The mirrored data 920 is stored on a data storage medium 930 in the data storage device. The mirrored data 920 includes an initial instance of data 923, and one or more subsequent instances 926 that are substantially duplicates of the initial instance of the data 923.

Apparatus 900 also includes a synchronization determiner 925 for determining whether the read request 909 is associated with synchronized or unsynchronized data. In some embodiments, the determination is performed from data obtained from an OCW list 946. The OCW list 946 may be stored on the storage medium 930. The result (not shown) of the determination may be stored as a bit flag (not shown).

When the synchronization determiner 925 indicates that the data is unsynchronized, control is passed to a synchronizer 944. In one embodiment of the synchronizer 944, the synchronizer 944 completes write request entries 948 in an OCW list 946 that are associated with the unsynchronized read request 942.

Apparatus 900 includes an inserter 950 of all read sub-requests 951 into a scheduling pool 956. In one embodiment, each read request 909 is divided or partitioned into N sub-requests for a drive with N instances. For example, when the data is divided into two identical mirrored instances, as in the initial instance 923 and the subsequent instance 926, the read command is divided or partitioned into two sub-requests such as sub-request A 952 and sub-request B 954. Furthermore, each sub-request has an indication of the unique address at which the data is stored on the data storage medium. In some embodiments where the data storage device is a disc drive, each address includes the rotational position of the data because the N instances may be rotationally staggered on the data storage medium.

Apparatus 900 also includes a scheduler 958 of all of the read sub-requests 951. The scheduler 958 determines the schedule or priority of the sub-requests 951 in the scheduling pool 956. A sorter 960 in the scheduler 958 uses a rotational position sorting algorithm or an alternative algorithm to select the sub-request 962 in the scheduling pool 956 that will yield the best throughput and/or response time characteristics that are desired.

Additionally, apparatus 900 includes a remover 964. All sub-requests that are not selected for reading are removed from the scheduling pool 956 by the remover 964. In some embodiments, the remover 964 removes all associated sub-requests that remain in the scheduling pool 956 after the selected sub-request 962 is scheduled by the scheduler 958. In some embodiments, the remover 964 removes all associated sub-requests that remain in the scheduling pool 956 after the selected sub-request 962 is performed. For example, if the scheduler 958 selected sub-request B from among the two sub-requests, sub-request A 952 and sub-request B 954, in scheduling pool 956, then after sub-request B 954 is performed, the remover 964 removes sub-request A 952 from the scheduling pool 956.

When the determiner 925 indicates that the data is synchronized, control is passed to the cache 980, which is prompted to send a synchronized read request 940 to the inserter 950.

As an alternative to apparatus 900, when the synchronization determiner 925 yields an indication that the read request 909 specifies unsynchronized data, the most recent data is read. In apparatus 900, the data storage device 930 fulfills the read request 909 from valid cache data in a cache 980, when the valid cache data exists, because retrieving data from a cache 980 is typically the fastest manner of retrieving the data. This is done irrespective of whether or not the data is synchronized on the data storage medium. In other words, the read command 909 will be fulfilled or satisfied by valid cache data in the cache 980 if the data of the read command 909 exists, without passing control to the synchronization determiner 925.

Apparatus 900 provides the ability to manage mirrored and/or synchronized data 920 within the data storage device. Performance of read operations is improved in apparatus 900 by selecting the most effective instance to read, whether the instance that is faster to retrieve is a cached instance or on the data storage medium 930. A data storage device is much more capable than a conventional external controller of managing and optimizing mirrored read performance. In some embodiments where the data storage device is a disc drive, the disc drive uses information that is not readily available externally to the data storage device, such as the geometry of the disc drive and current position of a read/write head, to schedule the optimal request.
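The rotational selection performed by sorter 960 (and by the write-side sorter 1062 described below) can be illustrated with a simple angular model; real firmware would also weigh seek time, and the angle attribute here is an assumption:

    def rps_select(sub_requests, head_angle_deg):
        # Pick the sub-request whose target angle comes up next under the
        # head; copies of mirrored data are rotationally staggered.
        def wait_degrees(sub):
            return (sub.angle_deg - head_angle_deg) % 360.0
        return min(sub_requests, key=wait_degrees)

    # For two copies 180 degrees opposed, whichever copy is nearer ahead
    # of the head is chosen, halving the average rotational wait.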

FIG. 10 is a block diagram of an apparatus 1000 to manage writing of data on a data storage device that has synchronized mirrored data, according to an embodiment of the invention. Write requests may occur to unsynchronized locations on the media. In this case, the write request is effectively divided into N sub-requests for a drive with N copies of the synchronized data. One of the functions of apparatus 1000 is that outstanding writes to the same copy locations may be discarded. The data storage device may accomplish this by write caching with this functionality. Write caching allows the drive to return status prior to writing a first copy. Alternatively, this functionality may also be incorporated into nonvolatile write caching functionality.

Apparatus 1000 includes a receiver 1010 of a write request 1015. In some embodiments, the write request 1015 is associated with mirrored data. Also included is a caching type determiner 1020 for determining whether the write request 1015 is associated with enabled or disabled caching.

In some embodiments, the determiner 1020 yields an indication that the write request 1015 is associated with enabled caching. Where caching is enabled, control is passed to a volatile cache determiner 1030 for deducing whether the write request 1015 is associated with volatile or nonvolatile caching. Where volatile caching is enabled, control is passed to a synchronizer 1040 that guarantees and/or ensures synchronization of data.

Apparatus 1000 includes an inserter 1045 that inserts write sub-requests into a scheduling pool 1065. At least one write sub-request 1051 is generated by the inserter 1045 from the write request 1015. Examples of write sub-requests 1051 are write sub-request A 1050 and write sub-request B 1052. One sub-request is generated for each instance of the write data 1055 to be written on the data storage medium. Each of the plurality of write sub-requests is associated with a unique address on the data storage medium. In a disc drive, each address may have a different rotational position than each of the other write sub-requests.

Writing each of the sub-requests 1051 with different rotational positions allows subsequent read requests for the mirrored data to be serviced quickly by selecting the instance of the data that has the closest rotational position.

Apparatus 1000 also includes a scheduler 1060 that determines the schedule of the sub-requests in the scheduling pool 1065. The scheduler 1060 schedules a selected sub-request 1085. In some embodiments, scheduler 1060 includes a sorter 1062 that uses a rotational position sorting algorithm or an alternative algorithm to select a sub-request 1085. The selected sub-request 1085 is usually selected to yield the best throughput and/or response time characteristics that are desired. In some embodiments, the desired characteristic is performance, wherein the sub-request of data that is closest to the rotational position of the disc head is selected for writing. The one or more sub-requests that are not the selected sub-request 1085 are the remaining sub-requests 1070.

A writer 1090 is operably coupled to the scheduling pool 1065. The writer 1090 performs the selected sub-request 1085 on a data storage medium 1095. The data 1055 associated with the selected sub-request 1085 in the cache 1061 is transferred by the writer 1090 to the data storage medium 1095. All sub-requests that are not selected for writing may be removed from the scheduling pool 1065. In some embodiments, all associated sub-requests are removed that remain in the scheduling pool 1065 after the selected sub-request 1085 is scheduled by the scheduler 1060. In some embodiments, all associated sub-requests are removed that remain in the scheduling pool 1065 after the selected sub-request 1085 is performed. For example, if the scheduler 1060 selected sub-request B from among the two sub-requests, sub-request A 1050 and sub-request B 1052, in scheduling pool 1065, then after sub-request B 1052 is performed, sub-request A 1050 is removed from the scheduling pool 1065. The scheduling pool 1065 is operably coupled to the inserter 1045 and the scheduler 1060. The sub-requests that are not selected are saved in the OCW list 1082 so that the instances of data in the cache [1060] 1061 that are associated with the remaining sub-requests 1070 can be written to the data storage medium later.

In some embodiments, the determiner 1030 may yield an indication that the write request 1015 is associated with nonvolatile caching. Where nonvolatile caching is enabled, control passes to a synchronizer [1050] 1040 that ensures synchronization of data. The adder 1080 inserts a designated entry that lists all copies in the OCW list.

When the determining 1020 yields an indication, such as a flag, that the write request 1015 is associated with disabled caching, control is passed to the inserter 1045.

The OCW list 1082 is a data structure, such as a table, that indicates write sub-requests that have not been performed to a data storage medium. The OCW list includes one or more entries. The data storage device must always be aware of all remaining outstanding writes to copies to guarantee that all copies eventually contain the same data. All outstanding copy writes are recorded into the OCW (outstanding copy write) list. The necessary information needed to complete the outstanding writes is included in the OCW list. Each list entry must include at least the LBA or PBA, length, and copy indicator. A copy indicator referring to the most recent data may also need to be saved if more than two copies exist. If write caching is enabled and no copies have been written, then the cache memory location of the write data is also saved. Non-write caching implementations may also save the cache memory location of write data. Non-write caching implementations may retain some or all of the write data in cache to improve the performance of writing other copies. The length of the list is limited by the amount of volatile memory (if the list is initially stored in volatile memory) and nonvolatile memory.

Power loss, hard reset conditions, and events that destroy volatile memory contents require that the drive save information on all outstanding writes, listed in the OCW list, into a nonvolatile memory. The OCW list must be saved so the drive can synchronize the copies at some later point in time. Nonvolatile memory can take the form of flash memory that is programmed upon power loss using back EMF or batteries, battery-backed SRAM, or other nonvolatile technologies. For non-volatile caching, every outstanding internal write is recorded into memory, possibly volatile, at the point status is sent to the host for the write. Outstanding writes that are logged into volatile memory are transferred into nonvolatile memory upon power loss or hard reset or any other condition that would destroy volatile memory. Corresponding write data may also have to be transferred to nonvolatile memory if nonvolatile write caching is enabled and no copies have been written to disc. The drive is still cognizant of all outstanding writes upon restoration of power or recovery from reset or other condition by reading the OCW list from nonvolatile memory.

FIG. 11 is an exploded view of one embodiment of the present invention, this embodiment showing a disc drive 1100. The disc drive 1100 is one example of a data storage device; other examples include compact disc (CDROM) devices, tape cartridge devices, and digital versatile disc or digital video disc (DVD) devices. FIG. 11 provides a top [plan] plane view of a disc drive block data storage device 1100. The disc drive 1100 includes a sealed housing 1101 formed by a rigid base deck 1102 and a top cover [104] 1104 (shown in partial cutaway).

Mechanical components of the disc drive 1100 are supported within the housing 1101, including a spindle motor 1106 which rotates a number of recording discs 1108 at a constant high speed, and an actuator assembly 1110 which supports a corresponding number of data transducing heads 1112 adjacent the discs 1108. The actuator assembly is rotated about an actuator axis through application of current to a coil 1114 of a voice coil motor (VCM) 1116.

FIG. 12 provides a functional block diagram for the disc drive 1100. A hardware/firmware based interface circuit 1124 communicates with a host device (such as a personal computer, not shown) and directs overall disc drive operation. The interface circuit 1124 includes a programmable controller (processor) 1126 with associated memory 1128, a buffer 1130, an error correction code (ECC) block 1132, a sequencer 1134 and an input/output (I/O) control block 1136.

The buffer 1130 temporarily stores user data during read and write operations, and includes a command queue (CQ) 1131 where multiple pending access operations are temporarily stored pending execution. The ECC block 1132 applies on-the-fly error detection and correction to retrieved data. The sequencer 1134 asserts read and write gates to direct the reading and writing of data. The I/O block 1136 serves as an interface with the host device.

FIG. 12 further shows the disc drive 1100 to include a read/write (R/W) channel 1138 which encodes and serializes data during write operations and reconstructs user data from the discs 1108 during read operations. A preamplifier/driver circuit (preamp) 1140 applies write currents to the heads 1112 and provides pre-amplification of readback signals.

US RE43,032 E 14

13

most recent data; and Writing 730 the most recent data to the

A servo control circuit 1142 uses servo data to provide the

appropriate current to the coil 1114 to position the heads 1112

other instances, if the data is not synchronized. A data storage device is also provided including: at least prises a programmable ARM processor 1144 (Advanced one data storage medium; and a controller, communicatively Reduced-Instruction-Set-Computer (RISC) Machine). The 5 coupled to the data storage medium. The controller including as required. The servo control circuit 1142 preferably com

controller 1126 communicates With the ARM 1144 to move the heads 1112 to the desired locations on the discs 1108

a data structure suitable for representing at least one Write

sub -request of mirrored data on the data storage medium that has not been performed to the data storage medium and a synchronizer of the mirrored data on the data storage medium. A further embodiment of the data storage device including a representation of a logical buffer address, if non

during execution of the various pending access commands in the command queue 1131 in turn. In summary, a method is provided for receiving 210 a read command associated With mirrored data of a data storage medium in a data storage device and determining 220 Whether the read command is associated With synchronized or unsyn

chronized data. Another embodiment may include, inserting

volatile caching is being used; a representation of the instance of the Write request; a representation of the length of Write data associated With the sub-request; and a representation of

270 at least one read sub-request from the read command into a scheduling pool; scheduling 280 one of the read sub-re

a location of the Write-data. It is to be understood that even though numerous charac

quests; and removing 290 remaining sub-requests from the

teristics and advantages of various embodiments of the present invention have been set forth in the foregoing descrip tion, together With details of the structure and function of various embodiments of the invention, this disclosure is illus trative only, and changes may be made in detail, especially in matters of structure and arrangement of parts Within the prin ciples of the present invention to the full extent indicated by the broad general meaning of the terms in Which the appended claims are expressed. For example, the particular elements may vary depending on the particular application for the data synchronizing While maintaining substantially the same functionality Without departing from the scope and spirit of

scheduling pool. Another embodiment is a method comprising the steps of receiving 31 0 a Write command associated With mirrored data of a data storage medium in a data storage device; and deter mining 320 Whether the Write command is associated With enabled or disabled caching.

20

Also described is a method to add a neW entry to an out

standing-copy-Write list in a data storage device, the method including merging the neW entry With the existing entry in the outstanding-copy-Write list, if the address associated With the

25

neW entry is sequential, overlapping or close to an address

associated With an existing entry in the outstanding-copy Write list; and creating a neW entry in the outstanding-copy Write list, if an unused entry exists in the outstanding-copy Write list and if the address associated With the neW entry is

Also provided is a method to manage a destructive condition associated with an outstanding-copy-write resident list in volatile memory in a data storage device, including copying 410 the outstanding-copy-write list to nonvolatile memory; and copying 420 corresponding instance data to nonvolatile memory.
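For concreteness, a sketch of steps 410 and 420 appears below, assuming an imminent power loss as the destructive condition. The nonvolatile-store interface and the in-memory list layout are hypothetical stand-ins.

    #include <stdio.h>
    #include <string.h>

    #define LIST_SLOTS 8

    struct ocw_entry { unsigned start, len; int used; };

    static struct ocw_entry ocw_list[LIST_SLOTS];
    static unsigned char nonvolatile[4096];  /* stand-in NV region */

    /* Stub: stage a buffer into the nonvolatile region. */
    static void nv_store(unsigned off, const void *src, unsigned n)
    {
        memcpy(nonvolatile + off, src, n);
    }

    /* Preserve the list (410), then the cached instance data each
     * entry still refers to (420). */
    static void on_destructive_condition(void)
    {
        nv_store(0, ocw_list, sizeof ocw_list);              /* 410 */
        for (int i = 0; i < LIST_SLOTS; i++)
            if (ocw_list[i].used)
                printf("staging %u blocks at %u to NV\n",    /* 420 */
                       ocw_list[i].len, ocw_list[i].start);
    }

    int main(void)
    {
        ocw_list[0] = (struct ocw_entry){ 100, 16, 1 };
        on_destructive_condition();
        return 0;
    }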

Another embodiment includes a method to initialize a data storage device with a valid outstanding-copy-write list in nonvolatile memory associated with synchronized mirrored data in the data storage device, by restoring 510 the outstanding-copy-write list to operating memory and performing 520 writes from the outstanding-copy-write list during opportune times.

In yet another embodiment of the present invention, a method to perform entries in an outstanding-copy-write list of a data storage device includes determining 610 that the data storage device is idle. In a further embodiment, the method includes determining 620 that performing a write to the data storage device from the outstanding-copy-write list will have an effect on the performance of other functions of the data storage device that is less than a predetermined threshold. In yet another embodiment, the method includes determining 630 that data is being read with frequency at least equal to a predetermined threshold from an unsynchronized location of the data storage device. The method may also include determining 640 that the quantity of entries in the outstanding-copy-write list is at least equal to a predetermined quantity.
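The four triggers 610, 620, 630 and 640 can be read as a single flush-policy predicate, sketched below; every threshold value is an illustrative assumption, since the disclosure leaves them as predetermined parameters.

    #include <stdio.h>

    struct drive_state {
        int      idle;              /* 610: no host activity            */
        unsigned impact_pct;        /* 620: predicted cost to other work */
        unsigned unsync_read_rate;  /* 630: reads/s at unsynced LBAs    */
        unsigned ocw_entries;       /* 640: current list occupancy      */
    };

    static int should_flush_ocw(const struct drive_state *s)
    {
        return s->idle                      /* 610 */
            || s->impact_pct       <  5     /* 620 */
            || s->unsync_read_rate >= 100   /* 630 */
            || s->ocw_entries      >= 32;   /* 640 */
    }

    int main(void)
    {
        struct drive_state s = { 0, 40, 150, 3 };  /* busy, but hot reads */
        printf("flush now? %s\n", should_flush_ocw(&s) ? "yes" : "no");
        return 0;
    }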

Another embodiment of the present invention includes a method to initialize a data storage device having an invalid outstanding-copy-write list in nonvolatile memory that is associated with synchronized mirrored data in the data storage device, including writing 710 a timestamp to the recording media for every sector that is written; determining 720 the most recently written instance of the data; and synchronizing 730 the other instances, if the data is not synchronized.
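A sketch of that timestamp-based recovery for a single sector follows; the stub tables, the two-copy assumption and the identifiers are hypothetical.

    #include <stdio.h>

    #define NCOPIES 2

    /* Stand-in per-copy timestamps for one sector, stubbing the
     * on-media timestamps written in step 710. */
    static unsigned long stamp[NCOPIES] = { 1700000000UL, 1700000042UL };

    static void copy_instance(unsigned lba, int from, int to)  /* stub */
    {
        printf("LBA %u: copy instance %d over instance %d\n", lba, from, to);
    }

    /* Pick the newest instance (720) and overwrite the others (730). */
    static void resync_sector(unsigned lba)
    {
        int newest = 0;
        for (int c = 1; c < NCOPIES; c++)
            if (stamp[c] > stamp[newest])
                newest = c;
        for (int c = 0; c < NCOPIES; c++)
            if (c != newest)
                copy_instance(lba, newest, c);
    }

    int main(void)
    {
        resync_sector(100);
        return 0;
    }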

A data storage device is also provided including: at least one data storage medium; and a controller, communicatively coupled to the data storage medium. The controller including a data structure suitable for representing at least one write sub-request of mirrored data on the data storage medium that has not been performed to the data storage medium and a synchronizer of the mirrored data on the data storage medium. A further embodiment of the data storage device including a representation of a logical buffer address, if nonvolatile caching is being used; a representation of the instance of the write request; a representation of the length of write data associated with the sub-request; and a representation of a location of the write-data.
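One possible in-memory layout for such a write sub-request representation is sketched below; every field name is an assumption chosen only to mirror the four representations enumerated above.

    /* Hypothetical layout of a deferred copy-write sub-request. */
    struct copy_write_subreq {
        unsigned buf_addr;  /* logical buffer address (nonvolatile cache) */
        unsigned instance;  /* which mirrored instance this write targets */
        unsigned length;    /* length of the associated write data        */
        unsigned lba;       /* location of the write data on the medium   */
    };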

It is to be understood that even though numerous characteristics and advantages of various embodiments of the present invention have been set forth in the foregoing description, together with details of the structure and function of various embodiments of the invention, this disclosure is illustrative only, and changes may be made in detail, especially in matters of structure and arrangement of parts within the principles of the present invention to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed. For example, the particular elements may vary depending on the particular application for the data synchronizing while maintaining substantially the same functionality without departing from the scope and spirit of the present invention. In addition, although the preferred embodiment described herein is directed to a data synchronization method for a disc drive, it will be appreciated by those skilled in the art that the teachings of the present invention can be applied to other systems, like CD-Rs, CD-RWs, tape or any data storage system, without departing from the scope and spirit of the present invention.

What is claimed is:
1. A method comprising the steps of:
receiving a command to retrieve data associated with mirrored data of a data storage medium in a data storage device;
determining whether the retrieve command is associated with synchronized or unsynchronized data;
inserting at least one retrieve sub-request from the retrieve command into a scheduling pool;
scheduling one of the retrieve sub-requests; and
removing remaining sub-requests from the scheduling pool.
2. The method of claim 1, and further comprising if the retrieve command is associated with unsynchronized data: synchronizing data.
3. The method of claim 2, wherein the synchronizing data further comprises the steps of: completing stores of entries in a data structure indicating which store requests have not been completed that are related to the command.
4. The method of claim 1, if the retrieve command is associated with unsynchronized data: retrieving most recently stored data.

5. A data storage device comprising:
a data storage medium comprising mirrored data;
an input for receiving a retrieve command associated with the mirrored data; and
a controller, which determines whether the retrieve command is associated with synchronized or unsynchronized data, inserts at least one retrieve sub-request from the retrieve command into a scheduling pool, schedules one of the retrieve sub-requests, and removes remaining sub-requests from the scheduling pool.
6. The data storage device of claim 5 and further comprising a synchronizer, which synchronizes the data if the retrieve command is associated with unsynchronized data.
7. The data storage device of claim 6, wherein the controller is further adapted to complete stores of entries in a data structure indicating which store requests have not been completed that are related to the command.
8. The data storage device of claim 5, the controller is adapted to retrieve most recently stored data if the retrieve command is associated with unsynchronized data.
9. A circuit-based method comprising:
in response to a command that prompts execution of circuit-based logic to retrieve data associated with mirrored data of a data storage medium in a data storage device, and to an indication of whether the retrieve command is associated with synchronized or unsynchronized data, using the circuit-based logic to facilitate
inserting at least one retrieve sub-request from the retrieve command into a scheduling pool;
scheduling one of the retrieve sub-requests; and
removing remaining sub-requests from the scheduling pool.

10. The method of claim 9, and further comprising if the retrieve command is associated with unsynchronized data: synchronizing data.
11. The method of claim 10, wherein the synchronizing data further comprises the steps of: completing stores of entries in a data structure indicating which store requests have not been completed that are related to the command.
12. The method of claim 11, if the retrieve command is associated with unsynchronized data: retrieving most recently stored data.
13. The method of claim 9, and further comprising if the retrieve command is associated with unsynchronized data, then synchronizing data, and therein managing a destructive condition associated with a data storage device that is configured to store the mirrored data.
14. The method of claim 9, and further comprising: managing a destructive condition associated with a data storage device that is configured to store the mirrored data.
