Commit Graph

165 Commits

Author SHA1 Message Date
David Gibson
f43e35255c Virtual hash page table handling on pSeries machine
On pSeries logical partitions, excepting the old POWER4-style full system
partitions, the guest does not have direct access to the hardware page
table.  Instead, the pagetable exists in hypervisor memory, and the guest
must manipulate it with hypercalls.

However, our current pSeries emulation more closely resembles the old
style where the guest must set up and handle the pagetables itself.  This
patch converts it to act like a modern partition.

This involves two things: first, the hash translation path is modified to
permit the hash table to be stored externally to the emulated machine's
RAM.  The pSeries machine init code configures the CPUs to use this mode.

Secondly, we emulate the PAPR hypercalls for manipulating the external
hashed page table.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-04-01 18:34:55 +02:00
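
A minimal sketch of the kind of hypercall-side PTE insertion described
above, with the hash table held in emulator memory rather than guest RAM
(the names spapr_hpte and h_enter_sketch and the bit layout are
illustrative, not the actual qemu/PAPR definitions):

    #include <stdint.h>

    #define H_SUCCESS     0
    #define H_PTEG_FULL  (-4)
    #define HPTE_V_VALID  (1ULL << 0)   /* illustrative valid bit */

    typedef struct {
        uint64_t v;   /* "VSID/flags" word of the PTE */
        uint64_t r;   /* "real page/permissions" word */
    } spapr_hpte;

    static spapr_hpte *htab;        /* allocated by the machine init code */
    static uint64_t    htab_mask;   /* number of PTE groups minus one     */

    /* Guest asks the "hypervisor" to insert a PTE into a given group. */
    static long h_enter_sketch(uint64_t ptex, uint64_t pteh, uint64_t ptel)
    {
        spapr_hpte *pteg = &htab[(ptex & htab_mask) * 8];
        int slot;

        for (slot = 0; slot < 8; slot++) {
            if (!(pteg[slot].v & HPTE_V_VALID)) {
                pteg[slot].r = ptel;
                pteg[slot].v = pteh | HPTE_V_VALID;
                return H_SUCCESS;    /* real handler also returns the slot */
            }
        }
        return H_PTEG_FULL;
    }
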
David Gibson
9d52e9079d Add POWER7 support for ppc
This adds emulation support for the recent POWER7 cpu to qemu.  It's far
from perfect - it's missing a number of POWER7 features so far, including
any support for VSX or decimal floating point instructions.  However, it's
close enough to boot a kernel with the POWER7 PVR.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-04-01 18:34:55 +02:00
David Gibson
cdaee00633 Support 1T segments on ppc
Traditionally, the "segments" used for the two-stage translation on
powerpc MMUs were 256MB in size.  This was the only option on all hash
page table based 32-bit powerpc cpus, and on the earlier 64-bit hash page
table based cpus.  However, newer 64-bit cpus also permit 1TB segments.

This patch adds support for 1TB segment translation to the qemu code.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-04-01 18:34:55 +02:00
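
For reference, the main difference in the translation path is how the hash
is formed from the VSID and the page index within the segment; a rough
sketch (the constants and the extra VSID folding follow the architecture
description, but are simplified here):

    #include <stdint.h>

    #define SEGMENT_SHIFT_256M 28
    #define SEGMENT_SHIFT_1T   40

    static uint64_t hpt_hash_sketch(uint64_t vsid, uint64_t eaddr,
                                    int page_shift, int segment_shift)
    {
        /* page index within the segment */
        uint64_t pgidx = (eaddr & ((1ULL << segment_shift) - 1)) >> page_shift;

        if (segment_shift == SEGMENT_SHIFT_1T) {
            /* 1TB segments fold extra VSID bits into the hash */
            return vsid ^ (vsid << 25) ^ pgidx;
        }
        return vsid ^ pgidx;   /* caller masks with the hash table size */
    }
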
David Gibson
256cebe5d1 Better factor the ppc hash translation path
Currently the path handling hash page table translation in get_segment()
has a mix of common and 32- or 64-bit specific code.  However, the
division is not done terribly well, which results in a lot of messy code
flipping between common and divided paths.

This patch improves the organization, consolidating several divided paths
into one.  This in turn allows simplification of some code in
get_segment(), removing a number of ugly interim variables.

This new factorization will also make it easier to add support for the 1T
segments added in newer CPUs.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-04-01 18:34:55 +02:00
David Gibson
fda6a0ecc6 Use "hash" more consistently in ppc mmu code
Currently, get_segment() has a variable called hash.  However it doesn't
(quite) get the hash value for the ppc hashed page table.  Instead it
gets the hash shifted - effectively the offset of the hash bucket within
the hash page table.

As well as being different from the normal use of plain "hash" in the
architecture documentation, this usage necessitates some awkward 32/64
dependent masks and shifts which clutter up the path in get_segment().

This patch alters the code to use raw hash values through get_segment()
including storing raw hashes instead of pte group offsets in the ctx
structure.  This cleans up the path noticeably.

This does necessitate 32/64 dependent shifts when the hash values are
taken out of the ctx structure and used, but those paths already have
32/64 bit variants so this is less awkward than it was in get_segment().

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-04-01 18:34:55 +02:00
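
The distinction being cleaned up is roughly the following (a schematic,
not the actual qemu variable names):

    #include <stdint.h>

    #define PTEG_SIZE_64 128   /* bytes per PTE group, 64-bit hash MMU */
    #define PTEG_SIZE_32  64   /* bytes per PTE group, 32-bit hash MMU */

    /* Before: the "hash" carried around was really a byte offset. */
    static uint64_t pteg_offset_old(uint64_t hash, uint64_t htab_mask)
    {
        return (hash & htab_mask) * PTEG_SIZE_64;
    }

    /* After: the raw hash is stored in the context, and the 32/64-bit
     * specific scaling happens only where the group is actually fetched. */
    static uint64_t pteg_addr_new(uint64_t htab_base, uint64_t htab_mask,
                                  uint64_t raw_hash, int is_64bit)
    {
        uint64_t pteg_size = is_64bit ? PTEG_SIZE_64 : PTEG_SIZE_32;

        return htab_base + (raw_hash & htab_mask) * pteg_size;
    }
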
David Gibson
bb593904c1 Parse SDR1 on mtspr instead of at translate time
On ppc machines with hash table MMUs, the special purpose register SDR1
contains both the base address and the encoded size of the hashed page tables.

At present, we interpret the SDR1 value within the address translation
path.  But because the encodings of the size differ between 32-bit and
64-bit, this makes for a confusing branch on the MMU type with a bunch
of curly shifts and masks in the middle of the translate path.

This patch cleans things up by moving the interpretation on SDR1 into the
helper function handling the write to the register.  This leaves a simple
pre-sanitized base address and mask for the hash table in the CPUState
structure which is easier to work with in the translation path.

This makes the translation path more readable.  It addresses the FIXME
comment currently in the mtsdr1 helper by validating the SDR1 value during
interpretation.  Finally, it opens the way for emulating a pSeries-style
partition where the hash table used for translation is not mapped into
the guest's RAM.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-04-01 18:34:55 +02:00
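
A rough illustration of the idea for the 64-bit layout, where the low bits
of SDR1 hold an encoded hash table size (field widths here are
illustrative; the 32-bit HTABORG/HTABMASK layout differs):

    #include <stdint.h>

    typedef struct {
        uint64_t htab_base;   /* pre-sanitized hash table base address */
        uint64_t htab_mask;   /* pre-computed mask over the hash value */
    } mmu_sdr1_sketch;

    static void mtsdr1_sketch(mmu_sdr1_sketch *mmu, uint64_t value)
    {
        uint64_t htabsize = value & 0x1f;            /* encoded size  */

        /* Any invalid size would be rejected here, at write time, rather
         * than being re-checked on every address translation. */
        mmu->htab_base = value & ~0x3ffffULL;        /* aligned base  */
        mmu->htab_mask = (1ULL << (htabsize + 11)) - 1;
    }
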
David Gibson
8500e3a912 Clean up slb_lookup() function
The slb_lookup() function, used in the ppc translation path, returns a
number of slb entry fields in reference parameters.  However, only one
of the two callers of slb_lookup() actually wants this information.

This patch, therefore, makes slb_lookup() return a simple pointer to the
located SLB entry (or NULL), and the caller which needs the fields can
extract them itself.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-04-01 18:34:55 +02:00
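
In other words, the interface changes along these lines (simplified; the
entry layout and bit positions are illustrative):

    #include <stdint.h>
    #include <stddef.h>

    #define SLB_ENTRIES    64
    #define SLB_ESID_V     (1ULL << 27)            /* valid bit         */
    #define SLB_ESID_ESID  0xfffffffff0000000ULL   /* effective seg. id */

    typedef struct {
        uint64_t esid;
        uint64_t vsid;
    } slb_entry_sketch;

    /* Old style: return a status and fill several out-parameters.
     * New style: hand back a pointer to the matching entry, or NULL. */
    static slb_entry_sketch *slb_lookup_sketch(slb_entry_sketch *slb,
                                               uint64_t eaddr)
    {
        uint64_t match = (eaddr & SLB_ESID_ESID) | SLB_ESID_V;
        int i;

        for (i = 0; i < SLB_ENTRIES; i++) {
            if (slb[i].esid == match) {
                return &slb[i];    /* caller extracts vsid, flags, ... */
            }
        }
        return NULL;
    }
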
David Gibson
efdef95fee Implement PowerPC slbmfee and slbmfev instructions
For a 64-bit PowerPC target, qemu correctly implements translation
through the segment lookaside buffer.  Likewise it supports the
slbmte instruction which is used to load entries into the SLB.

However, it does not emulate the slbmfee and slbmfev instructions
which read SLB entries back into registers.  Because these are
only occasionally used in guests (mostly for debugging) we get
away with it.

However, given the recent SLB cleanups, it becomes quite easy to
implement these, and thereby allow, amongst other things, a guest
Linux to use xmon's command to dump the SLB.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-04-01 18:34:54 +02:00
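
With each entry stored as an ESID word and a VSID word (see the SLB
cleanup further down), the two instructions reduce to reading one half or
the other back into a register; roughly (helper names hypothetical):

    #include <stdint.h>

    typedef struct {
        uint64_t esid;
        uint64_t vsid;
    } slb_entry_sketch;

    /* slbmfee rt,rb : read the ESID half of SLB entry rb into rt */
    static uint64_t helper_slbmfee_sketch(const slb_entry_sketch *slb,
                                          uint64_t rb)
    {
        return slb[rb & 0x3f].esid;
    }

    /* slbmfev rt,rb : read the VSID half of SLB entry rb into rt */
    static uint64_t helper_slbmfev_sketch(const slb_entry_sketch *slb,
                                          uint64_t rb)
    {
        return slb[rb & 0x3f].vsid;
    }
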
David Gibson
d569956eaf Add a hook to allow hypercalls to be emulated on PowerPC
PowerPC and POWER chips since the POWER4 and 970 have a special
hypervisor mode, and a corresponding form of the system call
instruction which traps to the hypervisor.

qemu currently has stub implementations of hypervisor mode.  That
is, the outline is there to allow qemu to run a PowerPC hypervisor
under emulation.  There are a number of details missing so this
won't actually work at present, but the idea is there.

What there is no provision for at all is for qemu to instead emulate
the hypervisor itself.  That is, to have hypercalls trap into qemu
and have their results emulated by qemu, rather than running
hypervisor code within the emulated system.

Hypervisor-hardware-aware KVM implementations are in the works, and
it would be useful for debugging and development to also allow
full emulation of the same para-virtualized guests as such a KVM.

Therefore, this patch adds a hook which will allow a machine to
set up emulation of hypervisor calls.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-04-01 18:34:54 +02:00
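
Conceptually the hook is just a function pointer the machine installs,
consulted when the system call instruction traps with the hypercall level
set; a bare-bones sketch (names are illustrative, not the actual qemu
interface):

    typedef struct CPUSketch CPUSketch;     /* opaque emulated-CPU state */

    typedef void (*hypercall_fn)(CPUSketch *cpu);

    static hypercall_fn vhyp_hook;          /* installed by machine init */

    void register_hypercall_sketch(hypercall_fn fn)
    {
        vhyp_hook = fn;
    }

    /* Called from the system-call exception path.  LEV=1 selects the
     * hypervisor form of the sc instruction. */
    int maybe_emulate_hypercall_sketch(CPUSketch *cpu, int lev)
    {
        if (lev == 1 && vhyp_hook) {
            vhyp_hook(cpu);     /* emulate the hypercall inside qemu   */
            return 1;           /* handled: no interrupt for the guest */
        }
        return 0;               /* deliver the normal system call trap */
    }
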
David Gibson
81762d6dd0 Clean up PowerPC SLB handling code
Currently the SLB information when emulating a PowerPC 970 is
stored in a structure with the unhelpfully named fields 'tmp'
and 'tmp64'.  While the layout of these fields does match the
description of the SLB in the architecture document, it is not
convenient either for looking up the SLB or for emulating the
slbmte instruction.

This patch, therefore, reorganizes the SLB entry structure to be
divided into the "ESID related" and "VSID related" fields as
they are divided in the instructions accessing the SLB.

In addition to making the code smaller and more readable, this will
make it easier to implement support for the 1TB segments used in more
recent PowerPC chips.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-04-01 18:34:54 +02:00
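
In outline, the reorganization looks like this (a sketch of the
before/after shape of the structure, not the exact definitions):

    #include <stdint.h>

    /* Before: fields named for their raw layout in the architecture doc. */
    typedef struct {
        uint64_t tmp64;
        uint32_t tmp;
    } ppc_slb_old_sketch;

    /* After: split the same way the slbmte operands are split. */
    typedef struct {
        uint64_t esid;   /* "ESID related" word (RB operand of slbmte) */
        uint64_t vsid;   /* "VSID related" word (RS operand of slbmte) */
    } ppc_slb_new_sketch;
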
Blue Swirl
ae0bfb79aa ppc: remove video.x
Only Mac-on-Linux stuff used video.x, OpenBIOS does not need it.

Remove video.x MoL hacks.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-10-13 18:38:07 +00:00
John Clark
999fa40e43 ppc: Minor 40x MMU fixes
* Fix swapped reading of tlblo/hi.
* Fix tlb exec permissions.

Signed-off-by: John Clark <clarkjc@runbox.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
2010-10-05 18:38:55 +02:00
Edgar E. Iglesias
a586e548fb powerpc: Improve emulation of the BookE MMU
Improve the emulation of the BookE MMU to be able to boot linux
on virtex5 boards.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
2010-09-24 22:01:20 +02:00
Alexander Graf
41557447d3 PPC: Redesign interrupt trigger path
According to the Book3S spec, the interrupt context starts with an MSR
value that is rather simple. If we leave out the HV case, it's almost
always 0.

To reflect this, let's redesign the way that MSR value gets calculated.
Using this, we also squash the bug where MSR_POW can slip through into
the interrupt handler MSR.

Reported-by: Thomas Monjalon <thomas.monjalon@openwide.fr>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
2010-09-15 16:18:33 +02:00
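
The gist of the new calculation (simplified; the real code also covers the
HV case and the non-Book3S MSR bits):

    #include <stdint.h>

    #define MSR_SF 63   /* sixty-four bit mode  */
    #define MSR_ME 12   /* machine check enable */

    /* MSR on interrupt entry: start from (almost) zero instead of the old
     * MSR, so stray bits such as MSR_POW can no longer leak through. */
    static uint64_t interrupt_msr_sketch(uint64_t old_msr, int is_64bit)
    {
        uint64_t new_msr = old_msr & (1ULL << MSR_ME);

        if (is_64bit) {
            new_msr |= 1ULL << MSR_SF;   /* handlers run in 64-bit mode */
        }
        return new_msr;
    }
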
Edgar E. Iglesias
24e0e38b83 powerpc: Avoid TLB related log spamming
Invalid TLB entries are normal and should not spam the log.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
2010-09-11 14:29:07 +02:00
Thomas Monjalon
0f89cc7b6c target-ppc: remove useless line
This line cleared a bit (LE).
The next lines set or reset this bit (LE) depending on another bit (ILE),
so the first line is useless.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-05-31 19:18:25 +02:00
Thomas Monjalon
c3d420ead1 target-ppc: fix RFI by clearing some bits of MSR
Since commit 2ada0ed, "Return From Interrupt" is broken for PPC processors
because some interrupt-specific bits of SRR1 are copied to MSR.

SRR1 is a save of MSR during interrupt.
During RFI, MSR must be restored from SRR1.
But some bits of SRR1 are interrupt-specific and are not used for MSR saving.

This is the specification (ISA 2.06) at chapter 6.4.3 (Interrupt Processing):
"2. Bits 33:36 and 42:47 of SRR1 or HSRR1 are loaded with information specific
    to the interrupt type.
 3. Bits 0:32, 37:41, and 48:63 of SRR1 or HSRR1 are loaded with a copy of the
    corresponding bits of the MSR."

Below is a representation of the MSR bits which are not saved
(X marks the interrupt-specific bits):

  bits:  0:15   16:31   32   33:36   37:41   42:47    48:63
         ----   -----   -    XXXX    -----   XXXXXX   -----

i.e. the interrupt-specific bits correspond to the mask 0x783F0000
in the low word.

History:
In the initial Qemu implementation (e1833e1), the mask 0x783F0000 was used for
saving MSR in SRR1. But all the bits 32:47 were cleared during RFI restoring.
This was wrong. The commit 2ada0ed explains that this breaks Altivec.
Indeed, bit 38 (for Altivec support) must be saved and restored.
The change in 2ada0ed was to restore all the bits of SRR1 to MSR.
But that is also wrong.

Explanation:
As an example, let's see what's happening after a TLB miss.
According to the e300 manual (E300CORERM table 5-6), the TLB miss interrupts
set the bits 44-47 for KEY, I/D, WAY and S/L.  These bits are specific to the
interrupt and must not be copied into MSR at the end of the interrupt.
With the current implementation, a TLB miss overwrites bits POW, TGPR and ILE.

Fix:
It shouldn't be necessary to filter out bits when saving MSR as an interrupt
occurs: the interrupt-specific bits simply overwrite the corresponding MSR
bits in SRR1.  But at the end of the interrupt (RFI), those specific bits must
be cleared before restoring MSR from SRR1.  The mask 0x783F0000 applies here.

Discussion:
The bits of the mask 0x783F0000 are cleared after an interrupt.
I cannot find a specification which states this explicitly,
but I assume it is correct since Linux can run this way.
Maybe it's not perfect, but it's better (it works for e300).

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-05-31 19:17:44 +02:00
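
Mapping the IBM bit numbers onto a mask confirms the value: bit n counted
from the most significant end is 1 << (63 - n), so bits 33:36 give
0x78000000 and bits 42:47 give 0x003F0000, i.e. 0x783F0000 together.
A sketch of the resulting RFI behaviour:

    #include <stdint.h>

    /* Interrupt-specific SRR1 bits (IBM bits 33:36 and 42:47). */
    #define SRR1_INT_SPECIFIC 0x00000000783F0000ULL

    static uint64_t rfi_msr_sketch(uint64_t srr1)
    {
        /* Clear the interrupt-specific bits before restoring MSR. */
        return srr1 & ~SRR1_INT_SPECIFIC;
    }
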
Blue Swirl
05f92404cd ppc: remove dead assignments, spotted by clang analyzer
Value stored is never read.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-04-25 20:32:49 +00:00
Paul Brook
d4c430a80f Large page TLB flush
QEMU uses a fixed page size for the CPU TLB.  If the guest uses large
pages then we effectively split these into multiple smaller pages, and
populate the corresponding TLB entries on demand.

When the guest invalidates the TLB by virtual address we must invalidate
all entries covered by the large page.  However the address used to
invalidate the entry may not be present in the QEMU TLB, so we do not
know which regions to clear.

Implementing a full variable size TLB is hard and slow, so just keep a
simple address/mask pair to record which addresses may have been mapped by
large pages.  If the guest invalidates this region then flush the
whole TLB.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-03-17 02:44:41 +00:00
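
A condensed sketch of that bookkeeping (variable names illustrative):

    #include <stdint.h>
    #include <stdbool.h>

    /* One address/mask pair records the region that may contain mappings
     * split from large pages; lp_addr == -1 means none seen yet. */
    static uint64_t lp_addr = (uint64_t)-1;
    static uint64_t lp_mask;

    static void note_large_page(uint64_t vaddr, uint64_t page_mask)
    {
        if (lp_addr == (uint64_t)-1) {
            lp_addr = vaddr & page_mask;      /* first large page seen */
            lp_mask = page_mask;
            return;
        }
        /* Widen the recorded region until it also covers this mapping. */
        lp_mask &= page_mask;
        while (((vaddr ^ lp_addr) & lp_mask) != 0) {
            lp_mask <<= 1;
        }
        lp_addr &= lp_mask;
    }

    /* On invalidate-by-address: true means "flush the whole TLB". */
    static bool must_flush_all(uint64_t vaddr)
    {
        return (vaddr & lp_mask) == lp_addr;
    }
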
Paul Brook
4fcc562bff Remove cpu_get_phys_page_debug from userspace emulation
cpu_get_phys_page_debug makes no sense for userspace emulation, so remove it.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-03-12 18:34:25 +00:00
Alexander Graf
b2eca4453f PPC: Fix large pages
We were masking 1TB SLB entries on the feature bit of 16 MB pages. Obviously
that breaks, so let's just ignore 1TB SLB entries for now and instead do
16MB pages correctly.

This fixes PPC64 Linux boot with -m above 256.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2010-02-14 16:10:54 +02:00
Edgar E. Iglesias
dcbc9a70af ppc-40x: Correct ESR for zone protection faults.
Raise the zone protection fault in ESR for TLB faults caused by
zone protection bits.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
2010-01-14 14:42:30 +01:00
Edgar E. Iglesias
ec5c3e487e ppc-40x: Correct decoding of zone protection bits.
The 40x MMU has 15 zones in the ZPR register.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
2010-01-14 14:42:17 +01:00
Blue Swirl
b55a37c981 user: move CPU reset call to main.c for x86/PPC/Sparc
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-11-07 10:37:06 +00:00
Blue Swirl
d84bda46de PPC: rename cpu_ppc_reset to cpu_reset for consistency
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-11-07 10:36:04 +00:00
Blue Swirl
e43941318d PPC: remove unneeded calls to device reset
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-11-07 09:32:21 +00:00
Anthony Liguori
c227f0995e Revert "Get rid of _t suffix"
In the very least, a change like this requires discussion on the list.

The naming convention is goofy and it causes a massive merge problem.  Something
like this _must_ be presented on the list first so people can provide input
and cope with it.

This reverts commit 99a0949b72.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-10-01 16:12:16 -05:00
malc
99a0949b72 Get rid of _t suffix
Some not so obvious bits, slirp and Xen were left alone for the time
being.

Signed-off-by: malc <av1474@comtv.ru>
2009-10-01 22:45:02 +04:00
Blue Swirl
b11ebf64b6 Replace REGX with PRIx64
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-08-16 11:54:37 +00:00
Blue Swirl
90e189ece1 Replace local ADDRX/PADDRX macros with TARGET_FMT_lx/plx
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-08-16 11:13:18 +00:00
Blue Swirl
636aa20056 Replace always_inline with inline
We define inline as always_inline.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-08-16 09:06:54 +00:00
Nathan Froyd
18b21a2f83 target-ppc: retain l{w,d}arx loaded value
We do this so we can check, on the corresponding stc{w,d}x., whether the
value has changed.  It's a poor man's form of implementing atomic
operations and is valid only for NPTL usermode Linux emulation.

Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: malc <av1474@comtv.ru>
2009-08-03 20:33:41 +04:00
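
The idea in rough outline (not the actual qemu usermode code):

    #include <stdint.h>

    /* Reservation state recorded by l{w,d}arx. */
    static uint64_t reserve_addr = (uint64_t)-1;
    static uint64_t reserve_val;

    static uint64_t larx_sketch(uint64_t addr, const uint64_t *mem)
    {
        reserve_addr = addr;
        reserve_val  = *mem;       /* remember the value we loaded */
        return reserve_val;
    }

    /* stc{w,d}x.: succeed only if the value is still what larx loaded.
     * A poor man's atomic - it misses ABA cases a real reservation would
     * catch, but is good enough for NPTL usermode emulation. */
    static int stcx_sketch(uint64_t addr, uint64_t new_val, uint64_t *mem)
    {
        if (addr == reserve_addr && *mem == reserve_val) {
            *mem = new_val;
            reserve_addr = (uint64_t)-1;
            return 1;              /* store performed (CR0.EQ set) */
        }
        return 0;                  /* reservation treated as lost  */
    }
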
Blue Swirl
0bf9e31af1 Fix most warnings (errors with -Werror) when debugging is enabled
I used the following command to enable debugging:
perl -p -i -e 's/^\/\/#define DEBUG/#define DEBUG/g' * */* */*/*

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-07-20 17:19:25 +00:00
Blue Swirl
8167ee8839 Update to a hopefully more future proof FSF address
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-07-16 20:47:01 +00:00
Paul Brook
5561650587 Include assert.h from qemu-common.h
Include assert.h from qemu-common.h and remove other direct uses.
cpu-all.h still needs to include it because of the dyngen-exec.h hacks.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2009-05-13 20:54:26 +01:00
Blue Swirl
fc1c67bc2a Fix PPC reset 2009-04-28 18:00:30 +00:00
aliguori
0bf46a40a1 qemu: introduce qemu_init_vcpu (Marcelo Tosatti)
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7242 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-24 18:03:41 +00:00
aurel32
bf1752ef58 target-ppc: Explain why the whole TLB is flushed on SR write
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6947 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-29 13:36:32 +00:00
blueswir1
9485593725 Disable BAT for 970
The 970 doesn't have BATs, so let's not search the BATs there.
This was only in as a hack for OpenHackWare so it would
work on PPC64.

Signed-off-by: Alexander Graf <alex@csgraf.de>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6759 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-07 20:58:30 +00:00
blueswir1
8eee0af947 Keep SLB in-CPU
Real 970 CPUs keep the SLB inside the CPU rather than backed by memory.
This breaks bridge mode for the 970 for now, but at least keeps us from
overwriting physical addresses 0x0 - 0x300, which would render our
interrupt handlers useless.

I put in a stub for bridge mode operation that could be enabled
easily, but for now it's safer to leave that off I guess (970fx doesn't
have bridge mode AFAIK).

Signed-off-by: Alexander Graf <alex@csgraf.de>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6757 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-07 20:57:42 +00:00
blueswir1
29c8ca6f2e Fix NX bit
ctx->nx only got ORed, but never reset. So when one page in the
lifetime of the VM was ever NX, all later pages were too.

Signed-off-by: Alexander Graf <alex@csgraf.de>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6755 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-07 20:57:01 +00:00
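
The bug and the fix in miniature (the context structure and the bit
extraction shown here are illustrative):

    #include <stdint.h>

    typedef struct { int nx; } mmu_ctx_sketch;

    /* hypothetical helper - the real flag comes from segment/PTE bits */
    static int pte_is_noexec(uint64_t pte) { return (int)(pte >> 2) & 1; }

    /* Before: the flag only ever accumulated, so one NX page in the
     * lifetime of the VM made every later translation report NX too. */
    static void set_nx_buggy(mmu_ctx_sketch *ctx, uint64_t pte)
    {
        ctx->nx |= pte_is_noexec(pte);
    }

    /* After: recompute it from scratch on each translation. */
    static void set_nx_fixed(mmu_ctx_sketch *ctx, uint64_t pte)
    {
        ctx->nx = pte_is_noexec(pte);
    }
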
blueswir1
6ce0ca1204 Enable 64bit mode on interrupts
Real 970s enable MSR_SF on all interrupts. The current code didn't do
this until now, so let's activate it!

Signed-off-by: Alexander Graf <alex@csgraf.de>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6752 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-07 20:54:59 +00:00
blueswir1
5b5aba4f14 Implement large pages
The current SLB/PTE code does not support large pages, which are
required by Linux, as it boots up with the kernel regions mapped as large pages.

This patch implements large page support, so we can run Linux.

Signed-off-by: Alexander Graf <alex@csgraf.de>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6748 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-07 20:51:18 +00:00
blueswir1
f6b868fc58 Implement slbmte
In order to modify SLB entries on recent PPC64 machines, the slbmte
instruction is used.

This patch implements the slbmte instruction and makes the "bridge"
mode code use the slb set functions, so we can move the SLB into
the CPU struct later.

This is required for Linux to run on PPC64.

Signed-off-by: Alexander Graf <alex@csgraf.de>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6747 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-07 20:50:01 +00:00
blueswir1
07c485ce78 Turn MMU off on reset
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6637 c046a42c-6fe2-441c-8c8c-71466251a162
2009-02-21 17:29:14 +00:00
aliguori
0d0266a53b targets: remove error handling from qemu_malloc() callers (Avi Kivity)
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6530 c046a42c-6fe2-441c-8c8c-71466251a162
2009-02-05 22:06:11 +00:00
aliguori
eca1bdf415 Log reset events (Jan Kiszka)
Original idea & code by Kevin Wolf, split up into two patches with more
archs added.

This patch introduces a flag to log CPU resets. Useful for tracing
unexpected resets (such as those triggered by x86 triple faults).

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6452 c046a42c-6fe2-441c-8c8c-71466251a162
2009-01-26 19:54:31 +00:00
aliguori
93fcfe39a0 Convert references to logfile/loglevel to use qemu_log*() macros
This is a large patch that changes all occurrences of logfile/loglevel
global variables to use the new qemu_log*() macros.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6338 c046a42c-6fe2-441c-8c8c-71466251a162
2009-01-15 22:34:14 +00:00
aliguori
d12d51d5ba Clean up debugging code #ifdefs (Eduardo Habkost)
Use macros to avoid #ifdefs on debugging code.

This patch doesn't try to merge logging macros from different files,
but just unifies the debugging code #ifdefs into a macro in each file.  A
further cleanup can unify the debugging macros in a common header later.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6332 c046a42c-6fe2-441c-8c8c-71466251a162
2009-01-15 21:48:06 +00:00
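
The pattern being introduced, roughly (using a plain fprintf here in place
of qemu's own logging call):

    #include <stdio.h>

    /* #define DEBUG_MMU */

    #ifdef DEBUG_MMU
    #  define LOG_MMU(...) do { fprintf(stderr, __VA_ARGS__); } while (0)
    #else
    #  define LOG_MMU(...) do { } while (0)
    #endif

    /* Callers no longer need an #ifdef around every debug statement. */
    void translate_sketch(unsigned long addr)
    {
        LOG_MMU("translating address 0x%08lx\n", addr);
    }
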
aliguori
da07cf59b0 powerpc/kvm: enable POWERPC_MMU_BOOKE_FSL when kvm is enabled (Liu Yu)
Signed-off-by: Liu Yu <yu.liu@freescale.com>
Acked-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6329 c046a42c-6fe2-441c-8c8c-71466251a162
2009-01-15 21:24:24 +00:00