Make JSCells have 32-bit Structure pointers
author mhahnenberg@apple.com <mhahnenberg@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Thu, 27 Feb 2014 01:27:18 +0000 (01:27 +0000)
committer mhahnenberg@apple.com <mhahnenberg@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Thu, 27 Feb 2014 01:27:18 +0000 (01:27 +0000)
https://bugs.webkit.org/show_bug.cgi?id=123195

Reviewed by Filip Pizlo.

This patch changes JSCells such that they no longer have a full 64-bit Structure
pointer in their header. Instead they now have a 32-bit index into
a per-VM table of Structure pointers. 32-bit platforms still use normal Structure
pointers.
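
On 64-bit platforms a cell header therefore carries only the 32-bit StructureID, and the
full Structure* is recovered by indexing the per-VM table. A rough sketch of that
indirection, with simplified, hypothetical member names (the real interface lives in
runtime/StructureIDTable.h and runtime/JSCellInlines.h):

    typedef uint32_t StructureID;

    class StructureIDTable {
    public:
        // Maps a 32-bit ID back to the full Structure pointer.
        Structure* get(StructureID id) const { return m_table[id]; }
        StructureID allocateID(Structure*);
    private:
        Structure** m_table;
    };

    // Conceptually, JSCell::structure() becomes a table lookup through the cell's Heap.
    inline Structure* JSCell::structure() const
    {
        return Heap::heap(this)->structureIDTable().get(m_structureID);
    }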

This change frees up an additional 32 bits in our object headers. We use this extra
space to store the object's indexing type, its JSType, various type flags, and garbage
collection data (e.g. the mark bit). Because this inline type information is now faster
to read, it pays for the slowdown incurred by the extra indirection through the
StructureIDTable.
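
Concretely, the 8-byte cell header packs the structure ID together with this inline type
information. A rough picture of the layout (field names and ordering are illustrative;
the real packing lives in runtime/StructureIDBlob.h):

    struct CellHeaderSketch {
        uint32_t structureID;   // index into the per-VM StructureIDTable
        uint8_t  indexingType;  // array storage shape
        uint8_t  type;          // JSType
        uint8_t  typeFlags;     // inline TypeInfo flags
        uint8_t  gcData;        // garbage collection state, e.g. the mark bit
    };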

This patch also threads a reference to the current VM through more of the C++ runtime
to offset the cost of having to look up the VM to get the actual Structure pointer.
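
For example (a hypothetical helper, not a literal function from this patch), a callee
that used to re-derive the VM from the ExecState can instead take it from a caller that
already holds it:

    // Before: the helper re-derives the VM on every call.
    static void putHelper(ExecState* exec, JSObject* object, PropertyName name, JSValue value)
    {
        VM& vm = exec->vm();
        object->putDirect(vm, name, value);
    }

    // After: callers that already hold a VM& pass it through, so resolving the
    // Structure (and other per-VM state) costs no extra lookup.
    static void putHelper(VM& vm, JSObject* object, PropertyName name, JSValue value)
    {
        object->putDirect(vm, name, value);
    }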

* API/JSContext.mm:
(-[JSContext setException:]):
(-[JSContext wrapperForObjCObject:]):
(-[JSContext wrapperForJSObject:]):
* API/JSContextRef.cpp:
(JSContextGroupRelease):
(JSGlobalContextRelease):
* API/JSObjectRef.cpp:
(JSObjectIsFunction):
(JSObjectCopyPropertyNames):
* API/JSValue.mm:
(containerValueToObject):
* API/JSWrapperMap.mm:
(tryUnwrapObjcObject):
* JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
* JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters:
* JavaScriptCore.xcodeproj/project.pbxproj:
* assembler/AbstractMacroAssembler.h:
* assembler/MacroAssembler.h:
(JSC::MacroAssembler::patchableBranch32WithPatch):
(JSC::MacroAssembler::patchableBranch32):
* assembler/MacroAssemblerARM64.h:
(JSC::MacroAssemblerARM64::branchPtrWithPatch):
(JSC::MacroAssemblerARM64::patchableBranch32WithPatch):
(JSC::MacroAssemblerARM64::canJumpReplacePatchableBranch32WithPatch):
(JSC::MacroAssemblerARM64::startOfPatchableBranch32WithPatchOnAddress):
(JSC::MacroAssemblerARM64::revertJumpReplacementToPatchableBranch32WithPatch):
* assembler/MacroAssemblerARMv7.h:
(JSC::MacroAssemblerARMv7::store8):
(JSC::MacroAssemblerARMv7::branch32WithPatch):
(JSC::MacroAssemblerARMv7::patchableBranch32WithPatch):
(JSC::MacroAssemblerARMv7::canJumpReplacePatchableBranch32WithPatch):
(JSC::MacroAssemblerARMv7::startOfPatchableBranch32WithPatchOnAddress):
(JSC::MacroAssemblerARMv7::revertJumpReplacementToPatchableBranch32WithPatch):
* assembler/MacroAssemblerX86.h:
(JSC::MacroAssemblerX86::branch32WithPatch):
(JSC::MacroAssemblerX86::canJumpReplacePatchableBranch32WithPatch):
(JSC::MacroAssemblerX86::startOfPatchableBranch32WithPatchOnAddress):
(JSC::MacroAssemblerX86::revertJumpReplacementToPatchableBranch32WithPatch):
* assembler/MacroAssemblerX86_64.h:
(JSC::MacroAssemblerX86_64::store32):
(JSC::MacroAssemblerX86_64::moveWithPatch):
(JSC::MacroAssemblerX86_64::branch32WithPatch):
(JSC::MacroAssemblerX86_64::canJumpReplacePatchableBranch32WithPatch):
(JSC::MacroAssemblerX86_64::startOfBranch32WithPatchOnRegister):
(JSC::MacroAssemblerX86_64::startOfPatchableBranch32WithPatchOnAddress):
(JSC::MacroAssemblerX86_64::revertJumpReplacementToPatchableBranch32WithPatch):
* assembler/RepatchBuffer.h:
(JSC::RepatchBuffer::startOfPatchableBranch32WithPatchOnAddress):
(JSC::RepatchBuffer::revertJumpReplacementToPatchableBranch32WithPatch):
* assembler/X86Assembler.h:
(JSC::X86Assembler::revertJumpTo_movq_i64r):
(JSC::X86Assembler::revertJumpTo_movl_i32r):
* bytecode/ArrayProfile.cpp:
(JSC::ArrayProfile::computeUpdatedPrediction):
* bytecode/ArrayProfile.h:
(JSC::ArrayProfile::ArrayProfile):
(JSC::ArrayProfile::addressOfLastSeenStructureID):
(JSC::ArrayProfile::observeStructure):
* bytecode/CodeBlock.h:
(JSC::CodeBlock::heap):
* bytecode/UnlinkedCodeBlock.h:
* debugger/Debugger.h:
* dfg/DFGAbstractHeap.h:
* dfg/DFGArrayifySlowPathGenerator.h:
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGJITCompiler.h:
(JSC::DFG::JITCompiler::branchWeakStructure):
(JSC::DFG::JITCompiler::branchStructurePtr):
* dfg/DFGOSRExitCompiler32_64.cpp:
(JSC::DFG::OSRExitCompiler::compileExit):
* dfg/DFGOSRExitCompiler64.cpp:
(JSC::DFG::OSRExitCompiler::compileExit):
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::osrWriteBarrier):
(JSC::DFG::adjustAndJumpToTarget):
* dfg/DFGOperations.cpp:
(JSC::DFG::putByVal):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::checkArray):
(JSC::DFG::SpeculativeJIT::arrayify):
(JSC::DFG::SpeculativeJIT::compilePeepHoleObjectEquality):
(JSC::DFG::SpeculativeJIT::compileInstanceOfForObject):
(JSC::DFG::SpeculativeJIT::compileInstanceOf):
(JSC::DFG::SpeculativeJIT::compileToStringOnCell):
(JSC::DFG::SpeculativeJIT::speculateObject):
(JSC::DFG::SpeculativeJIT::speculateFinalObject):
(JSC::DFG::SpeculativeJIT::speculateObjectOrOther):
(JSC::DFG::SpeculativeJIT::speculateString):
(JSC::DFG::SpeculativeJIT::speculateStringObject):
(JSC::DFG::SpeculativeJIT::speculateStringOrStringObject):
(JSC::DFG::SpeculativeJIT::emitSwitchChar):
(JSC::DFG::SpeculativeJIT::emitSwitchString):
(JSC::DFG::SpeculativeJIT::genericWriteBarrier):
(JSC::DFG::SpeculativeJIT::writeBarrier):
* dfg/DFGSpeculativeJIT.h:
(JSC::DFG::SpeculativeJIT::emitAllocateJSCell):
(JSC::DFG::SpeculativeJIT::speculateStringObjectForStructure):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::nonSpeculativeNonPeepholeCompareNull):
(JSC::DFG::SpeculativeJIT::nonSpeculativePeepholeBranchNull):
(JSC::DFG::SpeculativeJIT::compileObjectEquality):
(JSC::DFG::SpeculativeJIT::compileObjectToObjectOrOtherEquality):
(JSC::DFG::SpeculativeJIT::compilePeepHoleObjectToObjectOrOtherEquality):
(JSC::DFG::SpeculativeJIT::compileObjectOrOtherLogicalNot):
(JSC::DFG::SpeculativeJIT::emitObjectOrOtherBranch):
(JSC::DFG::SpeculativeJIT::compile):
(JSC::DFG::SpeculativeJIT::writeBarrier):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::nonSpeculativeNonPeepholeCompareNull):
(JSC::DFG::SpeculativeJIT::nonSpeculativePeepholeBranchNull):
(JSC::DFG::SpeculativeJIT::compileObjectEquality):
(JSC::DFG::SpeculativeJIT::compileObjectToObjectOrOtherEquality):
(JSC::DFG::SpeculativeJIT::compilePeepHoleObjectToObjectOrOtherEquality):
(JSC::DFG::SpeculativeJIT::compileObjectOrOtherLogicalNot):
(JSC::DFG::SpeculativeJIT::emitObjectOrOtherBranch):
(JSC::DFG::SpeculativeJIT::compile):
(JSC::DFG::SpeculativeJIT::writeBarrier):
* dfg/DFGWorklist.cpp:
* ftl/FTLAbstractHeapRepository.cpp:
(JSC::FTL::AbstractHeapRepository::AbstractHeapRepository):
* ftl/FTLAbstractHeapRepository.h:
* ftl/FTLLowerDFGToLLVM.cpp:
(JSC::FTL::LowerDFGToLLVM::compileCheckStructure):
(JSC::FTL::LowerDFGToLLVM::compileArrayifyToStructure):
(JSC::FTL::LowerDFGToLLVM::compilePutStructure):
(JSC::FTL::LowerDFGToLLVM::compileToString):
(JSC::FTL::LowerDFGToLLVM::compileMultiGetByOffset):
(JSC::FTL::LowerDFGToLLVM::compileMultiPutByOffset):
(JSC::FTL::LowerDFGToLLVM::speculateTruthyObject):
(JSC::FTL::LowerDFGToLLVM::allocateCell):
(JSC::FTL::LowerDFGToLLVM::equalNullOrUndefined):
(JSC::FTL::LowerDFGToLLVM::isObject):
(JSC::FTL::LowerDFGToLLVM::isString):
(JSC::FTL::LowerDFGToLLVM::isArrayType):
(JSC::FTL::LowerDFGToLLVM::hasClassInfo):
(JSC::FTL::LowerDFGToLLVM::isType):
(JSC::FTL::LowerDFGToLLVM::speculateStringOrStringObject):
(JSC::FTL::LowerDFGToLLVM::speculateStringObjectForCell):
(JSC::FTL::LowerDFGToLLVM::speculateStringObjectForStructureID):
(JSC::FTL::LowerDFGToLLVM::speculateNonNullObject):
(JSC::FTL::LowerDFGToLLVM::loadMarkByte):
(JSC::FTL::LowerDFGToLLVM::loadStructure):
(JSC::FTL::LowerDFGToLLVM::weakStructure):
* ftl/FTLOSRExitCompiler.cpp:
(JSC::FTL::compileStub):
* ftl/FTLOutput.h:
(JSC::FTL::Output::store8):
* heap/GCAssertions.h:
* heap/Heap.cpp:
(JSC::Heap::getConservativeRegisterRoots):
(JSC::Heap::collect):
(JSC::Heap::writeBarrier):
* heap/Heap.h:
(JSC::Heap::structureIDTable):
* heap/MarkedSpace.h:
(JSC::MarkedSpace::forEachBlock):
* heap/SlotVisitorInlines.h:
(JSC::SlotVisitor::internalAppend):
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::branchIfCellNotObject):
(JSC::AssemblyHelpers::genericWriteBarrier):
(JSC::AssemblyHelpers::emitLoadStructure):
(JSC::AssemblyHelpers::emitStoreStructureWithTypeInfo):
* jit/JIT.h:
* jit/JITCall.cpp:
(JSC::JIT::compileOpCall):
(JSC::JIT::privateCompileClosureCall):
* jit/JITCall32_64.cpp:
(JSC::JIT::emit_op_ret_object_or_this):
(JSC::JIT::compileOpCall):
(JSC::JIT::privateCompileClosureCall):
* jit/JITInlineCacheGenerator.cpp:
(JSC::JITByIdGenerator::generateFastPathChecks):
* jit/JITInlineCacheGenerator.h:
* jit/JITInlines.h:
(JSC::JIT::emitLoadCharacterString):
(JSC::JIT::checkStructure):
(JSC::JIT::emitJumpIfCellNotObject):
(JSC::JIT::emitAllocateJSObject):
(JSC::JIT::emitArrayProfilingSiteWithCell):
(JSC::JIT::emitArrayProfilingSiteForBytecodeIndexWithCell):
(JSC::JIT::branchStructure):
(JSC::branchStructure):
* jit/JITOpcodes.cpp:
(JSC::JIT::emit_op_check_has_instance):
(JSC::JIT::emit_op_instanceof):
(JSC::JIT::emit_op_is_undefined):
(JSC::JIT::emit_op_is_string):
(JSC::JIT::emit_op_ret_object_or_this):
(JSC::JIT::emit_op_to_primitive):
(JSC::JIT::emit_op_jeq_null):
(JSC::JIT::emit_op_jneq_null):
(JSC::JIT::emit_op_get_pnames):
(JSC::JIT::emit_op_next_pname):
(JSC::JIT::emit_op_eq_null):
(JSC::JIT::emit_op_neq_null):
(JSC::JIT::emit_op_to_this):
(JSC::JIT::emitSlow_op_to_this):
* jit/JITOpcodes32_64.cpp:
(JSC::JIT::emit_op_check_has_instance):
(JSC::JIT::emit_op_instanceof):
(JSC::JIT::emit_op_is_undefined):
(JSC::JIT::emit_op_is_string):
(JSC::JIT::emit_op_to_primitive):
(JSC::JIT::emit_op_jeq_null):
(JSC::JIT::emit_op_jneq_null):
(JSC::JIT::emitSlow_op_eq):
(JSC::JIT::emitSlow_op_neq):
(JSC::JIT::compileOpStrictEq):
(JSC::JIT::emit_op_eq_null):
(JSC::JIT::emit_op_neq_null):
(JSC::JIT::emit_op_get_pnames):
(JSC::JIT::emit_op_next_pname):
(JSC::JIT::emit_op_to_this):
* jit/JITOperations.cpp:
* jit/JITPropertyAccess.cpp:
(JSC::JIT::stringGetByValStubGenerator):
(JSC::JIT::emit_op_get_by_val):
(JSC::JIT::emitSlow_op_get_by_val):
(JSC::JIT::emit_op_get_by_pname):
(JSC::JIT::emit_op_put_by_val):
(JSC::JIT::emit_op_get_by_id):
(JSC::JIT::emitLoadWithStructureCheck):
(JSC::JIT::emitSlow_op_get_from_scope):
(JSC::JIT::emitSlow_op_put_to_scope):
(JSC::JIT::checkMarkWord):
(JSC::JIT::emitWriteBarrier):
(JSC::JIT::addStructureTransitionCheck):
(JSC::JIT::emitIntTypedArrayGetByVal):
(JSC::JIT::emitFloatTypedArrayGetByVal):
(JSC::JIT::emitIntTypedArrayPutByVal):
(JSC::JIT::emitFloatTypedArrayPutByVal):
* jit/JITPropertyAccess32_64.cpp:
(JSC::JIT::stringGetByValStubGenerator):
(JSC::JIT::emit_op_get_by_val):
(JSC::JIT::emitSlow_op_get_by_val):
(JSC::JIT::emit_op_put_by_val):
(JSC::JIT::emit_op_get_by_id):
(JSC::JIT::emit_op_get_by_pname):
(JSC::JIT::emitLoadWithStructureCheck):
* jit/JSInterfaceJIT.h:
(JSC::JSInterfaceJIT::emitJumpIfNotType):
* jit/Repatch.cpp:
(JSC::repatchByIdSelfAccess):
(JSC::addStructureTransitionCheck):
(JSC::replaceWithJump):
(JSC::generateProtoChainAccessStub):
(JSC::tryCacheGetByID):
(JSC::tryBuildGetByIDList):
(JSC::writeBarrier):
(JSC::emitPutReplaceStub):
(JSC::emitPutTransitionStub):
(JSC::tryBuildPutByIdList):
(JSC::tryRepatchIn):
(JSC::linkClosureCall):
(JSC::resetGetByID):
(JSC::resetPutByID):
* jit/SpecializedThunkJIT.h:
(JSC::SpecializedThunkJIT::loadJSStringArgument):
(JSC::SpecializedThunkJIT::loadArgumentWithSpecificClass):
* jit/ThunkGenerators.cpp:
(JSC::virtualForThunkGenerator):
(JSC::arrayIteratorNextThunkGenerator):
* jit/UnusedPointer.h:
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:
* runtime/Arguments.cpp:
(JSC::Arguments::createStrictModeCallerIfNecessary):
(JSC::Arguments::createStrictModeCalleeIfNecessary):
* runtime/Arguments.h:
(JSC::Arguments::createStructure):
* runtime/ArrayPrototype.cpp:
(JSC::shift):
(JSC::unshift):
(JSC::arrayProtoFuncToString):
(JSC::arrayProtoFuncPop):
(JSC::arrayProtoFuncReverse):
(JSC::performSlowSort):
(JSC::arrayProtoFuncSort):
(JSC::arrayProtoFuncSplice):
(JSC::arrayProtoFuncUnShift):
* runtime/CommonSlowPaths.cpp:
(JSC::SLOW_PATH_DECL):
* runtime/Executable.h:
(JSC::ExecutableBase::isFunctionExecutable):
(JSC::ExecutableBase::clearCodeVirtual):
(JSC::ScriptExecutable::unlinkCalls):
* runtime/GetterSetter.cpp:
(JSC::callGetter):
(JSC::callSetter):
* runtime/InitializeThreading.cpp:
* runtime/JSArray.cpp:
(JSC::JSArray::unshiftCountSlowCase):
(JSC::JSArray::setLength):
(JSC::JSArray::pop):
(JSC::JSArray::push):
(JSC::JSArray::shiftCountWithArrayStorage):
(JSC::JSArray::shiftCountWithAnyIndexingType):
(JSC::JSArray::unshiftCountWithArrayStorage):
(JSC::JSArray::unshiftCountWithAnyIndexingType):
(JSC::JSArray::sortNumericVector):
(JSC::JSArray::sortNumeric):
(JSC::JSArray::sortCompactedVector):
(JSC::JSArray::sort):
(JSC::JSArray::sortVector):
(JSC::JSArray::fillArgList):
(JSC::JSArray::copyToArguments):
(JSC::JSArray::compactForSorting):
* runtime/JSCJSValueInlines.h:
(JSC::JSValue::toThis):
(JSC::JSValue::put):
(JSC::JSValue::putByIndex):
(JSC::JSValue::equalSlowCaseInline):
* runtime/JSCell.cpp:
(JSC::JSCell::put):
(JSC::JSCell::putByIndex):
(JSC::JSCell::deleteProperty):
(JSC::JSCell::deletePropertyByIndex):
* runtime/JSCell.h:
(JSC::JSCell::clearStructure):
(JSC::JSCell::mark):
(JSC::JSCell::isMarked):
(JSC::JSCell::structureIDOffset):
(JSC::JSCell::typeInfoFlagsOffset):
(JSC::JSCell::typeInfoTypeOffset):
(JSC::JSCell::indexingTypeOffset):
(JSC::JSCell::gcDataOffset):
* runtime/JSCellInlines.h:
(JSC::JSCell::JSCell):
(JSC::JSCell::finishCreation):
(JSC::JSCell::type):
(JSC::JSCell::indexingType):
(JSC::JSCell::structure):
(JSC::JSCell::visitChildren):
(JSC::JSCell::isObject):
(JSC::JSCell::isString):
(JSC::JSCell::isGetterSetter):
(JSC::JSCell::isProxy):
(JSC::JSCell::isAPIValueWrapper):
(JSC::JSCell::setStructure):
(JSC::JSCell::methodTable):
(JSC::Heap::writeBarrier):
* runtime/JSDataView.cpp:
(JSC::JSDataView::createStructure):
* runtime/JSDestructibleObject.h:
(JSC::JSCell::classInfo):
* runtime/JSFunction.cpp:
(JSC::JSFunction::getOwnNonIndexPropertyNames):
(JSC::JSFunction::put):
(JSC::JSFunction::defineOwnProperty):
* runtime/JSGenericTypedArrayView.h:
(JSC::JSGenericTypedArrayView::createStructure):
* runtime/JSObject.cpp:
(JSC::getCallableObjectSlow):
(JSC::JSObject::copyButterfly):
(JSC::JSObject::visitButterfly):
(JSC::JSFinalObject::visitChildren):
(JSC::JSObject::getOwnPropertySlotByIndex):
(JSC::JSObject::put):
(JSC::JSObject::putByIndex):
(JSC::JSObject::enterDictionaryIndexingModeWhenArrayStorageAlreadyExists):
(JSC::JSObject::enterDictionaryIndexingMode):
(JSC::JSObject::notifyPresenceOfIndexedAccessors):
(JSC::JSObject::createInitialIndexedStorage):
(JSC::JSObject::createInitialUndecided):
(JSC::JSObject::createInitialInt32):
(JSC::JSObject::createInitialDouble):
(JSC::JSObject::createInitialContiguous):
(JSC::JSObject::createArrayStorage):
(JSC::JSObject::convertUndecidedToInt32):
(JSC::JSObject::convertUndecidedToDouble):
(JSC::JSObject::convertUndecidedToContiguous):
(JSC::JSObject::constructConvertedArrayStorageWithoutCopyingElements):
(JSC::JSObject::convertUndecidedToArrayStorage):
(JSC::JSObject::convertInt32ToDouble):
(JSC::JSObject::convertInt32ToContiguous):
(JSC::JSObject::convertInt32ToArrayStorage):
(JSC::JSObject::genericConvertDoubleToContiguous):
(JSC::JSObject::convertDoubleToArrayStorage):
(JSC::JSObject::convertContiguousToArrayStorage):
(JSC::JSObject::ensureInt32Slow):
(JSC::JSObject::ensureDoubleSlow):
(JSC::JSObject::ensureContiguousSlow):
(JSC::JSObject::ensureArrayStorageSlow):
(JSC::JSObject::ensureArrayStorageExistsAndEnterDictionaryIndexingMode):
(JSC::JSObject::switchToSlowPutArrayStorage):
(JSC::JSObject::setPrototype):
(JSC::JSObject::setPrototypeWithCycleCheck):
(JSC::JSObject::putDirectNonIndexAccessor):
(JSC::JSObject::deleteProperty):
(JSC::JSObject::hasOwnProperty):
(JSC::JSObject::deletePropertyByIndex):
(JSC::JSObject::getPrimitiveNumber):
(JSC::JSObject::hasInstance):
(JSC::JSObject::getPropertySpecificValue):
(JSC::JSObject::getPropertyNames):
(JSC::JSObject::getOwnPropertyNames):
(JSC::JSObject::getOwnNonIndexPropertyNames):
(JSC::JSObject::seal):
(JSC::JSObject::freeze):
(JSC::JSObject::preventExtensions):
(JSC::JSObject::reifyStaticFunctionsForDelete):
(JSC::JSObject::removeDirect):
(JSC::JSObject::putByIndexBeyondVectorLengthWithoutAttributes):
(JSC::JSObject::putByIndexBeyondVectorLength):
(JSC::JSObject::putDirectIndexBeyondVectorLengthWithArrayStorage):
(JSC::JSObject::putDirectIndexBeyondVectorLength):
(JSC::JSObject::getNewVectorLength):
(JSC::JSObject::countElements):
(JSC::JSObject::increaseVectorLength):
(JSC::JSObject::ensureLengthSlow):
(JSC::JSObject::growOutOfLineStorage):
(JSC::JSObject::getOwnPropertyDescriptor):
(JSC::putDescriptor):
(JSC::JSObject::defineOwnNonIndexProperty):
* runtime/JSObject.h:
(JSC::getJSFunction):
(JSC::JSObject::getArrayLength):
(JSC::JSObject::getVectorLength):
(JSC::JSObject::putByIndexInline):
(JSC::JSObject::canGetIndexQuickly):
(JSC::JSObject::getIndexQuickly):
(JSC::JSObject::tryGetIndexQuickly):
(JSC::JSObject::getDirectIndex):
(JSC::JSObject::canSetIndexQuickly):
(JSC::JSObject::canSetIndexQuicklyForPutDirect):
(JSC::JSObject::setIndexQuickly):
(JSC::JSObject::initializeIndex):
(JSC::JSObject::hasSparseMap):
(JSC::JSObject::inSparseIndexingMode):
(JSC::JSObject::getDirect):
(JSC::JSObject::getDirectOffset):
(JSC::JSObject::isSealed):
(JSC::JSObject::isFrozen):
(JSC::JSObject::flattenDictionaryObject):
(JSC::JSObject::ensureInt32):
(JSC::JSObject::ensureDouble):
(JSC::JSObject::ensureContiguous):
(JSC::JSObject::rageEnsureContiguous):
(JSC::JSObject::ensureArrayStorage):
(JSC::JSObject::arrayStorage):
(JSC::JSObject::arrayStorageOrNull):
(JSC::JSObject::ensureLength):
(JSC::JSObject::currentIndexingData):
(JSC::JSObject::getHolyIndexQuickly):
(JSC::JSObject::currentRelevantLength):
(JSC::JSObject::isGlobalObject):
(JSC::JSObject::isVariableObject):
(JSC::JSObject::isStaticScopeObject):
(JSC::JSObject::isNameScopeObject):
(JSC::JSObject::isActivationObject):
(JSC::JSObject::isErrorInstance):
(JSC::JSObject::inlineGetOwnPropertySlot):
(JSC::JSObject::fastGetOwnPropertySlot):
(JSC::JSObject::getPropertySlot):
(JSC::JSObject::putDirectInternal):
(JSC::JSObject::setStructureAndReallocateStorageIfNecessary):
* runtime/JSPropertyNameIterator.h:
(JSC::JSPropertyNameIterator::createStructure):
* runtime/JSProxy.cpp:
(JSC::JSProxy::getOwnPropertySlot):
(JSC::JSProxy::getOwnPropertySlotByIndex):
(JSC::JSProxy::put):
(JSC::JSProxy::putByIndex):
(JSC::JSProxy::defineOwnProperty):
(JSC::JSProxy::deleteProperty):
(JSC::JSProxy::deletePropertyByIndex):
(JSC::JSProxy::getPropertyNames):
(JSC::JSProxy::getOwnPropertyNames):
* runtime/JSScope.cpp:
(JSC::JSScope::objectAtScope):
* runtime/JSString.h:
(JSC::JSString::createStructure):
(JSC::isJSString):
* runtime/JSType.h:
* runtime/JSTypeInfo.h:
(JSC::TypeInfo::TypeInfo):
(JSC::TypeInfo::isObject):
(JSC::TypeInfo::structureIsImmortal):
(JSC::TypeInfo::zeroedGCDataOffset):
(JSC::TypeInfo::inlineTypeFlags):
* runtime/MapData.h:
* runtime/ObjectConstructor.cpp:
(JSC::objectConstructorGetOwnPropertyNames):
(JSC::objectConstructorKeys):
(JSC::objectConstructorDefineProperty):
(JSC::defineProperties):
(JSC::objectConstructorSeal):
(JSC::objectConstructorFreeze):
(JSC::objectConstructorIsSealed):
(JSC::objectConstructorIsFrozen):
* runtime/ObjectPrototype.cpp:
(JSC::objectProtoFuncDefineGetter):
(JSC::objectProtoFuncDefineSetter):
(JSC::objectProtoFuncToString):
* runtime/Operations.cpp:
(JSC::jsTypeStringForValue):
(JSC::jsIsObjectType):
* runtime/Operations.h:
(JSC::normalizePrototypeChainForChainAccess):
(JSC::normalizePrototypeChain):
* runtime/PropertyMapHashTable.h:
(JSC::PropertyTable::createStructure):
* runtime/RegExp.h:
(JSC::RegExp::createStructure):
* runtime/SparseArrayValueMap.h:
* runtime/Structure.cpp:
(JSC::Structure::Structure):
(JSC::Structure::~Structure):
(JSC::Structure::prototypeChainMayInterceptStoreTo):
* runtime/Structure.h:
(JSC::Structure::id):
(JSC::Structure::idBlob):
(JSC::Structure::objectInitializationFields):
(JSC::Structure::structureIDOffset):
* runtime/StructureChain.h:
(JSC::StructureChain::createStructure):
* runtime/StructureIDTable.cpp: Added.
(JSC::StructureIDTable::StructureIDTable):
(JSC::StructureIDTable::~StructureIDTable):
(JSC::StructureIDTable::resize):
(JSC::StructureIDTable::flushOldTables):
(JSC::StructureIDTable::allocateID):
(JSC::StructureIDTable::deallocateID):
* runtime/StructureIDTable.h: Added.
(JSC::StructureIDTable::base):
(JSC::StructureIDTable::get):
* runtime/SymbolTable.h:
* runtime/TypedArrayType.cpp:
(JSC::typeForTypedArrayType):
* runtime/TypedArrayType.h:
* runtime/WeakMapData.h:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@164764 268f45cc-cd09-0410-ab3c-d52691b4dbfc

108 files changed:
Source/JavaScriptCore/API/JSContext.mm
Source/JavaScriptCore/API/JSContextRef.cpp
Source/JavaScriptCore/API/JSObjectRef.cpp
Source/JavaScriptCore/API/JSValue.mm
Source/JavaScriptCore/API/JSWrapperMap.mm
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/GNUmakefile.list.am
Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/assembler/AbstractMacroAssembler.h
Source/JavaScriptCore/assembler/MacroAssembler.h
Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h
Source/JavaScriptCore/assembler/MacroAssemblerX86.h
Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h
Source/JavaScriptCore/assembler/RepatchBuffer.h
Source/JavaScriptCore/assembler/X86Assembler.h
Source/JavaScriptCore/bytecode/ArrayProfile.cpp
Source/JavaScriptCore/bytecode/ArrayProfile.h
Source/JavaScriptCore/bytecode/CodeBlock.h
Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.h
Source/JavaScriptCore/dfg/DFGAbstractHeap.h
Source/JavaScriptCore/dfg/DFGArrayifySlowPathGenerator.h
Source/JavaScriptCore/dfg/DFGClobberize.h
Source/JavaScriptCore/dfg/DFGJITCompiler.h
Source/JavaScriptCore/dfg/DFGOSRExitCompiler32_64.cpp
Source/JavaScriptCore/dfg/DFGOSRExitCompiler64.cpp
Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
Source/JavaScriptCore/dfg/DFGOperations.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
Source/JavaScriptCore/dfg/DFGWorklist.cpp
Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.cpp
Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h
Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
Source/JavaScriptCore/ftl/FTLOutput.h
Source/JavaScriptCore/heap/GCAssertions.h
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/MarkedSpace.h
Source/JavaScriptCore/heap/SlotVisitor.h
Source/JavaScriptCore/heap/SlotVisitorInlines.h
Source/JavaScriptCore/jit/AssemblyHelpers.h
Source/JavaScriptCore/jit/JIT.h
Source/JavaScriptCore/jit/JITCall.cpp
Source/JavaScriptCore/jit/JITCall32_64.cpp
Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp
Source/JavaScriptCore/jit/JITInlineCacheGenerator.h
Source/JavaScriptCore/jit/JITInlines.h
Source/JavaScriptCore/jit/JITOpcodes.cpp
Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
Source/JavaScriptCore/jit/JITOperations.cpp
Source/JavaScriptCore/jit/JITPropertyAccess.cpp
Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp
Source/JavaScriptCore/jit/JSInterfaceJIT.h
Source/JavaScriptCore/jit/Repatch.cpp
Source/JavaScriptCore/jit/SpecializedThunkJIT.h
Source/JavaScriptCore/jit/ThunkGenerators.cpp
Source/JavaScriptCore/llint/LowLevelInterpreter.asm
Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
Source/JavaScriptCore/runtime/Arguments.cpp
Source/JavaScriptCore/runtime/Arguments.h
Source/JavaScriptCore/runtime/ArrayPrototype.cpp
Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
Source/JavaScriptCore/runtime/Executable.h
Source/JavaScriptCore/runtime/GetterSetter.cpp
Source/JavaScriptCore/runtime/InitializeThreading.cpp
Source/JavaScriptCore/runtime/JSArray.cpp
Source/JavaScriptCore/runtime/JSCJSValueInlines.h
Source/JavaScriptCore/runtime/JSCell.cpp
Source/JavaScriptCore/runtime/JSCell.h
Source/JavaScriptCore/runtime/JSCellInlines.h
Source/JavaScriptCore/runtime/JSDataView.cpp
Source/JavaScriptCore/runtime/JSDestructibleObject.h
Source/JavaScriptCore/runtime/JSFunction.cpp
Source/JavaScriptCore/runtime/JSGenericTypedArrayView.h
Source/JavaScriptCore/runtime/JSObject.cpp
Source/JavaScriptCore/runtime/JSObject.h
Source/JavaScriptCore/runtime/JSPropertyNameIterator.h
Source/JavaScriptCore/runtime/JSProxy.cpp
Source/JavaScriptCore/runtime/JSScope.cpp
Source/JavaScriptCore/runtime/JSString.h
Source/JavaScriptCore/runtime/JSType.h
Source/JavaScriptCore/runtime/JSTypeInfo.h
Source/JavaScriptCore/runtime/MapData.h
Source/JavaScriptCore/runtime/ObjectConstructor.cpp
Source/JavaScriptCore/runtime/ObjectPrototype.cpp
Source/JavaScriptCore/runtime/Operations.cpp
Source/JavaScriptCore/runtime/Operations.h
Source/JavaScriptCore/runtime/PropertyMapHashTable.h
Source/JavaScriptCore/runtime/RegExp.h
Source/JavaScriptCore/runtime/SparseArrayValueMap.h
Source/JavaScriptCore/runtime/Structure.cpp
Source/JavaScriptCore/runtime/Structure.h
Source/JavaScriptCore/runtime/StructureChain.h
Source/JavaScriptCore/runtime/StructureIDBlob.h [new file with mode: 0644]
Source/JavaScriptCore/runtime/StructureIDTable.cpp [new file with mode: 0644]
Source/JavaScriptCore/runtime/StructureIDTable.h [new file with mode: 0644]
Source/JavaScriptCore/runtime/SymbolTable.h
Source/JavaScriptCore/runtime/TypedArrayType.cpp
Source/JavaScriptCore/runtime/TypedArrayType.h
Source/JavaScriptCore/runtime/WeakMapData.h

diff --git a/Source/JavaScriptCore/API/JSContext.mm b/Source/JavaScriptCore/API/JSContext.mm
index ebe0f0c..5cfb3e1 100644
 
 - (void)setException:(JSValue *)value
 {
+    JSC::APIEntryShim entryShim(toJS(m_context));
     if (value)
         m_exception.set(toJS(m_context)->vm(), toJS(JSValueToObject(m_context, valueInternalValue(value), 0)));
     else
 
 - (JSValue *)wrapperForObjCObject:(id)object
 {
-    // Lock access to m_wrapperMap
-    JSC::JSLockHolder lock(toJS(m_context));
+    JSC::APIEntryShim entryShim(toJS(m_context));
     return [m_wrapperMap jsWrapperForObject:object];
 }
 
 - (JSValue *)wrapperForJSObject:(JSValueRef)value
 {
-    JSC::JSLockHolder lock(toJS(m_context));
+    JSC::APIEntryShim entryShim(toJS(m_context));
     return [m_wrapperMap objcWrapperForJSValueRef:value];
 }
 
diff --git a/Source/JavaScriptCore/API/JSContextRef.cpp b/Source/JavaScriptCore/API/JSContextRef.cpp
index 3e62393..bb261f0 100644
@@ -67,16 +67,10 @@ JSContextGroupRef JSContextGroupRetain(JSContextGroupRef group)
 
 void JSContextGroupRelease(JSContextGroupRef group)
 {
-    IdentifierTable* savedIdentifierTable;
     VM& vm = *toJS(group);
 
-    {
-        JSLockHolder lock(vm);
-        savedIdentifierTable = wtfThreadData().setCurrentIdentifierTable(vm.identifierTable);
-        vm.deref();
-    }
-
-    wtfThreadData().setCurrentIdentifierTable(savedIdentifierTable);
+    APIEntryShim entryShim(&vm);
+    vm.deref();
 }
 
 static bool internalScriptTimeoutCallback(ExecState* exec, void* callbackPtr, void* callbackData)
@@ -164,7 +158,7 @@ void JSGlobalContextRelease(JSGlobalContextRef ctx)
     IdentifierTable* savedIdentifierTable;
     ExecState* exec = toJS(ctx);
     {
-        JSLockHolder lock(exec);
+        APIEntryShim entryShim(exec);
 
         VM& vm = exec->vm();
         savedIdentifierTable = wtfThreadData().setCurrentIdentifierTable(vm.identifierTable);
diff --git a/Source/JavaScriptCore/API/JSObjectRef.cpp b/Source/JavaScriptCore/API/JSObjectRef.cpp
index 980eec3..9a5b305 100644
@@ -507,10 +507,11 @@ bool JSObjectDeletePrivateProperty(JSContextRef ctx, JSObjectRef object, JSStrin
     return false;
 }
 
-bool JSObjectIsFunction(JSContextRef, JSObjectRef object)
+bool JSObjectIsFunction(JSContextRef ctx, JSObjectRef object)
 {
     if (!object)
         return false;
+    APIEntryShim entryShim(toJS(ctx));
     CallData callData;
     JSCell* cell = toJS(object);
     return cell->methodTable()->getCallData(cell, callData) != CallTypeNone;
@@ -606,12 +607,12 @@ JSPropertyNameArrayRef JSObjectCopyPropertyNames(JSContextRef ctx, JSObjectRef o
         ASSERT_NOT_REACHED();
         return 0;
     }
-    JSObject* jsObject = toJS(object);
     ExecState* exec = toJS(ctx);
     APIEntryShim entryShim(exec);
 
     VM* vm = &exec->vm();
 
+    JSObject* jsObject = toJS(object);
     JSPropertyNameArrayRef propertyNames = new OpaqueJSPropertyNameArray(vm);
     PropertyNameArray array(vm);
     jsObject->methodTable()->getPropertyNames(jsObject, exec, array, ExcludeDontEnumProperties);
diff --git a/Source/JavaScriptCore/API/JSValue.mm b/Source/JavaScriptCore/API/JSValue.mm
index 66ea7d8..ef1ab4d 100644
@@ -698,6 +698,8 @@ static id containerValueToObject(JSGlobalContextRef context, JSContainerConverto
             ASSERT([current.objc isKindOfClass:[NSMutableDictionary class]]);
             NSMutableDictionary *dictionary = (NSMutableDictionary *)current.objc;
 
+            JSC::APIEntryShim entryShim(toJS(context));
+
             JSPropertyNameArrayRef propertyNameArray = JSObjectCopyPropertyNames(context, js);
             size_t length = JSPropertyNameArrayGetCount(propertyNameArray);
 
diff --git a/Source/JavaScriptCore/API/JSWrapperMap.mm b/Source/JavaScriptCore/API/JSWrapperMap.mm
index aa9e8a5..c25f8ac 100644
@@ -625,6 +625,7 @@ id tryUnwrapObjcObject(JSGlobalContextRef context, JSValueRef value)
     JSValueRef exception = 0;
     JSObjectRef object = JSValueToObject(context, value, &exception);
     ASSERT(!exception);
+    JSC::APIEntryShim entryShim(toJS(context));
     if (toJS(object)->inherits(JSC::JSCallbackObject<JSC::JSAPIWrapperObject>::info()))
         return (id)JSC::jsCast<JSC::JSAPIWrapperObject*>(toJS(object))->wrappedObject();
     if (id target = tryUnwrapConstructor(object))
diff --git a/Source/JavaScriptCore/CMakeLists.txt b/Source/JavaScriptCore/CMakeLists.txt
index 0db0cdc..01f1d66 100644
@@ -472,6 +472,7 @@ set(JavaScriptCore_SOURCES
     runtime/StringRecursionChecker.cpp
     runtime/Structure.cpp
     runtime/StructureChain.cpp
+    runtime/StructureIDTable.cpp
     runtime/StructureRareData.cpp
     runtime/SymbolTable.cpp
     runtime/TestRunnerUtils.cpp
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 8412025..06cab3c 100644
@@ -1,3 +1,560 @@
+2014-02-25  Mark Hahnenberg  <mhahnenberg@apple.com>
+
+        Make JSCells have 32-bit Structure pointers
+        https://bugs.webkit.org/show_bug.cgi?id=123195
+
+        Reviewed by Filip Pizlo.
+
+        This patch changes JSCells such that they no longer have a full 64-bit Structure
+        pointer in their header. Instead they now have a 32-bit index into
+        a per-VM table of Structure pointers. 32-bit platforms still use normal Structure
+        pointers.
+
+        This change frees up an additional 32 bits in our object headers. We use this
+        extra space to store the object's indexing type, its JSType, various type flags,
+        and garbage collection data (e.g. the mark bit). Because this inline type
+        information is now faster to read, it pays for the slowdown incurred by the
+        extra indirection through the StructureIDTable.
+
+        This patch also threads a reference to the current VM through more of the C++ runtime
+        to offset the cost of having to look up the VM to get the actual Structure pointer.
+
+        * API/JSContext.mm:
+        (-[JSContext setException:]):
+        (-[JSContext wrapperForObjCObject:]):
+        (-[JSContext wrapperForJSObject:]):
+        * API/JSContextRef.cpp:
+        (JSContextGroupRelease):
+        (JSGlobalContextRelease):
+        * API/JSObjectRef.cpp:
+        (JSObjectIsFunction):
+        (JSObjectCopyPropertyNames):
+        * API/JSValue.mm:
+        (containerValueToObject):
+        * API/JSWrapperMap.mm:
+        (tryUnwrapObjcObject):
+        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
+        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * assembler/AbstractMacroAssembler.h:
+        * assembler/MacroAssembler.h:
+        (JSC::MacroAssembler::patchableBranch32WithPatch):
+        (JSC::MacroAssembler::patchableBranch32):
+        * assembler/MacroAssemblerARM64.h:
+        (JSC::MacroAssemblerARM64::branchPtrWithPatch):
+        (JSC::MacroAssemblerARM64::patchableBranch32WithPatch):
+        (JSC::MacroAssemblerARM64::canJumpReplacePatchableBranch32WithPatch):
+        (JSC::MacroAssemblerARM64::startOfPatchableBranch32WithPatchOnAddress):
+        (JSC::MacroAssemblerARM64::revertJumpReplacementToPatchableBranch32WithPatch):
+        * assembler/MacroAssemblerARMv7.h:
+        (JSC::MacroAssemblerARMv7::store8):
+        (JSC::MacroAssemblerARMv7::branch32WithPatch):
+        (JSC::MacroAssemblerARMv7::patchableBranch32WithPatch):
+        (JSC::MacroAssemblerARMv7::canJumpReplacePatchableBranch32WithPatch):
+        (JSC::MacroAssemblerARMv7::startOfPatchableBranch32WithPatchOnAddress):
+        (JSC::MacroAssemblerARMv7::revertJumpReplacementToPatchableBranch32WithPatch):
+        * assembler/MacroAssemblerX86.h:
+        (JSC::MacroAssemblerX86::branch32WithPatch):
+        (JSC::MacroAssemblerX86::canJumpReplacePatchableBranch32WithPatch):
+        (JSC::MacroAssemblerX86::startOfPatchableBranch32WithPatchOnAddress):
+        (JSC::MacroAssemblerX86::revertJumpReplacementToPatchableBranch32WithPatch):
+        * assembler/MacroAssemblerX86_64.h:
+        (JSC::MacroAssemblerX86_64::store32):
+        (JSC::MacroAssemblerX86_64::moveWithPatch):
+        (JSC::MacroAssemblerX86_64::branch32WithPatch):
+        (JSC::MacroAssemblerX86_64::canJumpReplacePatchableBranch32WithPatch):
+        (JSC::MacroAssemblerX86_64::startOfBranch32WithPatchOnRegister):
+        (JSC::MacroAssemblerX86_64::startOfPatchableBranch32WithPatchOnAddress):
+        (JSC::MacroAssemblerX86_64::revertJumpReplacementToPatchableBranch32WithPatch):
+        * assembler/RepatchBuffer.h:
+        (JSC::RepatchBuffer::startOfPatchableBranch32WithPatchOnAddress):
+        (JSC::RepatchBuffer::revertJumpReplacementToPatchableBranch32WithPatch):
+        * assembler/X86Assembler.h:
+        (JSC::X86Assembler::revertJumpTo_movq_i64r):
+        (JSC::X86Assembler::revertJumpTo_movl_i32r):
+        * bytecode/ArrayProfile.cpp:
+        (JSC::ArrayProfile::computeUpdatedPrediction):
+        * bytecode/ArrayProfile.h:
+        (JSC::ArrayProfile::ArrayProfile):
+        (JSC::ArrayProfile::addressOfLastSeenStructureID):
+        (JSC::ArrayProfile::observeStructure):
+        * bytecode/CodeBlock.h:
+        (JSC::CodeBlock::heap):
+        * bytecode/UnlinkedCodeBlock.h:
+        * debugger/Debugger.h:
+        * dfg/DFGAbstractHeap.h:
+        * dfg/DFGArrayifySlowPathGenerator.h:
+        * dfg/DFGClobberize.h:
+        (JSC::DFG::clobberize):
+        * dfg/DFGJITCompiler.h:
+        (JSC::DFG::JITCompiler::branchWeakStructure):
+        (JSC::DFG::JITCompiler::branchStructurePtr):
+        * dfg/DFGOSRExitCompiler32_64.cpp:
+        (JSC::DFG::OSRExitCompiler::compileExit):
+        * dfg/DFGOSRExitCompiler64.cpp:
+        (JSC::DFG::OSRExitCompiler::compileExit):
+        * dfg/DFGOSRExitCompilerCommon.cpp:
+        (JSC::DFG::osrWriteBarrier):
+        (JSC::DFG::adjustAndJumpToTarget):
+        * dfg/DFGOperations.cpp:
+        (JSC::DFG::putByVal):
+        * dfg/DFGSpeculativeJIT.cpp:
+        (JSC::DFG::SpeculativeJIT::checkArray):
+        (JSC::DFG::SpeculativeJIT::arrayify):
+        (JSC::DFG::SpeculativeJIT::compilePeepHoleObjectEquality):
+        (JSC::DFG::SpeculativeJIT::compileInstanceOfForObject):
+        (JSC::DFG::SpeculativeJIT::compileInstanceOf):
+        (JSC::DFG::SpeculativeJIT::compileToStringOnCell):
+        (JSC::DFG::SpeculativeJIT::speculateObject):
+        (JSC::DFG::SpeculativeJIT::speculateFinalObject):
+        (JSC::DFG::SpeculativeJIT::speculateObjectOrOther):
+        (JSC::DFG::SpeculativeJIT::speculateString):
+        (JSC::DFG::SpeculativeJIT::speculateStringObject):
+        (JSC::DFG::SpeculativeJIT::speculateStringOrStringObject):
+        (JSC::DFG::SpeculativeJIT::emitSwitchChar):
+        (JSC::DFG::SpeculativeJIT::emitSwitchString):
+        (JSC::DFG::SpeculativeJIT::genericWriteBarrier):
+        (JSC::DFG::SpeculativeJIT::writeBarrier):
+        * dfg/DFGSpeculativeJIT.h:
+        (JSC::DFG::SpeculativeJIT::emitAllocateJSCell):
+        (JSC::DFG::SpeculativeJIT::speculateStringObjectForStructure):
+        * dfg/DFGSpeculativeJIT32_64.cpp:
+        (JSC::DFG::SpeculativeJIT::nonSpeculativeNonPeepholeCompareNull):
+        (JSC::DFG::SpeculativeJIT::nonSpeculativePeepholeBranchNull):
+        (JSC::DFG::SpeculativeJIT::compileObjectEquality):
+        (JSC::DFG::SpeculativeJIT::compileObjectToObjectOrOtherEquality):
+        (JSC::DFG::SpeculativeJIT::compilePeepHoleObjectToObjectOrOtherEquality):
+        (JSC::DFG::SpeculativeJIT::compileObjectOrOtherLogicalNot):
+        (JSC::DFG::SpeculativeJIT::emitObjectOrOtherBranch):
+        (JSC::DFG::SpeculativeJIT::compile):
+        (JSC::DFG::SpeculativeJIT::writeBarrier):
+        * dfg/DFGSpeculativeJIT64.cpp:
+        (JSC::DFG::SpeculativeJIT::nonSpeculativeNonPeepholeCompareNull):
+        (JSC::DFG::SpeculativeJIT::nonSpeculativePeepholeBranchNull):
+        (JSC::DFG::SpeculativeJIT::compileObjectEquality):
+        (JSC::DFG::SpeculativeJIT::compileObjectToObjectOrOtherEquality):
+        (JSC::DFG::SpeculativeJIT::compilePeepHoleObjectToObjectOrOtherEquality):
+        (JSC::DFG::SpeculativeJIT::compileObjectOrOtherLogicalNot):
+        (JSC::DFG::SpeculativeJIT::emitObjectOrOtherBranch):
+        (JSC::DFG::SpeculativeJIT::compile):
+        (JSC::DFG::SpeculativeJIT::writeBarrier):
+        * dfg/DFGWorklist.cpp:
+        * ftl/FTLAbstractHeapRepository.cpp:
+        (JSC::FTL::AbstractHeapRepository::AbstractHeapRepository):
+        * ftl/FTLAbstractHeapRepository.h:
+        * ftl/FTLLowerDFGToLLVM.cpp:
+        (JSC::FTL::LowerDFGToLLVM::compileCheckStructure):
+        (JSC::FTL::LowerDFGToLLVM::compileArrayifyToStructure):
+        (JSC::FTL::LowerDFGToLLVM::compilePutStructure):
+        (JSC::FTL::LowerDFGToLLVM::compileToString):
+        (JSC::FTL::LowerDFGToLLVM::compileMultiGetByOffset):
+        (JSC::FTL::LowerDFGToLLVM::compileMultiPutByOffset):
+        (JSC::FTL::LowerDFGToLLVM::speculateTruthyObject):
+        (JSC::FTL::LowerDFGToLLVM::allocateCell):
+        (JSC::FTL::LowerDFGToLLVM::equalNullOrUndefined):
+        (JSC::FTL::LowerDFGToLLVM::isObject):
+        (JSC::FTL::LowerDFGToLLVM::isString):
+        (JSC::FTL::LowerDFGToLLVM::isArrayType):
+        (JSC::FTL::LowerDFGToLLVM::hasClassInfo):
+        (JSC::FTL::LowerDFGToLLVM::isType):
+        (JSC::FTL::LowerDFGToLLVM::speculateStringOrStringObject):
+        (JSC::FTL::LowerDFGToLLVM::speculateStringObjectForCell):
+        (JSC::FTL::LowerDFGToLLVM::speculateStringObjectForStructureID):
+        (JSC::FTL::LowerDFGToLLVM::speculateNonNullObject):
+        (JSC::FTL::LowerDFGToLLVM::loadMarkByte):
+        (JSC::FTL::LowerDFGToLLVM::loadStructure):
+        (JSC::FTL::LowerDFGToLLVM::weakStructure):
+        * ftl/FTLOSRExitCompiler.cpp:
+        (JSC::FTL::compileStub):
+        * ftl/FTLOutput.h:
+        (JSC::FTL::Output::store8):
+        * heap/GCAssertions.h:
+        * heap/Heap.cpp:
+        (JSC::Heap::getConservativeRegisterRoots):
+        (JSC::Heap::collect):
+        (JSC::Heap::writeBarrier):
+        * heap/Heap.h:
+        (JSC::Heap::structureIDTable):
+        * heap/MarkedSpace.h:
+        (JSC::MarkedSpace::forEachBlock):
+        * heap/SlotVisitorInlines.h:
+        (JSC::SlotVisitor::internalAppend):
+        * jit/AssemblyHelpers.h:
+        (JSC::AssemblyHelpers::branchIfCellNotObject):
+        (JSC::AssemblyHelpers::genericWriteBarrier):
+        (JSC::AssemblyHelpers::emitLoadStructure):
+        (JSC::AssemblyHelpers::emitStoreStructureWithTypeInfo):
+        * jit/JIT.h:
+        * jit/JITCall.cpp:
+        (JSC::JIT::compileOpCall):
+        (JSC::JIT::privateCompileClosureCall):
+        * jit/JITCall32_64.cpp:
+        (JSC::JIT::emit_op_ret_object_or_this):
+        (JSC::JIT::compileOpCall):
+        (JSC::JIT::privateCompileClosureCall):
+        * jit/JITInlineCacheGenerator.cpp:
+        (JSC::JITByIdGenerator::generateFastPathChecks):
+        * jit/JITInlineCacheGenerator.h:
+        * jit/JITInlines.h:
+        (JSC::JIT::emitLoadCharacterString):
+        (JSC::JIT::checkStructure):
+        (JSC::JIT::emitJumpIfCellNotObject):
+        (JSC::JIT::emitAllocateJSObject):
+        (JSC::JIT::emitArrayProfilingSiteWithCell):
+        (JSC::JIT::emitArrayProfilingSiteForBytecodeIndexWithCell):
+        (JSC::JIT::branchStructure):
+        (JSC::branchStructure):
+        * jit/JITOpcodes.cpp:
+        (JSC::JIT::emit_op_check_has_instance):
+        (JSC::JIT::emit_op_instanceof):
+        (JSC::JIT::emit_op_is_undefined):
+        (JSC::JIT::emit_op_is_string):
+        (JSC::JIT::emit_op_ret_object_or_this):
+        (JSC::JIT::emit_op_to_primitive):
+        (JSC::JIT::emit_op_jeq_null):
+        (JSC::JIT::emit_op_jneq_null):
+        (JSC::JIT::emit_op_get_pnames):
+        (JSC::JIT::emit_op_next_pname):
+        (JSC::JIT::emit_op_eq_null):
+        (JSC::JIT::emit_op_neq_null):
+        (JSC::JIT::emit_op_to_this):
+        (JSC::JIT::emitSlow_op_to_this):
+        * jit/JITOpcodes32_64.cpp:
+        (JSC::JIT::emit_op_check_has_instance):
+        (JSC::JIT::emit_op_instanceof):
+        (JSC::JIT::emit_op_is_undefined):
+        (JSC::JIT::emit_op_is_string):
+        (JSC::JIT::emit_op_to_primitive):
+        (JSC::JIT::emit_op_jeq_null):
+        (JSC::JIT::emit_op_jneq_null):
+        (JSC::JIT::emitSlow_op_eq):
+        (JSC::JIT::emitSlow_op_neq):
+        (JSC::JIT::compileOpStrictEq):
+        (JSC::JIT::emit_op_eq_null):
+        (JSC::JIT::emit_op_neq_null):
+        (JSC::JIT::emit_op_get_pnames):
+        (JSC::JIT::emit_op_next_pname):
+        (JSC::JIT::emit_op_to_this):
+        * jit/JITOperations.cpp:
+        * jit/JITPropertyAccess.cpp:
+        (JSC::JIT::stringGetByValStubGenerator):
+        (JSC::JIT::emit_op_get_by_val):
+        (JSC::JIT::emitSlow_op_get_by_val):
+        (JSC::JIT::emit_op_get_by_pname):
+        (JSC::JIT::emit_op_put_by_val):
+        (JSC::JIT::emit_op_get_by_id):
+        (JSC::JIT::emitLoadWithStructureCheck):
+        (JSC::JIT::emitSlow_op_get_from_scope):
+        (JSC::JIT::emitSlow_op_put_to_scope):
+        (JSC::JIT::checkMarkWord):
+        (JSC::JIT::emitWriteBarrier):
+        (JSC::JIT::addStructureTransitionCheck):
+        (JSC::JIT::emitIntTypedArrayGetByVal):
+        (JSC::JIT::emitFloatTypedArrayGetByVal):
+        (JSC::JIT::emitIntTypedArrayPutByVal):
+        (JSC::JIT::emitFloatTypedArrayPutByVal):
+        * jit/JITPropertyAccess32_64.cpp:
+        (JSC::JIT::stringGetByValStubGenerator):
+        (JSC::JIT::emit_op_get_by_val):
+        (JSC::JIT::emitSlow_op_get_by_val):
+        (JSC::JIT::emit_op_put_by_val):
+        (JSC::JIT::emit_op_get_by_id):
+        (JSC::JIT::emit_op_get_by_pname):
+        (JSC::JIT::emitLoadWithStructureCheck):
+        * jit/JSInterfaceJIT.h:
+        (JSC::JSInterfaceJIT::emitJumpIfNotType):
+        * jit/Repatch.cpp:
+        (JSC::repatchByIdSelfAccess):
+        (JSC::addStructureTransitionCheck):
+        (JSC::replaceWithJump):
+        (JSC::generateProtoChainAccessStub):
+        (JSC::tryCacheGetByID):
+        (JSC::tryBuildGetByIDList):
+        (JSC::writeBarrier):
+        (JSC::emitPutReplaceStub):
+        (JSC::emitPutTransitionStub):
+        (JSC::tryBuildPutByIdList):
+        (JSC::tryRepatchIn):
+        (JSC::linkClosureCall):
+        (JSC::resetGetByID):
+        (JSC::resetPutByID):
+        * jit/SpecializedThunkJIT.h:
+        (JSC::SpecializedThunkJIT::loadJSStringArgument):
+        (JSC::SpecializedThunkJIT::loadArgumentWithSpecificClass):
+        * jit/ThunkGenerators.cpp:
+        (JSC::virtualForThunkGenerator):
+        (JSC::arrayIteratorNextThunkGenerator):
+        * jit/UnusedPointer.h:
+        * llint/LowLevelInterpreter.asm:
+        * llint/LowLevelInterpreter32_64.asm:
+        * llint/LowLevelInterpreter64.asm:
+        * runtime/Arguments.cpp:
+        (JSC::Arguments::createStrictModeCallerIfNecessary):
+        (JSC::Arguments::createStrictModeCalleeIfNecessary):
+        * runtime/Arguments.h:
+        (JSC::Arguments::createStructure):
+        * runtime/ArrayPrototype.cpp:
+        (JSC::shift):
+        (JSC::unshift):
+        (JSC::arrayProtoFuncToString):
+        (JSC::arrayProtoFuncPop):
+        (JSC::arrayProtoFuncReverse):
+        (JSC::performSlowSort):
+        (JSC::arrayProtoFuncSort):
+        (JSC::arrayProtoFuncSplice):
+        (JSC::arrayProtoFuncUnShift):
+        * runtime/CommonSlowPaths.cpp:
+        (JSC::SLOW_PATH_DECL):
+        * runtime/Executable.h:
+        (JSC::ExecutableBase::isFunctionExecutable):
+        (JSC::ExecutableBase::clearCodeVirtual):
+        (JSC::ScriptExecutable::unlinkCalls):
+        * runtime/GetterSetter.cpp:
+        (JSC::callGetter):
+        (JSC::callSetter):
+        * runtime/InitializeThreading.cpp:
+        * runtime/JSArray.cpp:
+        (JSC::JSArray::unshiftCountSlowCase):
+        (JSC::JSArray::setLength):
+        (JSC::JSArray::pop):
+        (JSC::JSArray::push):
+        (JSC::JSArray::shiftCountWithArrayStorage):
+        (JSC::JSArray::shiftCountWithAnyIndexingType):
+        (JSC::JSArray::unshiftCountWithArrayStorage):
+        (JSC::JSArray::unshiftCountWithAnyIndexingType):
+        (JSC::JSArray::sortNumericVector):
+        (JSC::JSArray::sortNumeric):
+        (JSC::JSArray::sortCompactedVector):
+        (JSC::JSArray::sort):
+        (JSC::JSArray::sortVector):
+        (JSC::JSArray::fillArgList):
+        (JSC::JSArray::copyToArguments):
+        (JSC::JSArray::compactForSorting):
+        * runtime/JSCJSValueInlines.h:
+        (JSC::JSValue::toThis):
+        (JSC::JSValue::put):
+        (JSC::JSValue::putByIndex):
+        (JSC::JSValue::equalSlowCaseInline):
+        * runtime/JSCell.cpp:
+        (JSC::JSCell::put):
+        (JSC::JSCell::putByIndex):
+        (JSC::JSCell::deleteProperty):
+        (JSC::JSCell::deletePropertyByIndex):
+        * runtime/JSCell.h:
+        (JSC::JSCell::clearStructure):
+        (JSC::JSCell::mark):
+        (JSC::JSCell::isMarked):
+        (JSC::JSCell::structureIDOffset):
+        (JSC::JSCell::typeInfoFlagsOffset):
+        (JSC::JSCell::typeInfoTypeOffset):
+        (JSC::JSCell::indexingTypeOffset):
+        (JSC::JSCell::gcDataOffset):
+        * runtime/JSCellInlines.h:
+        (JSC::JSCell::JSCell):
+        (JSC::JSCell::finishCreation):
+        (JSC::JSCell::type):
+        (JSC::JSCell::indexingType):
+        (JSC::JSCell::structure):
+        (JSC::JSCell::visitChildren):
+        (JSC::JSCell::isObject):
+        (JSC::JSCell::isString):
+        (JSC::JSCell::isGetterSetter):
+        (JSC::JSCell::isProxy):
+        (JSC::JSCell::isAPIValueWrapper):
+        (JSC::JSCell::setStructure):
+        (JSC::JSCell::methodTable):
+        (JSC::Heap::writeBarrier):
+        * runtime/JSDataView.cpp:
+        (JSC::JSDataView::createStructure):
+        * runtime/JSDestructibleObject.h:
+        (JSC::JSCell::classInfo):
+        * runtime/JSFunction.cpp:
+        (JSC::JSFunction::getOwnNonIndexPropertyNames):
+        (JSC::JSFunction::put):
+        (JSC::JSFunction::defineOwnProperty):
+        * runtime/JSGenericTypedArrayView.h:
+        (JSC::JSGenericTypedArrayView::createStructure):
+        * runtime/JSObject.cpp:
+        (JSC::getCallableObjectSlow):
+        (JSC::JSObject::copyButterfly):
+        (JSC::JSObject::visitButterfly):
+        (JSC::JSFinalObject::visitChildren):
+        (JSC::JSObject::getOwnPropertySlotByIndex):
+        (JSC::JSObject::put):
+        (JSC::JSObject::putByIndex):
+        (JSC::JSObject::enterDictionaryIndexingModeWhenArrayStorageAlreadyExists):
+        (JSC::JSObject::enterDictionaryIndexingMode):
+        (JSC::JSObject::notifyPresenceOfIndexedAccessors):
+        (JSC::JSObject::createInitialIndexedStorage):
+        (JSC::JSObject::createInitialUndecided):
+        (JSC::JSObject::createInitialInt32):
+        (JSC::JSObject::createInitialDouble):
+        (JSC::JSObject::createInitialContiguous):
+        (JSC::JSObject::createArrayStorage):
+        (JSC::JSObject::convertUndecidedToInt32):
+        (JSC::JSObject::convertUndecidedToDouble):
+        (JSC::JSObject::convertUndecidedToContiguous):
+        (JSC::JSObject::constructConvertedArrayStorageWithoutCopyingElements):
+        (JSC::JSObject::convertUndecidedToArrayStorage):
+        (JSC::JSObject::convertInt32ToDouble):
+        (JSC::JSObject::convertInt32ToContiguous):
+        (JSC::JSObject::convertInt32ToArrayStorage):
+        (JSC::JSObject::genericConvertDoubleToContiguous):
+        (JSC::JSObject::convertDoubleToArrayStorage):
+        (JSC::JSObject::convertContiguousToArrayStorage):
+        (JSC::JSObject::ensureInt32Slow):
+        (JSC::JSObject::ensureDoubleSlow):
+        (JSC::JSObject::ensureContiguousSlow):
+        (JSC::JSObject::ensureArrayStorageSlow):
+        (JSC::JSObject::ensureArrayStorageExistsAndEnterDictionaryIndexingMode):
+        (JSC::JSObject::switchToSlowPutArrayStorage):
+        (JSC::JSObject::setPrototype):
+        (JSC::JSObject::setPrototypeWithCycleCheck):
+        (JSC::JSObject::putDirectNonIndexAccessor):
+        (JSC::JSObject::deleteProperty):
+        (JSC::JSObject::hasOwnProperty):
+        (JSC::JSObject::deletePropertyByIndex):
+        (JSC::JSObject::getPrimitiveNumber):
+        (JSC::JSObject::hasInstance):
+        (JSC::JSObject::getPropertySpecificValue):
+        (JSC::JSObject::getPropertyNames):
+        (JSC::JSObject::getOwnPropertyNames):
+        (JSC::JSObject::getOwnNonIndexPropertyNames):
+        (JSC::JSObject::seal):
+        (JSC::JSObject::freeze):
+        (JSC::JSObject::preventExtensions):
+        (JSC::JSObject::reifyStaticFunctionsForDelete):
+        (JSC::JSObject::removeDirect):
+        (JSC::JSObject::putByIndexBeyondVectorLengthWithoutAttributes):
+        (JSC::JSObject::putByIndexBeyondVectorLength):
+        (JSC::JSObject::putDirectIndexBeyondVectorLengthWithArrayStorage):
+        (JSC::JSObject::putDirectIndexBeyondVectorLength):
+        (JSC::JSObject::getNewVectorLength):
+        (JSC::JSObject::countElements):
+        (JSC::JSObject::increaseVectorLength):
+        (JSC::JSObject::ensureLengthSlow):
+        (JSC::JSObject::growOutOfLineStorage):
+        (JSC::JSObject::getOwnPropertyDescriptor):
+        (JSC::putDescriptor):
+        (JSC::JSObject::defineOwnNonIndexProperty):
+        * runtime/JSObject.h:
+        (JSC::getJSFunction):
+        (JSC::JSObject::getArrayLength):
+        (JSC::JSObject::getVectorLength):
+        (JSC::JSObject::putByIndexInline):
+        (JSC::JSObject::canGetIndexQuickly):
+        (JSC::JSObject::getIndexQuickly):
+        (JSC::JSObject::tryGetIndexQuickly):
+        (JSC::JSObject::getDirectIndex):
+        (JSC::JSObject::canSetIndexQuickly):
+        (JSC::JSObject::canSetIndexQuicklyForPutDirect):
+        (JSC::JSObject::setIndexQuickly):
+        (JSC::JSObject::initializeIndex):
+        (JSC::JSObject::hasSparseMap):
+        (JSC::JSObject::inSparseIndexingMode):
+        (JSC::JSObject::getDirect):
+        (JSC::JSObject::getDirectOffset):
+        (JSC::JSObject::isSealed):
+        (JSC::JSObject::isFrozen):
+        (JSC::JSObject::flattenDictionaryObject):
+        (JSC::JSObject::ensureInt32):
+        (JSC::JSObject::ensureDouble):
+        (JSC::JSObject::ensureContiguous):
+        (JSC::JSObject::rageEnsureContiguous):
+        (JSC::JSObject::ensureArrayStorage):
+        (JSC::JSObject::arrayStorage):
+        (JSC::JSObject::arrayStorageOrNull):
+        (JSC::JSObject::ensureLength):
+        (JSC::JSObject::currentIndexingData):
+        (JSC::JSObject::getHolyIndexQuickly):
+        (JSC::JSObject::currentRelevantLength):
+        (JSC::JSObject::isGlobalObject):
+        (JSC::JSObject::isVariableObject):
+        (JSC::JSObject::isStaticScopeObject):
+        (JSC::JSObject::isNameScopeObject):
+        (JSC::JSObject::isActivationObject):
+        (JSC::JSObject::isErrorInstance):
+        (JSC::JSObject::inlineGetOwnPropertySlot):
+        (JSC::JSObject::fastGetOwnPropertySlot):
+        (JSC::JSObject::getPropertySlot):
+        (JSC::JSObject::putDirectInternal):
+        (JSC::JSObject::setStructureAndReallocateStorageIfNecessary):
+        * runtime/JSPropertyNameIterator.h:
+        (JSC::JSPropertyNameIterator::createStructure):
+        * runtime/JSProxy.cpp:
+        (JSC::JSProxy::getOwnPropertySlot):
+        (JSC::JSProxy::getOwnPropertySlotByIndex):
+        (JSC::JSProxy::put):
+        (JSC::JSProxy::putByIndex):
+        (JSC::JSProxy::defineOwnProperty):
+        (JSC::JSProxy::deleteProperty):
+        (JSC::JSProxy::deletePropertyByIndex):
+        (JSC::JSProxy::getPropertyNames):
+        (JSC::JSProxy::getOwnPropertyNames):
+        * runtime/JSScope.cpp:
+        (JSC::JSScope::objectAtScope):
+        * runtime/JSString.h:
+        (JSC::JSString::createStructure):
+        (JSC::isJSString):
+        * runtime/JSType.h:
+        * runtime/JSTypeInfo.h:
+        (JSC::TypeInfo::TypeInfo):
+        (JSC::TypeInfo::isObject):
+        (JSC::TypeInfo::structureIsImmortal):
+        (JSC::TypeInfo::zeroedGCDataOffset):
+        (JSC::TypeInfo::inlineTypeFlags):
+        * runtime/MapData.h:
+        * runtime/ObjectConstructor.cpp:
+        (JSC::objectConstructorGetOwnPropertyNames):
+        (JSC::objectConstructorKeys):
+        (JSC::objectConstructorDefineProperty):
+        (JSC::defineProperties):
+        (JSC::objectConstructorSeal):
+        (JSC::objectConstructorFreeze):
+        (JSC::objectConstructorIsSealed):
+        (JSC::objectConstructorIsFrozen):
+        * runtime/ObjectPrototype.cpp:
+        (JSC::objectProtoFuncDefineGetter):
+        (JSC::objectProtoFuncDefineSetter):
+        (JSC::objectProtoFuncToString):
+        * runtime/Operations.cpp:
+        (JSC::jsTypeStringForValue):
+        (JSC::jsIsObjectType):
+        * runtime/Operations.h:
+        (JSC::normalizePrototypeChainForChainAccess):
+        (JSC::normalizePrototypeChain):
+        * runtime/PropertyMapHashTable.h:
+        (JSC::PropertyTable::createStructure):
+        * runtime/RegExp.h:
+        (JSC::RegExp::createStructure):
+        * runtime/SparseArrayValueMap.h:
+        * runtime/Structure.cpp:
+        (JSC::Structure::Structure):
+        (JSC::Structure::~Structure):
+        (JSC::Structure::prototypeChainMayInterceptStoreTo):
+        * runtime/Structure.h:
+        (JSC::Structure::id):
+        (JSC::Structure::idBlob):
+        (JSC::Structure::objectInitializationFields):
+        (JSC::Structure::structureIDOffset):
+        * runtime/StructureChain.h:
+        (JSC::StructureChain::createStructure):
+        * runtime/StructureIDTable.cpp: Added.
+        (JSC::StructureIDTable::StructureIDTable):
+        (JSC::StructureIDTable::~StructureIDTable):
+        (JSC::StructureIDTable::resize):
+        (JSC::StructureIDTable::flushOldTables):
+        (JSC::StructureIDTable::allocateID):
+        (JSC::StructureIDTable::deallocateID):
+        * runtime/StructureIDTable.h: Added.
+        (JSC::StructureIDTable::base):
+        (JSC::StructureIDTable::get):
+        * runtime/SymbolTable.h:
+        * runtime/TypedArrayType.cpp:
+        (JSC::typeForTypedArrayType):
+        * runtime/TypedArrayType.h:
+        * runtime/WeakMapData.h:
+
 2014-02-26  Mark Hahnenberg  <mhahnenberg@apple.com>
 
         Unconditional logging in compileFTLOSRExit
diff --git a/Source/JavaScriptCore/GNUmakefile.list.am b/Source/JavaScriptCore/GNUmakefile.list.am
index baa0ec5..426fe3f 100644
@@ -1228,6 +1228,9 @@ javascriptcore_sources += \
        Source/JavaScriptCore/runtime/StructureChain.h \
        Source/JavaScriptCore/runtime/Structure.cpp \
        Source/JavaScriptCore/runtime/Structure.h \
+       Source/JavaScriptCore/runtime/StructureIDBlob.h \
+       Source/JavaScriptCore/runtime/StructureIDTable.cpp \
+       Source/JavaScriptCore/runtime/StructureIDTable.h \
        Source/JavaScriptCore/runtime/StructureInlines.h \
        Source/JavaScriptCore/runtime/StructureRareData.cpp \
        Source/JavaScriptCore/runtime/StructureRareData.h \
diff --git a/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj b/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
index 261cba9..6d613c8 100644
     <ClCompile Include="..\runtime\StringRecursionChecker.cpp" />
     <ClCompile Include="..\runtime\Structure.cpp" />
     <ClCompile Include="..\runtime\StructureChain.cpp" />
+    <ClCompile Include="..\runtime\StructureIDTable.cpp" />
     <ClCompile Include="..\runtime\StructureRareData.cpp" />
     <ClCompile Include="..\runtime\SymbolTable.cpp" />
     <ClCompile Include="..\runtime\TestRunnerUtils.cpp" />
     <ClInclude Include="..\runtime\StringRecursionChecker.h" />
     <ClInclude Include="..\runtime\Structure.h" />
     <ClInclude Include="..\runtime\StructureChain.h" />
+    <ClInclude Include="..\runtime\StructureIDBlob.h" />
+    <ClInclude Include="..\runtime\StructureIDTable.h" />
     <ClInclude Include="..\runtime\StructureRareData.h" />
     <ClInclude Include="..\runtime\StructureRareDataInlines.h" />
     <ClInclude Include="..\runtime\StructureTransitionTable.h" />
   <ImportGroup Label="ExtensionTargets">
     <Import Project="$(VCTargetsPath)\BuildCustomizations\masm.targets" />
   </ImportGroup>
-</Project>
\ No newline at end of file
+</Project>
index a96a560..d5f12e9 100644 (file)
     <ClCompile Include="..\runtime\StructureChain.cpp">
       <Filter>runtime</Filter>
     </ClCompile>
+    <ClCompile Include="..\runtime\StructureIDTable.cpp">
+      <Filter>runtime</Filter>
+    </ClCompile>
     <ClCompile Include="..\runtime\SymbolTable.cpp">
       <Filter>runtime</Filter>
     </ClCompile>
     <ClInclude Include="..\runtime\Structure.h">
       <Filter>runtime</Filter>
     </ClInclude>
+    <ClInclude Include="..\runtime\StructureIDBlobh">
+      <Filter>runtime</Filter>
+    </ClInclude>
     <ClInclude Include="..\runtime\StructureChain.h">
       <Filter>runtime</Filter>
     </ClInclude>
+    <ClInclude Include="..\runtime\StructureIDBlob.h">
+      <Filter>runtime</Filter>
+    </ClInclude>
+    <ClInclude Include="..\runtime\StructureIDTable.h">
+      <Filter>runtime</Filter>
+    </ClInclude>
     <ClInclude Include="..\runtime\StructureTransitionTable.h">
       <Filter>runtime</Filter>
     </ClInclude>
   <ItemGroup>
     <MASM Include="$(ConfigurationBuildDir)\obj$(PlatformArchitecture)\$(ProjectName)\DerivedSources\LowLevelInterpreterWin.asm" />
   </ItemGroup>
-</Project>
\ No newline at end of file
+</Project>
index 293bd01..c156ebe 100644 (file)
                2A68295B1875F80500B6C3E2 /* CopyWriteBarrier.h in Headers */ = {isa = PBXBuildFile; fileRef = 2A68295A1875F80500B6C3E2 /* CopyWriteBarrier.h */; settings = {ATTRIBUTES = (Private, ); }; };
                2A6F462617E959CE00C45C98 /* HeapOperation.h in Headers */ = {isa = PBXBuildFile; fileRef = 2A6F462517E959CE00C45C98 /* HeapOperation.h */; settings = {ATTRIBUTES = (Private, ); }; };
                2A7A58EF1808A4C40020BDF7 /* DeferGC.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 2A7A58EE1808A4C40020BDF7 /* DeferGC.cpp */; };
+               2AAAA31218BD49D100394CC8 /* StructureIDBlob.h in Headers */ = {isa = PBXBuildFile; fileRef = 2AAAA31018BD49D100394CC8 /* StructureIDBlob.h */; settings = {ATTRIBUTES = (Private, ); }; };
                2AAD964A18569417001F93BE /* RecursiveAllocationScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 2AAD964918569417001F93BE /* RecursiveAllocationScope.h */; };
                2AC922BB18A16182003CE0FB /* FTLDWARFDebugLineInfo.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 2AC922B918A16182003CE0FB /* FTLDWARFDebugLineInfo.cpp */; };
                2AC922BC18A16182003CE0FB /* FTLDWARFDebugLineInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 2AC922BA18A16182003CE0FB /* FTLDWARFDebugLineInfo.h */; };
                2ACCF3DE185FE26B0083E2AD /* DFGStoreBarrierElisionPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 2ACCF3DC185FE26B0083E2AD /* DFGStoreBarrierElisionPhase.cpp */; };
                2ACCF3DF185FE26B0083E2AD /* DFGStoreBarrierElisionPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 2ACCF3DD185FE26B0083E2AD /* DFGStoreBarrierElisionPhase.h */; };
                2AD8932B17E3868F00668276 /* HeapIterationScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 2AD8932917E3868F00668276 /* HeapIterationScope.h */; };
+               2AF7382C18BBBF92008A5A37 /* StructureIDTable.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 2AF7382A18BBBF92008A5A37 /* StructureIDTable.cpp */; };
+               2AF7382D18BBBF92008A5A37 /* StructureIDTable.h in Headers */ = {isa = PBXBuildFile; fileRef = 2AF7382B18BBBF92008A5A37 /* StructureIDTable.h */; settings = {ATTRIBUTES = (Private, ); }; };
                371D842D17C98B6E00ECF994 /* libz.dylib in Frameworks */ = {isa = PBXBuildFile; fileRef = 371D842C17C98B6E00ECF994 /* libz.dylib */; };
                41359CF30FDD89AD00206180 /* DateConversion.h in Headers */ = {isa = PBXBuildFile; fileRef = D21202290AD4310C00ED79B6 /* DateConversion.h */; };
                4443AE3316E188D90076F110 /* Foundation.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 51F0EB6105C86C6B00E6DF1B /* Foundation.framework */; };
                2A68295A1875F80500B6C3E2 /* CopyWriteBarrier.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CopyWriteBarrier.h; sourceTree = "<group>"; };
                2A6F462517E959CE00C45C98 /* HeapOperation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HeapOperation.h; sourceTree = "<group>"; };
                2A7A58EE1808A4C40020BDF7 /* DeferGC.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = DeferGC.cpp; sourceTree = "<group>"; };
+               2AAAA31018BD49D100394CC8 /* StructureIDBlob.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StructureIDBlob.h; sourceTree = "<group>"; };
                2AAD964918569417001F93BE /* RecursiveAllocationScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RecursiveAllocationScope.h; sourceTree = "<group>"; };
                2AC922B918A16182003CE0FB /* FTLDWARFDebugLineInfo.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLDWARFDebugLineInfo.cpp; path = ftl/FTLDWARFDebugLineInfo.cpp; sourceTree = "<group>"; };
                2AC922BA18A16182003CE0FB /* FTLDWARFDebugLineInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLDWARFDebugLineInfo.h; path = ftl/FTLDWARFDebugLineInfo.h; sourceTree = "<group>"; };
                2ACCF3DC185FE26B0083E2AD /* DFGStoreBarrierElisionPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGStoreBarrierElisionPhase.cpp; path = dfg/DFGStoreBarrierElisionPhase.cpp; sourceTree = "<group>"; };
                2ACCF3DD185FE26B0083E2AD /* DFGStoreBarrierElisionPhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGStoreBarrierElisionPhase.h; path = dfg/DFGStoreBarrierElisionPhase.h; sourceTree = "<group>"; };
                2AD8932917E3868F00668276 /* HeapIterationScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HeapIterationScope.h; sourceTree = "<group>"; };
+               2AF7382A18BBBF92008A5A37 /* StructureIDTable.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = StructureIDTable.cpp; sourceTree = "<group>"; };
+               2AF7382B18BBBF92008A5A37 /* StructureIDTable.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StructureIDTable.h; sourceTree = "<group>"; };
                371D842C17C98B6E00ECF994 /* libz.dylib */ = {isa = PBXFileReference; lastKnownFileType = "compiled.mach-o.dylib"; name = libz.dylib; path = usr/lib/libz.dylib; sourceTree = SDKROOT; };
                449097EE0F8F81B50076A327 /* FeatureDefines.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; path = FeatureDefines.xcconfig; sourceTree = "<group>"; };
                451539B812DC994500EF7AC4 /* Yarr.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Yarr.h; path = yarr/Yarr.h; sourceTree = "<group>"; };
                7EF6E0BB0EB7A1EC0079AFAF /* runtime */ = {
                        isa = PBXGroup;
                        children = (
+                               2AF7382A18BBBF92008A5A37 /* StructureIDTable.cpp */,
+                               2AF7382B18BBBF92008A5A37 /* StructureIDTable.h */,
                                BCF605110E203EF800B9A64D /* ArgList.cpp */,
                                BCF605120E203EF800B9A64D /* ArgList.h */,
                                BC257DE50E1F51C50016B6C9 /* Arguments.cpp */,
                                1420BE7A10AA6DDB00F455D2 /* WeakRandom.h */,
                                A7DCB77912E3D90500911940 /* WriteBarrier.h */,
                                C2B6D75218A33793004A9301 /* WriteBarrierInlines.h */,
+                               2AAAA31018BD49D100394CC8 /* StructureIDBlob.h */,
                        );
                        path = runtime;
                        sourceTree = "<group>";
                                0F6B1CB91861244C00845D97 /* ArityCheckMode.h in Headers */,
                                A1A009C11831A26E00CF8711 /* ARM64Assembler.h in Headers */,
                                86D3B2C410156BDE002865E7 /* ARMAssembler.h in Headers */,
+                               2AAAA31218BD49D100394CC8 /* StructureIDBlob.h in Headers */,
                                86ADD1450FDDEA980006EEC2 /* ARMv7Assembler.h in Headers */,
                                65C0285D1717966800351E35 /* ARMv7DOpcode.h in Headers */,
                                0F8335B81639C1EA001443B5 /* ArrayAllocationProfile.h in Headers */,
                                0F63945515D07057006A597C /* ArrayProfile.h in Headers */,
                                BC18C3E70E16F5CD00B34460 /* ArrayPrototype.h in Headers */,
                                BC18C5240E16FC8A00B34460 /* ArrayPrototype.lut.h in Headers */,
+                               2AF7382D18BBBF92008A5A37 /* StructureIDTable.h in Headers */,
                                0FB7F39615ED8E4600F167B2 /* ArrayStorage.h in Headers */,
                                9688CB150ED12B4E001D649F /* AssemblerBuffer.h in Headers */,
                                86D3B2C510156BDE002865E7 /* AssemblerBufferWithConstantPool.h in Headers */,
                                A5CEEE14187F3BAD00E55C99 /* InspectorAgent.cpp in Sources */,
                                A593CF86184038CA00BFCE27 /* InspectorAgentRegistry.cpp in Sources */,
                                A593CF7C1840360300BFCE27 /* InspectorBackendDispatcher.cpp in Sources */,
+                               2AF7382C18BBBF92008A5A37 /* StructureIDTable.cpp in Sources */,
                                A5FD0081189B191A00633231 /* InspectorConsoleAgent.cpp in Sources */,
                                A57D23E51890CEBF0031C7FA /* InspectorDebuggerAgent.cpp in Sources */,
                                A532438718568335002ED692 /* InspectorJSBackendDispatchers.cpp in Sources */,
index 7d7ee79..1fd47be 100644 (file)
@@ -434,7 +434,7 @@ public:
 
     // DataLabel32:
     //
-    // A DataLabelPtr is used to refer to a location in the code containing a pointer to be
+    // A DataLabel32 is used to refer to a location in the code containing a 32-bit constant to be
     // patched after the code has been generated.
     class DataLabel32 {
         template<class TemplateAssemblerType>
index bf461e3..69bc11a 100644 (file)
@@ -366,6 +366,11 @@ public:
         return PatchableJump(branchPtrWithPatch(cond, left, dataLabel, initialRightValue));
     }
 
+    PatchableJump patchableBranch32WithPatch(RelationalCondition cond, Address left, DataLabel32& dataLabel, TrustedImm32 initialRightValue = TrustedImm32(0))
+    {
+        return PatchableJump(branch32WithPatch(cond, left, dataLabel, initialRightValue));
+    }
+
 #if !CPU(ARM_TRADITIONAL)
     PatchableJump patchableJump()
     {
@@ -381,6 +386,11 @@ public:
     {
         return PatchableJump(branch32(cond, reg, imm));
     }
+
+    PatchableJump patchableBranch32(RelationalCondition cond, Address address, TrustedImm32 imm)
+    {
+        return PatchableJump(branch32(cond, address, imm));
+    }
 #endif
 #endif
 
index da0697f..856dce8 100644 (file)
@@ -2220,6 +2220,13 @@ public:
         return branch64(cond, left, dataTempRegister);
     }
 
+    ALWAYS_INLINE Jump branch32WithPatch(RelationalCondition cond, Address left, DataLabel32& dataLabel, TrustedImm32 initialRightValue = TrustedImm32(0))
+    {
+        dataLabel = DataLabel32(this);
+        moveWithPatch(initialRightValue, getCachedDataTempRegisterIDAndInvalidate());
+        return branch32(cond, left, dataTempRegister);
+    }
+
     PatchableJump patchableBranchPtr(RelationalCondition cond, Address left, TrustedImmPtr right = TrustedImmPtr(0))
     {
         m_makeJumpPatchable = true;
@@ -2252,6 +2259,14 @@ public:
         return PatchableJump(result);
     }
 
+    PatchableJump patchableBranch32WithPatch(RelationalCondition cond, Address left, DataLabel32& dataLabel, TrustedImm32 initialRightValue = TrustedImm32(0))
+    {
+        m_makeJumpPatchable = true;
+        Jump result = branch32WithPatch(cond, left, dataLabel, initialRightValue);
+        m_makeJumpPatchable = false;
+        return PatchableJump(result);
+    }
+
     PatchableJump patchableJump()
     {
         m_makeJumpPatchable = true;
@@ -2322,6 +2337,7 @@ public:
     RegisterID scratchRegisterForBlinding() { return getCachedDataTempRegisterIDAndInvalidate(); }
 
     static bool canJumpReplacePatchableBranchPtrWithPatch() { return false; }
+    static bool canJumpReplacePatchableBranch32WithPatch() { return false; }
     
     static CodeLocationLabel startOfBranchPtrWithPatchOnRegister(CodeLocationDataLabelPtr label)
     {
@@ -2334,6 +2350,12 @@ public:
         return CodeLocationLabel();
     }
     
+    static CodeLocationLabel startOfPatchableBranch32WithPatchOnAddress(CodeLocationDataLabel32)
+    {
+        UNREACHABLE_FOR_PLATFORM();
+        return CodeLocationLabel();
+    }
+    
     static void revertJumpReplacementToBranchPtrWithPatch(CodeLocationLabel instructionStart, RegisterID, void* initialValue)
     {
         reemitInitialMoveWithPatch(instructionStart.dataLocation(), initialValue);
@@ -2344,6 +2366,11 @@ public:
         UNREACHABLE_FOR_PLATFORM();
     }
 
+    static void revertJumpReplacementToPatchableBranch32WithPatch(CodeLocationLabel, Address, int32_t)
+    {
+        UNREACHABLE_FOR_PLATFORM();
+    }
+
 protected:
     ALWAYS_INLINE Jump makeBranch(ARM64Assembler::Condition cond)
     {
index 79d8bab..0fa3079 100644 (file)
@@ -753,6 +753,11 @@ public:
         store32(dataTempRegister, address);
     }
 
+    void store8(RegisterID src, Address address)
+    {
+        store8(src, setupArmAddress(address));
+    }
+    
     void store8(RegisterID src, BaseIndex address)
     {
         store8(src, setupArmAddress(address));
@@ -770,6 +775,12 @@ public:
         store8(dataTempRegister, address);
     }
     
+    void store8(TrustedImm32 imm, Address address)
+    {
+        move(imm, dataTempRegister);
+        store8(dataTempRegister, address);
+    }
+    
     void store16(RegisterID src, BaseIndex address)
     {
         store16(src, setupArmAddress(address));
@@ -1726,6 +1737,13 @@ public:
         return branch32(cond, addressTempRegister, dataTempRegister);
     }
     
+    ALWAYS_INLINE Jump branch32WithPatch(RelationalCondition cond, Address left, DataLabel32& dataLabel, TrustedImm32 initialRightValue = TrustedImm32(0))
+    {
+        load32(left, addressTempRegister);
+        dataLabel = moveWithPatch(initialRightValue, dataTempRegister);
+        return branch32(cond, addressTempRegister, dataTempRegister);
+    }
+    
     PatchableJump patchableBranchPtr(RelationalCondition cond, Address left, TrustedImmPtr right = TrustedImmPtr(0))
     {
         m_makeJumpPatchable = true;
@@ -1758,6 +1776,14 @@ public:
         return PatchableJump(result);
     }
 
+    PatchableJump patchableBranch32WithPatch(RelationalCondition cond, Address left, DataLabel32& dataLabel, TrustedImm32 initialRightValue = TrustedImm32(0))
+    {
+        m_makeJumpPatchable = true;
+        Jump result = branch32WithPatch(cond, left, dataLabel, initialRightValue);
+        m_makeJumpPatchable = false;
+        return PatchableJump(result);
+    }
+
     PatchableJump patchableJump()
     {
         padBeforePatch();
@@ -1796,6 +1822,7 @@ public:
     }
     
     static bool canJumpReplacePatchableBranchPtrWithPatch() { return false; }
+    static bool canJumpReplacePatchableBranch32WithPatch() { return false; }
     
     static CodeLocationLabel startOfBranchPtrWithPatchOnRegister(CodeLocationDataLabelPtr label)
     {
@@ -1819,11 +1846,22 @@ public:
         return CodeLocationLabel();
     }
     
+    static CodeLocationLabel startOfPatchableBranch32WithPatchOnAddress(CodeLocationDataLabel32)
+    {
+        UNREACHABLE_FOR_PLATFORM();
+        return CodeLocationLabel();
+    }
+    
     static void revertJumpReplacementToPatchableBranchPtrWithPatch(CodeLocationLabel, Address, void*)
     {
         UNREACHABLE_FOR_PLATFORM();
     }
 
+    static void revertJumpReplacementToPatchableBranch32WithPatch(CodeLocationLabel, Address, int32_t)
+    {
+        UNREACHABLE_FOR_PLATFORM();
+    }
+
 #if USE(MASM_PROBE)
     struct CPUState {
         #define DECLARE_REGISTER(_type, _regName) \
index 547158f..2386762 100644 (file)
@@ -257,6 +257,14 @@ public:
         return Jump(m_assembler.jCC(x86Condition(cond)));
     }
 
+    Jump branch32WithPatch(RelationalCondition cond, Address left, DataLabel32& dataLabel, TrustedImm32 initialRightValue = TrustedImm32(0))
+    {
+        padBeforePatch();
+        m_assembler.cmpl_im_force32(initialRightValue.m_value, left.offset, left.base);
+        dataLabel = DataLabel32(this);
+        return Jump(m_assembler.jCC(x86Condition(cond)));
+    }
+
     DataLabelPtr storePtrWithPatch(TrustedImmPtr initialValue, ImplicitAddress address)
     {
         padBeforePatch();
@@ -277,6 +285,7 @@ public:
     }
 
     static bool canJumpReplacePatchableBranchPtrWithPatch() { return true; }
+    static bool canJumpReplacePatchableBranch32WithPatch() { return true; }
     
     static CodeLocationLabel startOfBranchPtrWithPatchOnRegister(CodeLocationDataLabelPtr label)
     {
@@ -299,6 +308,17 @@ public:
         return label.labelAtOffset(-totalBytes);
     }
     
+    static CodeLocationLabel startOfPatchableBranch32WithPatchOnAddress(CodeLocationDataLabel32 label)
+    {
+        const int opcodeBytes = 1;
+        const int modRMBytes = 1;
+        const int offsetBytes = 0;
+        const int immediateBytes = 4;
+        const int totalBytes = opcodeBytes + modRMBytes + offsetBytes + immediateBytes;
+        ASSERT(totalBytes >= maxJumpReplacementSize());
+        return label.labelAtOffset(-totalBytes);
+    }
+    
     static void revertJumpReplacementToBranchPtrWithPatch(CodeLocationLabel instructionStart, RegisterID reg, void* initialValue)
     {
         X86Assembler::revertJumpTo_cmpl_ir_force32(instructionStart.executableAddress(), reinterpret_cast<intptr_t>(initialValue), reg);
@@ -310,6 +330,12 @@ public:
         X86Assembler::revertJumpTo_cmpl_im_force32(instructionStart.executableAddress(), reinterpret_cast<intptr_t>(initialValue), 0, address.base);
     }
 
+    static void revertJumpReplacementToPatchableBranch32WithPatch(CodeLocationLabel instructionStart, Address address, int32_t initialValue)
+    {
+        ASSERT(!address.offset);
+        X86Assembler::revertJumpTo_cmpl_im_force32(instructionStart.executableAddress(), initialValue, 0, address.base);
+    }
+
 #if USE(MASM_PROBE)
     // For details about probe(), see comment in MacroAssemblerX86_64.h.
     void probe(ProbeFunction, void* arg1 = 0, void* arg2 = 0);
index 4fbc5a3..7284f34 100644 (file)
@@ -126,6 +126,16 @@ public:
         move(TrustedImmPtr(address), scratchRegister);
         store32(imm, scratchRegister);
     }
+
+    void store32(RegisterID source, void* address)
+    {
+        if (source == X86Registers::eax)
+            m_assembler.movl_EAXm(address);
+        else {
+            move(TrustedImmPtr(address), scratchRegister);
+            store32(source, scratchRegister);
+        }
+    }
     
     void store8(TrustedImm32 imm, void* address)
     {
@@ -627,6 +637,13 @@ public:
         return DataLabelPtr(this);
     }
 
+    DataLabelPtr moveWithPatch(TrustedImm32 initialValue, RegisterID dest)
+    {
+        padBeforePatch();
+        m_assembler.movq_i64r(initialValue.m_value, dest);
+        return DataLabelPtr(this);
+    }
+
     Jump branchPtrWithPatch(RelationalCondition cond, RegisterID left, DataLabelPtr& dataLabel, TrustedImmPtr initialRightValue = TrustedImmPtr(0))
     {
         dataLabel = moveWithPatch(initialRightValue, scratchRegister);
@@ -639,6 +656,14 @@ public:
         return branch64(cond, left, scratchRegister);
     }
 
+    Jump branch32WithPatch(RelationalCondition cond, Address left, DataLabel32& dataLabel, TrustedImm32 initialRightValue = TrustedImm32(0))
+    {
+        padBeforePatch();
+        m_assembler.movl_i32r(initialRightValue.m_value, scratchRegister);
+        dataLabel = DataLabel32(this);
+        return branch32(cond, left, scratchRegister);
+    }
+
     DataLabelPtr storePtrWithPatch(TrustedImmPtr initialValue, ImplicitAddress address)
     {
         DataLabelPtr label = moveWithPatch(initialValue, scratchRegister);
@@ -687,6 +712,7 @@ public:
     static RegisterID scratchRegisterForBlinding() { return scratchRegister; }
 
     static bool canJumpReplacePatchableBranchPtrWithPatch() { return true; }
+    static bool canJumpReplacePatchableBranch32WithPatch() { return true; }
     
     static CodeLocationLabel startOfBranchPtrWithPatchOnRegister(CodeLocationDataLabelPtr label)
     {
@@ -698,16 +724,36 @@ public:
         return label.labelAtOffset(-totalBytes);
     }
     
+    static CodeLocationLabel startOfBranch32WithPatchOnRegister(CodeLocationDataLabel32 label)
+    {
+        const int rexBytes = 1;
+        const int opcodeBytes = 1;
+        const int immediateBytes = 4;
+        const int totalBytes = rexBytes + opcodeBytes + immediateBytes;
+        ASSERT(totalBytes >= maxJumpReplacementSize());
+        return label.labelAtOffset(-totalBytes);
+    }
+    
     static CodeLocationLabel startOfPatchableBranchPtrWithPatchOnAddress(CodeLocationDataLabelPtr label)
     {
         return startOfBranchPtrWithPatchOnRegister(label);
     }
+
+    static CodeLocationLabel startOfPatchableBranch32WithPatchOnAddress(CodeLocationDataLabel32 label)
+    {
+        return startOfBranch32WithPatchOnRegister(label);
+    }
     
     static void revertJumpReplacementToPatchableBranchPtrWithPatch(CodeLocationLabel instructionStart, Address, void* initialValue)
     {
         X86Assembler::revertJumpTo_movq_i64r(instructionStart.executableAddress(), reinterpret_cast<intptr_t>(initialValue), scratchRegister);
     }
 
+    static void revertJumpReplacementToPatchableBranch32WithPatch(CodeLocationLabel instructionStart, Address, int32_t initialValue)
+    {
+        X86Assembler::revertJumpTo_movl_i32r(instructionStart.executableAddress(), initialValue, scratchRegister);
+    }
+
     static void revertJumpReplacementToBranchPtrWithPatch(CodeLocationLabel instructionStart, RegisterID, void* initialValue)
     {
         X86Assembler::revertJumpTo_movq_i64r(instructionStart.executableAddress(), reinterpret_cast<intptr_t>(initialValue), scratchRegister);
index 41e950a..241ce14 100644 (file)
@@ -157,6 +157,11 @@ public:
     {
         return MacroAssembler::startOfPatchableBranchPtrWithPatchOnAddress(label);
     }
+
+    static CodeLocationLabel startOfPatchableBranch32WithPatchOnAddress(CodeLocationDataLabel32 label)
+    {
+        return MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(label);
+    }
     
     void replaceWithJump(CodeLocationLabel instructionStart, CodeLocationLabel destination)
     {
@@ -176,6 +181,11 @@ public:
         MacroAssembler::revertJumpReplacementToPatchableBranchPtrWithPatch(instructionStart, address, value);
     }
 
+    void revertJumpReplacementToPatchableBranch32WithPatch(CodeLocationLabel instructionStart, MacroAssembler::Address address, int32_t value)
+    {
+        MacroAssembler::revertJumpReplacementToPatchableBranch32WithPatch(instructionStart, address, value);
+    }
+
 private:
     CodeBlock* m_codeBlock;
 #if ENABLE(ASSEMBLER_WX_EXCLUSIVE)
index 1a43e20..ff60aec 100644 (file)
@@ -1330,7 +1330,7 @@ public:
     {
         m_formatter.oneByteOp8(OP_MOV_EbGb, src, base, index, scale, offset);
     }
-    
+
     void movw_rm(RegisterID src, int offset, RegisterID base, RegisterID index, int scale)
     {
         m_formatter.prefix(PRE_OPERAND_SIZE);
@@ -2054,9 +2054,9 @@ public:
 #if CPU(X86_64)
     static void revertJumpTo_movq_i64r(void* instructionStart, int64_t imm, RegisterID dst)
     {
+        const unsigned instructionSize = 10; // REX.W MOV IMM64
         const int rexBytes = 1;
         const int opcodeBytes = 1;
-        ASSERT(rexBytes + opcodeBytes <= maxJumpReplacementSize());
         uint8_t* ptr = reinterpret_cast<uint8_t*>(instructionStart);
         ptr[0] = PRE_REX | (1 << 3) | (dst >> 3);
         ptr[1] = OP_MOV_EAXIv | (dst & 7);
@@ -2066,11 +2066,33 @@ public:
             uint8_t asBytes[8];
         } u;
         u.asWord = imm;
-        for (unsigned i = rexBytes + opcodeBytes; i < static_cast<unsigned>(maxJumpReplacementSize()); ++i)
+        for (unsigned i = rexBytes + opcodeBytes; i < instructionSize; ++i)
+            ptr[i] = u.asBytes[i - rexBytes - opcodeBytes];
+    }
+
+    static void revertJumpTo_movl_i32r(void* instructionStart, int32_t imm, RegisterID dst)
+    {
+        // We only revert jumps on inline caches, and inline caches always use the scratch register (r11).
+        // FIXME: If the above is ever false then we need to make this smarter with respect to emitting 
+        // the REX byte.
+        ASSERT(dst == X86Registers::r11);
+        const unsigned instructionSize = 6; // REX MOV IMM32
+        const int rexBytes = 1;
+        const int opcodeBytes = 1;
+        uint8_t* ptr = reinterpret_cast<uint8_t*>(instructionStart);
+        ptr[0] = PRE_REX | (dst >> 3);
+        ptr[1] = OP_MOV_EAXIv | (dst & 7);
+        
+        union {
+            uint32_t asWord;
+            uint8_t asBytes[4];
+        } u;
+        u.asWord = imm;
+        for (unsigned i = rexBytes + opcodeBytes; i < instructionSize; ++i)
             ptr[i] = u.asBytes[i - rexBytes - opcodeBytes];
     }
 #endif
-    
+
     static void revertJumpTo_cmpl_ir_force32(void* instructionStart, int32_t imm, RegisterID dst)
     {
         const int opcodeBytes = 1;
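As a worked example of what revertJumpTo_movl_i32r above emits when dst is the scratch register r11 (derived from the constants used in that function; shown for illustration only):

    // ptr[0] = PRE_REX | (r11 >> 3)     = 0x40 | 0x01 = 0x41   (REX.B)
    // ptr[1] = OP_MOV_EAXIv | (r11 & 7) = 0xB8 | 0x03 = 0xBB
    // ptr[2..5] = the 32-bit immediate, little-endian
    // i.e. the 6-byte sequence "41 BB xx xx xx xx", which is mov r11d, imm32 and matches instructionSize = 6.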
index 2b62c73..ef49f20 100644 (file)
@@ -77,24 +77,25 @@ void dumpArrayModes(PrintStream& out, ArrayModes arrayModes)
 
 void ArrayProfile::computeUpdatedPrediction(const ConcurrentJITLocker&, CodeBlock* codeBlock)
 {
-    if (!m_lastSeenStructure)
+    if (!m_lastSeenStructureID)
         return;
     
-    m_observedArrayModes |= arrayModeFromStructure(m_lastSeenStructure);
+    Structure* lastSeenStructure = codeBlock->heap()->structureIDTable().get(m_lastSeenStructureID);
+    m_observedArrayModes |= arrayModeFromStructure(lastSeenStructure);
     
     if (!m_didPerformFirstRunPruning
         && hasTwoOrMoreBitsSet(m_observedArrayModes)) {
-        m_observedArrayModes = arrayModeFromStructure(m_lastSeenStructure);
+        m_observedArrayModes = arrayModeFromStructure(lastSeenStructure);
         m_didPerformFirstRunPruning = true;
     }
     
     m_mayInterceptIndexedAccesses |=
-        m_lastSeenStructure->typeInfo().interceptsGetOwnPropertySlotByIndexEvenWhenLengthIsNotZero();
+        lastSeenStructure->typeInfo().interceptsGetOwnPropertySlotByIndexEvenWhenLengthIsNotZero();
     JSGlobalObject* globalObject = codeBlock->globalObject();
-    if (!globalObject->isOriginalArrayStructure(m_lastSeenStructure)
-        && !globalObject->isOriginalTypedArrayStructure(m_lastSeenStructure))
+    if (!globalObject->isOriginalArrayStructure(lastSeenStructure)
+        && !globalObject->isOriginalTypedArrayStructure(lastSeenStructure))
         m_usesOriginalArrayStructures = false;
-    m_lastSeenStructure = 0;
+    m_lastSeenStructureID = 0;
 }
 
 CString ArrayProfile::briefDescription(const ConcurrentJITLocker& locker, CodeBlock* codeBlock)
index c23230e..302365f 100644 (file)
@@ -135,7 +135,7 @@ class ArrayProfile {
 public:
     ArrayProfile()
         : m_bytecodeOffset(std::numeric_limits<unsigned>::max())
-        , m_lastSeenStructure(0)
+        , m_lastSeenStructureID(0)
         , m_mayStoreToHole(false)
         , m_outOfBounds(false)
         , m_mayInterceptIndexedAccesses(false)
@@ -147,7 +147,7 @@ public:
     
     ArrayProfile(unsigned bytecodeOffset)
         : m_bytecodeOffset(bytecodeOffset)
-        , m_lastSeenStructure(0)
+        , m_lastSeenStructureID(0)
         , m_mayStoreToHole(false)
         , m_outOfBounds(false)
         , m_mayInterceptIndexedAccesses(false)
@@ -159,14 +159,14 @@ public:
     
     unsigned bytecodeOffset() const { return m_bytecodeOffset; }
     
-    Structure** addressOfLastSeenStructure() { return &m_lastSeenStructure; }
+    StructureID* addressOfLastSeenStructureID() { return &m_lastSeenStructureID; }
     ArrayModes* addressOfArrayModes() { return &m_observedArrayModes; }
     bool* addressOfMayStoreToHole() { return &m_mayStoreToHole; }
     bool* addressOfOutOfBounds() { return &m_outOfBounds; }
     
     void observeStructure(Structure* structure)
     {
-        m_lastSeenStructure = structure;
+        m_lastSeenStructureID = structure->id();
     }
     
     void computeUpdatedPrediction(const ConcurrentJITLocker&, CodeBlock*);
@@ -188,7 +188,7 @@ private:
     static Structure* polymorphicStructure() { return static_cast<Structure*>(reinterpret_cast<void*>(1)); }
     
     unsigned m_bytecodeOffset;
-    Structure* m_lastSeenStructure;
+    StructureID m_lastSeenStructureID;
     bool m_mayStoreToHole; // This flag may become overloaded to indicate other special cases that were encountered during array access, as it depends on indexing type. Since we currently have basically just one indexing type (two variants of ArrayStorage), this flag for now just means exactly what its name implies.
     bool m_outOfBounds;
     bool m_mayInterceptIndexedAccesses : 1;
index 8378ed7..dab0552 100644 (file)
@@ -669,6 +669,7 @@ public:
         return constantBufferAsVector(index).data();
     }
 
+    Heap* heap() const { return m_heap; }
     JSGlobalObject* globalObject() { return m_globalObject.get(); }
 
     JSGlobalObject* globalObjectFor(CodeOrigin);
index 0308b16..69e2b1c 100644 (file)
@@ -206,7 +206,7 @@ public:
         return Structure::create(vm, globalObject, proto, TypeInfo(UnlinkedFunctionExecutableType, StructureFlags), info());
     }
 
-    static const unsigned StructureFlags = OverridesVisitChildren | JSCell::StructureFlags;
+    static const unsigned StructureFlags = OverridesVisitChildren | StructureIsImmortal | JSCell::StructureFlags;
 
     DECLARE_EXPORT_INFO;
 };
@@ -579,7 +579,7 @@ private:
 
 protected:
 
-    static const unsigned StructureFlags = OverridesVisitChildren | Base::StructureFlags;
+    static const unsigned StructureFlags = OverridesVisitChildren | StructureIsImmortal | Base::StructureFlags;
     static void visitChildren(JSCell*, SlotVisitor&);
 
 public:
index 745f94a..338c99e 100644 (file)
@@ -53,7 +53,10 @@ namespace JSC { namespace DFG {
     macro(JSArrayBufferView_length) \
     macro(JSArrayBufferView_mode) \
     macro(JSArrayBufferView_vector) \
-    macro(JSCell_structure) \
+    macro(JSCell_structureID) \
+    macro(JSCell_indexingType) \
+    macro(JSCell_typeInfoFlags) \
+    macro(JSCell_typeInfoType) \
     macro(JSFunction_executable) \
     macro(JSFunction_scopeChain) \
     macro(JSObject_butterfly) \
index b45b204..6d54b01 100644 (file)
@@ -120,23 +120,14 @@ protected:
         if (m_op == ArrayifyToStructure) {
             ASSERT(m_structure);
             m_badIndexingTypeJump.fill(
-                jit, jit->m_jit.branchWeakPtr(
-                    MacroAssembler::NotEqual,
-                    MacroAssembler::Address(m_baseGPR, JSCell::structureOffset()),
-                    m_structure));
+                jit, jit->m_jit.branchWeakStructure(MacroAssembler::NotEqual, MacroAssembler::Address(m_baseGPR, JSCell::structureIDOffset()), m_structure));
         } else {
-            // Alas, we need to reload the structure because silent spilling does not save
-            // temporaries. Nor would it be useful for it to do so. Either way we're talking
-            // about a load.
-            jit->m_jit.loadPtr(
-                MacroAssembler::Address(m_baseGPR, JSCell::structureOffset()), m_structureGPR);
-            
             // Finally, check that we have the kind of array storage that we wanted to get.
             // Note that this is a backwards speculation check, which will result in the 
             // bytecode operation corresponding to this arrayification being reexecuted.
             // That's fine, since arrayification is not user-visible.
             jit->m_jit.load8(
-                MacroAssembler::Address(m_structureGPR, Structure::indexingTypeOffset()), m_structureGPR);
+                MacroAssembler::Address(m_baseGPR, JSCell::indexingTypeOffset()), m_structureGPR);
             m_badIndexingTypeJump.fill(
                 jit, jit->jumpSlowForUnwantedArrayMode(m_structureGPR, m_arrayMode));
         }
index 5aefcfa..ca37c92 100644 (file)
@@ -60,7 +60,7 @@ void clobberize(Graph& graph, Node* node, ReadFunctor& read, WriteFunctor& write
     //   versions of those nodes that backward-exit instead, but I'm not convinced
     //   of the soundness.
     //
-    // - Some nodes lie, and claim that they do not read the JSCell_structure.
+    // - Some nodes lie, and claim that they do not read the JSCell_structureID, JSCell_typeInfoFlags, etc.
     //   These are nodes that use the structure in a way that does not depend on
     //   things that change under structure transitions.
     //
@@ -79,7 +79,7 @@ void clobberize(Graph& graph, Node* node, ReadFunctor& read, WriteFunctor& write
     //   small hacking.
     
     if (edgesUseStructure(graph, node))
-        read(JSCell_structure);
+        read(JSCell_structureID);
     
     switch (node->op()) {
     case JSConstant:
@@ -412,19 +412,30 @@ void clobberize(Graph& graph, Node* node, ReadFunctor& read, WriteFunctor& write
         
     case CheckStructure:
     case StructureTransitionWatchpoint:
+    case InstanceOf:
+        read(JSCell_structureID);
+        return;
+
     case CheckArray:
+        read(JSCell_indexingType);
+        read(JSCell_typeInfoType);
+        read(JSCell_structureID);
+        return;
+
     case CheckHasInstance:
-    case InstanceOf:
-        read(JSCell_structure);
+        read(JSCell_typeInfoFlags);
         return;
-        
+
     case CheckExecutable:
         read(JSFunction_executable);
         return;
         
     case PutStructure:
     case PhantomPutStructure:
-        write(JSCell_structure);
+        write(JSCell_structureID);
+        write(JSCell_typeInfoType);
+        write(JSCell_typeInfoFlags);
+        write(JSCell_indexingType);
         return;
         
     case AllocatePropertyStorage:
@@ -444,9 +455,11 @@ void clobberize(Graph& graph, Node* node, ReadFunctor& read, WriteFunctor& write
         
     case Arrayify:
     case ArrayifyToStructure:
-        read(JSCell_structure);
+        read(JSCell_structureID);
+        read(JSCell_indexingType);
         read(JSObject_butterfly);
-        write(JSCell_structure);
+        write(JSCell_structureID);
+        write(JSCell_indexingType);
         write(JSObject_butterfly);
         clobberizeForAllocation(read, write);
         return;
@@ -469,17 +482,17 @@ void clobberize(Graph& graph, Node* node, ReadFunctor& read, WriteFunctor& write
         return;
         
     case MultiGetByOffset:
-        read(JSCell_structure);
+        read(JSCell_structureID);
         read(JSObject_butterfly);
         read(AbstractHeap(NamedProperties, node->multiGetByOffsetData().identifierNumber));
         return;
         
     case MultiPutByOffset:
-        read(JSCell_structure);
+        read(JSCell_structureID);
         read(JSObject_butterfly);
         write(AbstractHeap(NamedProperties, node->multiPutByOffsetData().identifierNumber));
         if (node->multiPutByOffsetData().writesStructures())
-            write(JSCell_structure);
+            write(JSCell_structureID);
         if (node->multiPutByOffsetData().reallocatesStorage()) {
             write(JSObject_butterfly);
             clobberizeForAllocation(read, write);
index f178ebc..47f4aa7 100644 (file)
@@ -249,7 +249,29 @@ public:
         addWeakReference(weakPtr);
         return result;
     }
-    
+
+    template<typename T>
+    Jump branchWeakStructure(RelationalCondition cond, T left, Structure* weakStructure)
+    {
+#if USE(JSVALUE64)
+        Jump result = branch32(cond, left, TrustedImm32(weakStructure->id()));
+        addWeakReference(weakStructure);
+        return result;
+#else
+        return branchWeakPtr(cond, left, weakStructure);
+#endif
+    }
+
+    template<typename T>
+    Jump branchStructurePtr(RelationalCondition cond, T left, Structure* structure)
+    {
+#if USE(JSVALUE64)
+        return branch32(cond, left, TrustedImm32(structure->id()));
+#else
+        return branchPtr(cond, left, TrustedImmPtr(structure));
+#endif
+    }
+
     void noticeOSREntry(BasicBlock& basicBlock, JITCompiler::Label blockHead, LinkBuffer& linkBuffer)
     {
         // OSR entry is not allowed into blocks deemed unreachable by control flow analysis.
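The two helpers added here are what the rest of this patch switches structure checks over to: on USE(JSVALUE64) they compare the 32-bit id stored in the cell header against structure->id() (registering the Structure as a weak reference in the weak variant, as branchWeakPtr did), while other builds fall back to the existing pointer compare. A minimal before/after sketch of a call site, with illustrative register names (the actual migrations appear throughout the hunks below):

    // Before: load/compare the full Structure pointer.
    //   m_jit.branchWeakPtr(MacroAssembler::NotEqual,
    //       MacroAssembler::Address(baseGPR, JSCell::structureOffset()), structure);
    //
    // After: compare the 32-bit id in the cell header.
    m_jit.branchWeakStructure(MacroAssembler::NotEqual,
        MacroAssembler::Address(baseGPR, JSCell::structureIDOffset()), structure);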
index d2a0654..ee61f9a 100644 (file)
@@ -121,8 +121,8 @@ void OSRExitCompiler::compileExit(const OSRExit& exit, const Operands<ValueRecov
                 } else
                     value = exit.m_jsValueSource.payloadGPR();
                 
-                m_jit.loadPtr(AssemblyHelpers::Address(value, JSCell::structureOffset()), scratch1);
-                m_jit.storePtr(scratch1, arrayProfile->addressOfLastSeenStructure());
+                m_jit.loadPtr(AssemblyHelpers::Address(value, JSCell::structureIDOffset()), scratch1);
+                m_jit.storePtr(scratch1, arrayProfile->addressOfLastSeenStructureID());
                 m_jit.load8(AssemblyHelpers::Address(scratch1, Structure::indexingTypeOffset()), scratch1);
                 m_jit.move(AssemblyHelpers::TrustedImm32(1), scratch2);
                 m_jit.lshift32(scratch1, scratch2);
index bdcc883..3ee5cdb 100644 (file)
@@ -116,9 +116,9 @@ void OSRExitCompiler::compileExit(const OSRExit& exit, const Operands<ValueRecov
                 } else
                     value = exit.m_jsValueSource.gpr();
                 
-                m_jit.loadPtr(AssemblyHelpers::Address(value, JSCell::structureOffset()), scratch1);
-                m_jit.storePtr(scratch1, arrayProfile->addressOfLastSeenStructure());
-                m_jit.load8(AssemblyHelpers::Address(scratch1, Structure::indexingTypeOffset()), scratch1);
+                m_jit.load32(AssemblyHelpers::Address(value, JSCell::structureIDOffset()), scratch1);
+                m_jit.store32(scratch1, arrayProfile->addressOfLastSeenStructureID());
+                m_jit.load8(AssemblyHelpers::Address(value, JSCell::indexingTypeOffset()), scratch1);
                 m_jit.move(AssemblyHelpers::TrustedImm32(1), scratch2);
                 m_jit.lshift32(scratch1, scratch2);
                 m_jit.or32(scratch2, AssemblyHelpers::AbsoluteAddress(arrayProfile->addressOfArrayModes()));
index 05437e2..e8bcf5e 100644 (file)
@@ -162,9 +162,9 @@ void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
 }
 
 #if ENABLE(GGC)
-static void osrWriteBarrier(CCallHelpers& jit, GPRReg owner, GPRReg scratch1, GPRReg scratch2)
+static void osrWriteBarrier(CCallHelpers& jit, GPRReg owner, GPRReg scratch)
 {
-    AssemblyHelpers::Jump definitelyNotMarked = jit.genericWriteBarrier(owner, scratch1, scratch2);
+    AssemblyHelpers::Jump definitelyNotMarked = jit.genericWriteBarrier(owner);
 
     // We need these extra slots because setupArgumentsWithExecState will use poke on x86.
 #if CPU(X86)
@@ -172,8 +172,8 @@ static void osrWriteBarrier(CCallHelpers& jit, GPRReg owner, GPRReg scratch1, GP
 #endif
 
     jit.setupArgumentsWithExecState(owner);
-    jit.move(MacroAssembler::TrustedImmPtr(reinterpret_cast<void*>(operationOSRWriteBarrier)), scratch1);
-    jit.call(scratch1);
+    jit.move(MacroAssembler::TrustedImmPtr(reinterpret_cast<void*>(operationOSRWriteBarrier)), scratch);
+    jit.call(scratch);
 
 #if CPU(X86)
     jit.addPtr(MacroAssembler::TrustedImm32(sizeof(void*) * 3), MacroAssembler::stackPointerRegister);
@@ -190,7 +190,7 @@ void adjustAndJumpToTarget(CCallHelpers& jit, const OSRExitBase& exit)
     for (CodeOrigin codeOrigin = exit.m_codeOrigin; ; codeOrigin = codeOrigin.inlineCallFrame->caller) {
         CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(codeOrigin);
         jit.move(AssemblyHelpers::TrustedImmPtr(baselineCodeBlock->ownerExecutable()), GPRInfo::nonArgGPR0); 
-        osrWriteBarrier(jit, GPRInfo::nonArgGPR0, GPRInfo::nonArgGPR1, GPRInfo::nonArgGPR2);
+        osrWriteBarrier(jit, GPRInfo::nonArgGPR0, GPRInfo::nonArgGPR1);
         if (!codeOrigin.inlineCallFrame)
             break;
     }
index 84f7a67..9720df4 100644 (file)
@@ -79,7 +79,7 @@ static inline void putByVal(ExecState* exec, JSValue baseValue, uint32_t index,
             return;
         }
 
-        object->methodTable()->putByIndex(object, exec, index, value, strict);
+        object->methodTable(vm)->putByIndex(object, exec, index, value, strict);
         return;
     }
 
@@ -222,12 +222,12 @@ EncodedJSValue JIT_OPERATION operationToThisStrict(ExecState* exec, EncodedJSVal
 
 JSCell* JIT_OPERATION operationCreateThis(ExecState* exec, JSObject* constructor, int32_t inlineCapacity)
 {
-    VM* vm = &exec->vm();
-    NativeCallFrameTracer tracer(vm, exec);
+    VM& vm = exec->vm();
+    NativeCallFrameTracer tracer(&vm, exec);
 
 #if !ASSERT_DISABLED
     ConstructData constructData;
-    ASSERT(jsCast<JSFunction*>(constructor)->methodTable()->getConstructData(jsCast<JSFunction*>(constructor), constructData) == ConstructTypeJS);
+    ASSERT(jsCast<JSFunction*>(constructor)->methodTable(vm)->getConstructData(jsCast<JSFunction*>(constructor), constructData) == ConstructTypeJS);
 #endif
     
     return constructEmptyObject(exec, jsCast<JSFunction*>(constructor)->allocationProfile(exec, inlineCapacity)->structure());
@@ -392,8 +392,8 @@ void JIT_OPERATION operationPutByValCellNonStrict(ExecState* exec, JSCell* cell,
 
 void JIT_OPERATION operationPutByValBeyondArrayBoundsStrict(ExecState* exec, JSObject* array, int32_t index, EncodedJSValue encodedValue)
 {
-    VM* vm = &exec->vm();
-    NativeCallFrameTracer tracer(vm, exec);
+    VM& vm = exec->vm();
+    NativeCallFrameTracer tracer(&vm, exec);
     
     if (index >= 0) {
         array->putByIndexInline(exec, index, JSValue::decode(encodedValue), true);
index fd49b53..417e760 100644 (file)
@@ -698,9 +698,7 @@ void SpeculativeJIT::checkArray(Node* node)
     case Array::SlowPutArrayStorage: {
         GPRTemporary temp(this);
         GPRReg tempGPR = temp.gpr();
-        m_jit.loadPtr(
-            MacroAssembler::Address(baseReg, JSCell::structureOffset()), tempGPR);
-        m_jit.load8(MacroAssembler::Address(tempGPR, Structure::indexingTypeOffset()), tempGPR);
+        m_jit.load8(MacroAssembler::Address(baseReg, JSCell::indexingTypeOffset()), tempGPR);
         speculationCheck(
             BadIndexingType, JSValueSource::unboxedCell(baseReg), 0,
             jumpSlowForUnwantedArrayMode(tempGPR, node->arrayMode()));
@@ -709,18 +707,29 @@ void SpeculativeJIT::checkArray(Node* node)
         return;
     }
     case Array::Arguments:
-        expectedClassInfo = Arguments::info();
-        break;
+        speculationCheck(BadType, JSValueSource::unboxedCell(baseReg), node,
+            m_jit.branch8(
+                MacroAssembler::NotEqual,
+                MacroAssembler::Address(baseReg, JSCell::typeInfoTypeOffset()),
+                MacroAssembler::TrustedImm32(ArgumentsType)));
+
+        noResult(m_currentNode);
+        return;
     default:
-        expectedClassInfo = classInfoForType(node->arrayMode().typedArrayType());
-        break;
+        speculationCheck(BadType, JSValueSource::unboxedCell(baseReg), node,
+            m_jit.branch8(
+                MacroAssembler::NotEqual,
+                MacroAssembler::Address(baseReg, JSCell::typeInfoTypeOffset()),
+                MacroAssembler::TrustedImm32(typeForTypedArrayType(node->arrayMode().typedArrayType()))));
+        noResult(m_currentNode);
+        return;
     }
     
     RELEASE_ASSERT(expectedClassInfo);
     
     GPRTemporary temp(this);
-    m_jit.loadPtr(
-        MacroAssembler::Address(baseReg, JSCell::structureOffset()), temp.gpr());
+    GPRTemporary temp2(this);
+    m_jit.emitLoadStructure(baseReg, temp.gpr(), temp2.gpr());
     speculationCheck(
         BadType, JSValueSource::unboxedCell(baseReg), node,
         m_jit.branchPtr(
@@ -750,16 +759,13 @@ void SpeculativeJIT::arrayify(Node* node, GPRReg baseReg, GPRReg propertyReg)
     MacroAssembler::JumpList slowPath;
     
     if (node->op() == ArrayifyToStructure) {
-        slowPath.append(m_jit.branchWeakPtr(
+        slowPath.append(m_jit.branchWeakStructure(
             JITCompiler::NotEqual,
-            JITCompiler::Address(baseReg, JSCell::structureOffset()),
+            JITCompiler::Address(baseReg, JSCell::structureIDOffset()),
             node->structure()));
     } else {
-        m_jit.loadPtr(
-            MacroAssembler::Address(baseReg, JSCell::structureOffset()), structureGPR);
-        
         m_jit.load8(
-            MacroAssembler::Address(structureGPR, Structure::indexingTypeOffset()), tempGPR);
+            MacroAssembler::Address(baseReg, JSCell::indexingTypeOffset()), tempGPR);
         
         slowPath.append(jumpSlowForUnwantedArrayMode(tempGPR, node->arrayMode()));
     }
@@ -1131,24 +1137,25 @@ void SpeculativeJIT::compilePeepHoleObjectEquality(Node* node, Node* branchNode)
         if (m_state.forNode(node->child1()).m_type & ~SpecObject) {
             speculationCheck(
                 BadType, JSValueSource::unboxedCell(op1GPR), node->child1(), 
-                m_jit.branchPtr(
+                m_jit.branchStructurePtr(
                     MacroAssembler::Equal, 
-                    MacroAssembler::Address(op1GPR, JSCell::structureOffset()), 
-                    MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                    MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()), 
+                    m_jit.vm()->stringStructure.get()));
         }
         if (m_state.forNode(node->child2()).m_type & ~SpecObject) {
             speculationCheck(
                 BadType, JSValueSource::unboxedCell(op2GPR), node->child2(),
-                m_jit.branchPtr(
+                m_jit.branchStructurePtr(
                     MacroAssembler::Equal, 
-                    MacroAssembler::Address(op2GPR, JSCell::structureOffset()), 
-                    MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                    MacroAssembler::Address(op2GPR, JSCell::structureIDOffset()), 
+                    m_jit.vm()->stringStructure.get()));
         }
     } else {
         GPRTemporary structure(this);
+        GPRTemporary temp(this);
         GPRReg structureGPR = structure.gpr();
 
-        m_jit.loadPtr(MacroAssembler::Address(op1GPR, JSCell::structureOffset()), structureGPR);
+        m_jit.emitLoadStructure(op1GPR, structureGPR, temp.gpr());
         if (m_state.forNode(node->child1()).m_type & ~SpecObject) {
             speculationCheck(
                 BadType, JSValueSource::unboxedCell(op1GPR), node->child1(),
@@ -1160,10 +1167,10 @@ void SpeculativeJIT::compilePeepHoleObjectEquality(Node* node, Node* branchNode)
         speculationCheck(BadType, JSValueSource::unboxedCell(op1GPR), node->child1(),
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op1GPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
 
-        m_jit.loadPtr(MacroAssembler::Address(op2GPR, JSCell::structureOffset()), structureGPR);
+        m_jit.emitLoadStructure(op2GPR, structureGPR, temp.gpr());
         if (m_state.forNode(node->child2()).m_type & ~SpecObject) {
             speculationCheck(
                 BadType, JSValueSource::unboxedCell(op2GPR), node->child2(),
@@ -1175,7 +1182,7 @@ void SpeculativeJIT::compilePeepHoleObjectEquality(Node* node, Node* branchNode)
         speculationCheck(BadType, JSValueSource::unboxedCell(op2GPR), node->child2(),
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op2GPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
     }
 
@@ -2524,18 +2531,17 @@ void SpeculativeJIT::compilePutByValForFloatTypedArray(GPRReg base, GPRReg prope
     noResult(node);
 }
 
-void SpeculativeJIT::compileInstanceOfForObject(Node*, GPRReg valueReg, GPRReg prototypeReg, GPRReg scratchReg)
+void SpeculativeJIT::compileInstanceOfForObject(Node*, GPRReg valueReg, GPRReg prototypeReg, GPRReg scratchReg, GPRReg scratch2Reg)
 {
     // Check that prototype is an object.
-    m_jit.loadPtr(MacroAssembler::Address(prototypeReg, JSCell::structureOffset()), scratchReg);
-    speculationCheck(BadType, JSValueRegs(), 0, m_jit.branchIfNotObject(scratchReg));
+    speculationCheck(BadType, JSValueRegs(), 0, m_jit.branchIfCellNotObject(prototypeReg));
     
     // Initialize scratchReg with the value being checked.
     m_jit.move(valueReg, scratchReg);
     
     // Walk up the prototype chain of the value (in scratchReg), comparing to prototypeReg.
     MacroAssembler::Label loop(&m_jit);
-    m_jit.loadPtr(MacroAssembler::Address(scratchReg, JSCell::structureOffset()), scratchReg);
+    m_jit.emitLoadStructure(scratchReg, scratchReg, scratch2Reg);
 #if USE(JSVALUE64)
     m_jit.load64(MacroAssembler::Address(scratchReg, Structure::prototypeOffset()), scratchReg);
     MacroAssembler::Jump isInstance = m_jit.branch64(MacroAssembler::Equal, scratchReg, prototypeReg);
@@ -2574,9 +2580,11 @@ void SpeculativeJIT::compileInstanceOf(Node* node)
         JSValueOperand value(this, node->child1());
         SpeculateCellOperand prototype(this, node->child2());
         GPRTemporary scratch(this);
+        GPRTemporary scratch2(this);
         
         GPRReg prototypeReg = prototype.gpr();
         GPRReg scratchReg = scratch.gpr();
+        GPRReg scratch2Reg = scratch2.gpr();
         
 #if USE(JSVALUE64)
         GPRReg valueReg = value.gpr();
@@ -2593,7 +2601,7 @@ void SpeculativeJIT::compileInstanceOf(Node* node)
         
         isCell.link(&m_jit);
         
-        compileInstanceOfForObject(node, valueReg, prototypeReg, scratchReg);
+        compileInstanceOfForObject(node, valueReg, prototypeReg, scratchReg, scratch2Reg);
         
         done.link(&m_jit);
 
@@ -2609,12 +2617,14 @@ void SpeculativeJIT::compileInstanceOf(Node* node)
     SpeculateCellOperand prototype(this, node->child2());
     
     GPRTemporary scratch(this);
+    GPRTemporary scratch2(this);
     
     GPRReg valueReg = value.gpr();
     GPRReg prototypeReg = prototype.gpr();
     GPRReg scratchReg = scratch.gpr();
+    GPRReg scratch2Reg = scratch2.gpr();
     
-    compileInstanceOfForObject(node, valueReg, prototypeReg, scratchReg);
+    compileInstanceOfForObject(node, valueReg, prototypeReg, scratchReg, scratch2Reg);
 
 #if USE(JSVALUE64)
     jsValueResult(scratchReg, node, DataFormatJSBoolean);
@@ -4362,10 +4372,12 @@ void SpeculativeJIT::compileToStringOnCell(Node* node)
     case StringOrStringObjectUse: {
         GPRTemporary result(this);
         GPRReg resultGPR = result.gpr();
-        
-        m_jit.loadPtr(JITCompiler::Address(op1GPR, JSCell::structureOffset()), resultGPR);
-        JITCompiler::Jump isString = m_jit.branchPtr(
-            JITCompiler::Equal, resultGPR, TrustedImmPtr(m_jit.vm()->stringStructure.get()));
+
+        m_jit.load32(JITCompiler::Address(op1GPR, JSCell::structureIDOffset()), resultGPR);
+        JITCompiler::Jump isString = m_jit.branchStructurePtr(
+            JITCompiler::Equal, 
+            resultGPR,
+            m_jit.vm()->stringStructure.get());
         
         speculateStringObjectForStructure(node->child1(), resultGPR);
         
@@ -4392,10 +4404,10 @@ void SpeculativeJIT::compileToStringOnCell(Node* node)
         flushRegisters();
         JITCompiler::Jump done;
         if (node->child1()->prediction() & SpecString) {
-            JITCompiler::Jump needCall = m_jit.branchPtr(
+            JITCompiler::Jump needCall = m_jit.branchStructurePtr(
                 JITCompiler::NotEqual,
-                JITCompiler::Address(op1GPR, JSCell::structureOffset()),
-                TrustedImmPtr(m_jit.vm()->stringStructure.get()));
+                JITCompiler::Address(op1GPR, JSCell::structureIDOffset()),
+                m_jit.vm()->stringStructure.get());
             m_jit.move(op1GPR, resultGPR);
             done = m_jit.jump();
             needCall.link(&m_jit);
@@ -4596,10 +4608,10 @@ void SpeculativeJIT::speculateObject(Edge edge)
     SpeculateCellOperand operand(this, edge);
     GPRReg gpr = operand.gpr();
     DFG_TYPE_CHECK(
-        JSValueSource::unboxedCell(gpr), edge, SpecObject, m_jit.branchPtr(
+        JSValueSource::unboxedCell(gpr), edge, SpecObject, m_jit.branchStructurePtr(
             MacroAssembler::Equal, 
-            MacroAssembler::Address(gpr, JSCell::structureOffset()), 
-            MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+            MacroAssembler::Address(gpr, JSCell::structureIDOffset()), 
+            m_jit.vm()->stringStructure.get()));
 }
 
 void SpeculativeJIT::speculateFinalObject(Edge edge)
@@ -4608,14 +4620,11 @@ void SpeculativeJIT::speculateFinalObject(Edge edge)
         return;
     
     SpeculateCellOperand operand(this, edge);
-    GPRTemporary structure(this);
     GPRReg gpr = operand.gpr();
-    GPRReg structureGPR = structure.gpr();
-    m_jit.loadPtr(MacroAssembler::Address(gpr, JSCell::structureOffset()), structureGPR);
     DFG_TYPE_CHECK(
         JSValueSource::unboxedCell(gpr), edge, SpecFinalObject, m_jit.branch8(
             MacroAssembler::NotEqual,
-            MacroAssembler::Address(structureGPR, Structure::typeInfoTypeOffset()),
+            MacroAssembler::Address(gpr, JSCell::typeInfoTypeOffset()),
             TrustedImm32(FinalObjectType)));
 }
 
@@ -4632,10 +4641,10 @@ void SpeculativeJIT::speculateObjectOrOther(Edge edge)
     MacroAssembler::Jump notCell = m_jit.branchTest64(
         MacroAssembler::NonZero, gpr, GPRInfo::tagMaskRegister);
     DFG_TYPE_CHECK(
-        JSValueRegs(gpr), edge, (~SpecCell) | SpecObject, m_jit.branchPtr(
+        JSValueRegs(gpr), edge, (~SpecCell) | SpecObject, m_jit.branchStructurePtr(
             MacroAssembler::Equal, 
-            MacroAssembler::Address(gpr, JSCell::structureOffset()), 
-            MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+            MacroAssembler::Address(gpr, JSCell::structureIDOffset()), 
+            m_jit.vm()->stringStructure.get()));
     MacroAssembler::Jump done = m_jit.jump();
     notCell.link(&m_jit);
     if (needsTypeCheck(edge, SpecCell | SpecOther)) {
@@ -4655,10 +4664,10 @@ void SpeculativeJIT::speculateObjectOrOther(Edge edge)
     MacroAssembler::Jump notCell =
         m_jit.branch32(MacroAssembler::NotEqual, tagGPR, TrustedImm32(JSValue::CellTag));
     DFG_TYPE_CHECK(
-        JSValueRegs(tagGPR, payloadGPR), edge, (~SpecCell) | SpecObject, m_jit.branchPtr(
+        JSValueRegs(tagGPR, payloadGPR), edge, (~SpecCell) | SpecObject, m_jit.branchStructurePtr(
             MacroAssembler::Equal, 
-            MacroAssembler::Address(payloadGPR, JSCell::structureOffset()), 
-            MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+            MacroAssembler::Address(payloadGPR, JSCell::structureIDOffset()), 
+            m_jit.vm()->stringStructure.get()));
     MacroAssembler::Jump done = m_jit.jump();
     notCell.link(&m_jit);
     if (needsTypeCheck(edge, SpecCell | SpecOther)) {
@@ -4678,10 +4687,10 @@ void SpeculativeJIT::speculateObjectOrOther(Edge edge)
 void SpeculativeJIT::speculateString(Edge edge, GPRReg cell)
 {
     DFG_TYPE_CHECK(
-        JSValueSource::unboxedCell(cell), edge, SpecString, m_jit.branchPtr(
+        JSValueSource::unboxedCell(cell), edge, SpecString, m_jit.branchStructurePtr(
             MacroAssembler::NotEqual, 
-            MacroAssembler::Address(cell, JSCell::structureOffset()), 
-            MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+            MacroAssembler::Address(cell, JSCell::structureIDOffset()), 
+            m_jit.vm()->stringStructure.get()));
 }
 
 void SpeculativeJIT::speculateStringIdentAndLoadStorage(Edge edge, GPRReg string, GPRReg storage)
@@ -4734,7 +4743,7 @@ void SpeculativeJIT::speculateString(Edge edge)
 
 void SpeculativeJIT::speculateStringObject(Edge edge, GPRReg gpr)
 {
-    speculateStringObjectForStructure(edge, JITCompiler::Address(gpr, JSCell::structureOffset()));
+    speculateStringObjectForStructure(edge, JITCompiler::Address(gpr, JSCell::structureIDOffset()));
 }
 
 void SpeculativeJIT::speculateStringObject(Edge edge)
@@ -4760,16 +4769,17 @@ void SpeculativeJIT::speculateStringOrStringObject(Edge edge)
     GPRReg gpr = operand.gpr();
     if (!needsTypeCheck(edge, SpecString | SpecStringObject))
         return;
+
+    GPRTemporary structureID(this);
+    GPRReg structureIDGPR = structureID.gpr();
+
+    m_jit.load32(JITCompiler::Address(gpr, JSCell::structureIDOffset()), structureIDGPR); 
+    JITCompiler::Jump isString = m_jit.branchStructurePtr(
+        JITCompiler::Equal,
+        structureIDGPR, 
+        m_jit.vm()->stringStructure.get());
     
-    GPRTemporary structure(this);
-    GPRReg structureGPR = structure.gpr();
-    
-    m_jit.loadPtr(JITCompiler::Address(gpr, JSCell::structureOffset()), structureGPR);
-    
-    JITCompiler::Jump isString = m_jit.branchPtr(
-        JITCompiler::Equal, structureGPR, TrustedImmPtr(m_jit.vm()->stringStructure.get()));
-    
-    speculateStringObjectForStructure(edge, structureGPR);
+    speculateStringObjectForStructure(edge, structureIDGPR);
     
     isString.link(&m_jit);
     
@@ -5034,10 +5044,10 @@ void SpeculativeJIT::emitSwitchChar(Node* node, SwitchData* data)
 #endif
         
         addBranch(
-            m_jit.branchPtr(
+            m_jit.branchStructurePtr(
                 MacroAssembler::NotEqual,
-                MacroAssembler::Address(op1Regs.payloadGPR(), JSCell::structureOffset()),
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())),
+                MacroAssembler::Address(op1Regs.payloadGPR(), JSCell::structureIDOffset()),
+                m_jit.vm()->stringStructure.get()),
             data->fallThrough.block);
         
         emitSwitchCharStringJump(data, op1Regs.payloadGPR(), tempGPR);
@@ -5334,10 +5344,10 @@ void SpeculativeJIT::emitSwitchString(Node* node, SwitchData* data)
 #endif
         
         addBranch(
-            m_jit.branchPtr(
+            m_jit.branchStructurePtr(
                 MacroAssembler::NotEqual,
-                MacroAssembler::Address(op1Regs.payloadGPR(), JSCell::structureOffset()),
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())),
+                MacroAssembler::Address(op1Regs.payloadGPR(), JSCell::structureIDOffset()),
+                m_jit.vm()->stringStructure.get()),
             data->fallThrough.block);
         
         emitSwitchStringOnString(data, op1Regs.payloadGPR());
@@ -5426,30 +5436,14 @@ void SpeculativeJIT::compileStoreBarrier(Node* node)
     noResult(node);
 }
 
-JITCompiler::Jump SpeculativeJIT::genericWriteBarrier(CCallHelpers& jit, GPRReg owner, GPRReg scratch1, GPRReg scratch2)
+JITCompiler::Jump SpeculativeJIT::genericWriteBarrier(CCallHelpers& jit, GPRReg owner)
 {
-    jit.move(owner, scratch1);
-    jit.move(owner, scratch2);
-
-    jit.andPtr(MacroAssembler::TrustedImmPtr(MarkedBlock::blockMask), scratch1);
-    jit.andPtr(MacroAssembler::TrustedImmPtr(~MarkedBlock::blockMask), scratch2);
-
-    // Shift index
-#if USE(JSVALUE64)
-    jit.rshift64(MacroAssembler::TrustedImm32(MarkedBlock::atomShiftAmount + MarkedBlock::markByteShiftAmount), scratch2);
-#else
-    jit.rshift32(MacroAssembler::TrustedImm32(MarkedBlock::atomShiftAmount + MarkedBlock::markByteShiftAmount), scratch2);
-#endif
-
-    // Emit load and branch
-    return jit.branchTest8(MacroAssembler::Zero, MacroAssembler::BaseIndex(scratch1, scratch2, MacroAssembler::TimesOne, MarkedBlock::offsetOfMarks()));
+    return jit.branchTest8(MacroAssembler::Zero, MacroAssembler::Address(owner, JSCell::gcDataOffset()));
 }
 
 JITCompiler::Jump SpeculativeJIT::genericWriteBarrier(CCallHelpers& jit, JSCell* owner)
 {
-    MarkedBlock* block = MarkedBlock::blockFor(owner);
-    size_t markIndex = (reinterpret_cast<size_t>(owner) & ~MarkedBlock::blockMask) >> (MarkedBlock::atomShiftAmount + MarkedBlock::markByteShiftAmount);
-    uint8_t* address = reinterpret_cast<uint8_t*>(reinterpret_cast<char*>(block) + MarkedBlock::offsetOfMarks()) + markIndex;
+    uint8_t* address = reinterpret_cast<uint8_t*>(owner) + JSCell::gcDataOffset();
     return jit.branchTest8(MacroAssembler::Zero, MacroAssembler::AbsoluteAddress(address));
 }
 
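The rewritten genericWriteBarrier drops both scratch registers because the state it tests now sits at a fixed offset inside the owner cell rather than in the MarkedBlock's side table. A rough before/after model of the fast path (all constants here are assumptions for illustration, not JSC's actual values):

    #include <cstddef>
    #include <cstdint>

    constexpr uintptr_t blockMask     = ~static_cast<uintptr_t>(16 * 1024 - 1); // assumed block size
    constexpr unsigned  markByteShift = 4;   // assumed atom/mark-byte shift
    constexpr size_t    offsetOfMarks = 64;  // assumed offset of the mark bytes in a block
    constexpr size_t    gcDataOffset  = 7;   // assumed offset of the GC byte in the cell header

    // Old fast path: derive the mark byte's address from the owner pointer.
    inline bool oldIsMarked(const void* owner)
    {
        uintptr_t bits = reinterpret_cast<uintptr_t>(owner);
        const uint8_t* marks = reinterpret_cast<const uint8_t*>((bits & blockMask) + offsetOfMarks);
        return marks[(bits & ~blockMask) >> markByteShift] != 0;
    }

    // New fast path: one byte test at a fixed offset from the owner.
    inline bool newIsMarked(const void* owner)
    {
        return reinterpret_cast<const uint8_t*>(owner)[gcDataOffset] != 0;
    }

This is also why the constant-owner overload below collapses to a single AbsoluteAddress test.
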
@@ -5509,14 +5503,14 @@ void SpeculativeJIT::writeBarrier(GPRReg ownerGPR, JSCell* value, GPRReg scratch
     if (Heap::isMarked(value))
         return;
 
-    JITCompiler::Jump definitelyNotMarked = genericWriteBarrier(m_jit, ownerGPR, scratch1, scratch2);
+    JITCompiler::Jump definitelyNotMarked = genericWriteBarrier(m_jit, ownerGPR);
     storeToWriteBarrierBuffer(ownerGPR, scratch1, scratch2);
     definitelyNotMarked.link(&m_jit);
 }
 
 void SpeculativeJIT::writeBarrier(GPRReg ownerGPR, GPRReg scratch1, GPRReg scratch2)
 {
-    JITCompiler::Jump definitelyNotMarked = genericWriteBarrier(m_jit, ownerGPR, scratch1, scratch2);
+    JITCompiler::Jump definitelyNotMarked = genericWriteBarrier(m_jit, ownerGPR);
     storeToWriteBarrierBuffer(ownerGPR, scratch1, scratch2);
     definitelyNotMarked.link(&m_jit);
 }
index ce738a8..2cc1bc0 100644

@@ -296,7 +296,7 @@ public:
     void storeToWriteBarrierBuffer(GPRReg cell, GPRReg scratch1, GPRReg scratch2);
     void storeToWriteBarrierBuffer(JSCell*, GPRReg scratch1, GPRReg scratch2);
 
-    static JITCompiler::Jump genericWriteBarrier(CCallHelpers& jit, GPRReg owner, GPRReg scratch1, GPRReg scratch2);
+    static JITCompiler::Jump genericWriteBarrier(CCallHelpers& jit, GPRReg owner);
     static JITCompiler::Jump genericWriteBarrier(CCallHelpers& jit, JSCell* owner);
     void writeBarrier(GPRReg owner, GPRReg scratch1, GPRReg scratch2);
     void writeBarrier(GPRReg owner, JSCell* value, GPRReg scratch1, GPRReg scratch2);
@@ -739,7 +739,7 @@ public:
     void nonSpeculativeNonPeepholeStrictEq(Node*, bool invert = false);
     bool nonSpeculativeStrictEq(Node*, bool invert = false);
     
-    void compileInstanceOfForObject(Node*, GPRReg valueReg, GPRReg prototypeReg, GPRReg scratchAndResultReg);
+    void compileInstanceOfForObject(Node*, GPRReg valueReg, GPRReg prototypeReg, GPRReg scratchAndResultReg, GPRReg scratch2Reg);
     void compileInstanceOf(Node*);
     
     ptrdiff_t calleeFrameOffset(int numArgs)
@@ -2105,7 +2105,7 @@ public:
         
         return slowPath;
     }
-    
+
     // Allocator for a cell of a specific size.
     template <typename StructureType> // StructureType can be GPR or ImmPtr.
     void emitAllocateJSCell(GPRReg resultGPR, GPRReg allocatorGPR, StructureType structure,
@@ -2120,7 +2120,7 @@ public:
         m_jit.storePtr(scratchGPR, MacroAssembler::Address(allocatorGPR, MarkedAllocator::offsetOfFreeListHead()));
 
         // Initialize the object's Structure.
-        m_jit.storePtr(structure, MacroAssembler::Address(resultGPR, JSCell::structureOffset()));
+        m_jit.emitStoreStructureWithTypeInfo(structure, resultGPR, scratchGPR);
     }
 
     // Allocator for an object of a specific size.
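
emitAllocateJSCell now finishes with emitStoreStructureWithTypeInfo rather than a single storePtr: initializing a new cell means seeding the StructureID and the inline type information together. A simplified model of the header word being composed, assuming the field layout sketched earlier (bit positions are illustrative):

    #include <cstdint>

    struct StructureBits {     // stand-in for the fields a Structure would supply
        uint32_t id;
        uint8_t  indexingType;
        uint8_t  jsType;
        uint8_t  typeInfoFlags;
    };

    // Compose the 64-bit header: ID in the low 32 bits, cached type info above it,
    // GC byte left at zero for a freshly allocated (unmarked) cell.
    inline uint64_t makeCellHeader(const StructureBits& s)
    {
        return static_cast<uint64_t>(s.id)
            | (static_cast<uint64_t>(s.indexingType)  << 32)
            | (static_cast<uint64_t>(s.jsType)        << 40)
            | (static_cast<uint64_t>(s.typeInfoFlags) << 48);
    }
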
@@ -3045,8 +3045,8 @@ void SpeculativeJIT::speculateStringObjectForStructure(Edge edge, StructureLocat
     if (!m_state.forNode(edge).m_currentKnownStructure.isSubsetOf(StructureSet(stringObjectStructure))) {
         speculationCheck(
             NotStringObject, JSValueRegs(), 0,
-            m_jit.branchPtr(
-                JITCompiler::NotEqual, structureLocation, TrustedImmPtr(stringObjectStructure)));
+            m_jit.branchStructurePtr(
+                JITCompiler::NotEqual, structureLocation, stringObjectStructure));
     }
 }
 
index 57db500..b94513e 100644
@@ -247,8 +247,10 @@ void SpeculativeJIT::nonSpeculativeNonPeepholeCompareNull(Edge operand, bool inv
         if (!isKnownCell(operand.node()))
             notCell = m_jit.branch32(MacroAssembler::NotEqual, argTagGPR, TrustedImm32(JSValue::CellTag));
 
-        m_jit.loadPtr(JITCompiler::Address(argPayloadGPR, JSCell::structureOffset()), resultPayloadGPR);
-        JITCompiler::Jump isMasqueradesAsUndefined = m_jit.branchTest8(JITCompiler::NonZero, JITCompiler::Address(resultPayloadGPR, Structure::typeInfoFlagsOffset()), JITCompiler::TrustedImm32(MasqueradesAsUndefined));
+        JITCompiler::Jump isMasqueradesAsUndefined = m_jit.branchTest8(
+            JITCompiler::NonZero, 
+            JITCompiler::Address(argPayloadGPR, JSCell::typeInfoFlagsOffset()), 
+            JITCompiler::TrustedImm32(MasqueradesAsUndefined));
         
         m_jit.move(invert ? TrustedImm32(1) : TrustedImm32(0), resultPayloadGPR);
         notMasqueradesAsUndefined = m_jit.jump();
@@ -312,12 +314,15 @@ void SpeculativeJIT::nonSpeculativePeepholeBranchNull(Edge operand, Node* branch
         if (!isKnownCell(operand.node()))
             notCell = m_jit.branch32(MacroAssembler::NotEqual, argTagGPR, TrustedImm32(JSValue::CellTag));
 
-        m_jit.loadPtr(JITCompiler::Address(argPayloadGPR, JSCell::structureOffset()), resultGPR);
-        branchTest8(JITCompiler::Zero, JITCompiler::Address(resultGPR, Structure::typeInfoFlagsOffset()), JITCompiler::TrustedImm32(MasqueradesAsUndefined), invert ? taken : notTaken);
+        branchTest8(JITCompiler::Zero, 
+            JITCompiler::Address(argPayloadGPR, JSCell::typeInfoFlagsOffset()), 
+            JITCompiler::TrustedImm32(MasqueradesAsUndefined), 
+            invert ? taken : notTaken);
    
         GPRReg localGlobalObjectGPR = localGlobalObject.gpr();
         GPRReg remoteGlobalObjectGPR = remoteGlobalObject.gpr();
         m_jit.move(TrustedImmPtr(m_jit.graph().globalObjectFor(m_currentNode->origin.semantic)), localGlobalObjectGPR);
+        m_jit.loadPtr(JITCompiler::Address(argPayloadGPR, JSCell::structureIDOffset()), resultGPR);
         m_jit.loadPtr(JITCompiler::Address(resultGPR, Structure::globalObjectOffset()), remoteGlobalObjectGPR);
         branchPtr(JITCompiler::Equal, localGlobalObjectGPR, remoteGlobalObjectGPR, invert ? notTaken : taken);
     }
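
Note that this hunk is in the 32-bit (JSVALUE32_64) path, where the field at JSCell::structureIDOffset() still holds a real Structure*, which is why the loadPtr/Structure::globalObjectOffset() sequence above works with no table lookup. A rough model of the platform split, under that assumption:

    #include <cstdint>

    struct Structure;

    #if UINTPTR_MAX == 0xffffffffu
    using StructureIDStorage = Structure*;   // 32-bit builds: still the raw pointer
    #else
    using StructureIDStorage = uint32_t;     // 64-bit builds: an index into the per-VM table
    #endif
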
@@ -1136,39 +1141,34 @@ void SpeculativeJIT::compileObjectEquality(Node* node)
         DFG_TYPE_CHECK(
             JSValueSource::unboxedCell(op1GPR), node->child1(), SpecObject, m_jit.branchPtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(op1GPR, JSCell::structureOffset()), 
+                MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()), 
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
         DFG_TYPE_CHECK(
             JSValueSource::unboxedCell(op2GPR), node->child2(), SpecObject, m_jit.branchPtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(op2GPR, JSCell::structureOffset()), 
+                MacroAssembler::Address(op2GPR, JSCell::structureIDOffset()), 
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
     } else {
-        GPRTemporary structure(this);
-        GPRReg structureGPR = structure.gpr();
-
-        m_jit.loadPtr(MacroAssembler::Address(op1GPR, JSCell::structureOffset()), structureGPR);
         DFG_TYPE_CHECK(
             JSValueSource::unboxedCell(op1GPR), node->child1(), SpecObject, m_jit.branchPtr(
                 MacroAssembler::Equal, 
-                structureGPR,
+                MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()),
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
         speculationCheck(BadType, JSValueSource::unboxedCell(op1GPR), node->child1(), 
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op1GPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
 
-        m_jit.loadPtr(MacroAssembler::Address(op2GPR, JSCell::structureOffset()), structureGPR);
         DFG_TYPE_CHECK(
             JSValueSource::unboxedCell(op2GPR), node->child2(), SpecObject, m_jit.branchPtr(
                 MacroAssembler::Equal, 
-                structureGPR,
+                MacroAssembler::Address(op2GPR, JSCell::structureIDOffset()),
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
         speculationCheck(BadType, JSValueSource::unboxedCell(op2GPR), node->child2(), 
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op2GPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
     }
     
@@ -1213,19 +1213,18 @@ void SpeculativeJIT::compileObjectToObjectOrOtherEquality(Edge leftChild, Edge r
         DFG_TYPE_CHECK(
             JSValueSource::unboxedCell(op1GPR), leftChild, SpecObject, m_jit.branchPtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(op1GPR, JSCell::structureOffset()), 
+                MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()), 
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
     } else {
-        m_jit.loadPtr(MacroAssembler::Address(op1GPR, JSCell::structureOffset()), structureGPR);
         DFG_TYPE_CHECK(
             JSValueSource::unboxedCell(op1GPR), leftChild, SpecObject, m_jit.branchPtr(
                 MacroAssembler::Equal,
-                structureGPR,
+                MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()), 
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
         speculationCheck(BadType, JSValueSource::unboxedCell(op1GPR), leftChild, 
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op1GPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
     }
     
@@ -1241,20 +1240,19 @@ void SpeculativeJIT::compileObjectToObjectOrOtherEquality(Edge leftChild, Edge r
             JSValueRegs(op2TagGPR, op2PayloadGPR), rightChild, (~SpecCell) | SpecObject,
             m_jit.branchPtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(op2PayloadGPR, JSCell::structureOffset()), 
+                MacroAssembler::Address(op2PayloadGPR, JSCell::structureIDOffset()), 
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
     } else {
-        m_jit.loadPtr(MacroAssembler::Address(op2PayloadGPR, JSCell::structureOffset()), structureGPR);
         DFG_TYPE_CHECK(
             JSValueRegs(op2TagGPR, op2PayloadGPR), rightChild, (~SpecCell) | SpecObject,
             m_jit.branchPtr(
                 MacroAssembler::Equal,
-                structureGPR,
+                MacroAssembler::Address(op2PayloadGPR, JSCell::structureIDOffset()), 
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
         speculationCheck(BadType, JSValueRegs(op2TagGPR, op2PayloadGPR), rightChild, 
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op2PayloadGPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
     }
     
@@ -1320,19 +1318,18 @@ void SpeculativeJIT::compilePeepHoleObjectToObjectOrOtherEquality(Edge leftChild
         DFG_TYPE_CHECK(
             JSValueSource::unboxedCell(op1GPR), leftChild, SpecObject, m_jit.branchPtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(op1GPR, JSCell::structureOffset()), 
+                MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()), 
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
     } else {
-        m_jit.loadPtr(MacroAssembler::Address(op1GPR, JSCell::structureOffset()), structureGPR);
         DFG_TYPE_CHECK(
             JSValueSource::unboxedCell(op1GPR), leftChild, SpecObject, m_jit.branchPtr(
                 MacroAssembler::Equal, 
-                structureGPR, 
+                MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()),
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
         speculationCheck(BadType, JSValueSource::unboxedCell(op1GPR), leftChild,
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op1GPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
     }
     
@@ -1347,20 +1344,19 @@ void SpeculativeJIT::compilePeepHoleObjectToObjectOrOtherEquality(Edge leftChild
             JSValueRegs(op2TagGPR, op2PayloadGPR), rightChild, (~SpecCell) | SpecObject,
             m_jit.branchPtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(op2PayloadGPR, JSCell::structureOffset()), 
+                MacroAssembler::Address(op2PayloadGPR, JSCell::structureIDOffset()), 
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
     } else {
-        m_jit.loadPtr(MacroAssembler::Address(op2PayloadGPR, JSCell::structureOffset()), structureGPR);
         DFG_TYPE_CHECK(
             JSValueRegs(op2TagGPR, op2PayloadGPR), rightChild, (~SpecCell) | SpecObject,
             m_jit.branchPtr(
                 MacroAssembler::Equal, 
-                structureGPR,
+                MacroAssembler::Address(op2PayloadGPR, JSCell::structureIDOffset()),
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
         speculationCheck(BadType, JSValueRegs(op2TagGPR, op2PayloadGPR), rightChild,
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op2PayloadGPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
     }
     
@@ -1443,10 +1439,10 @@ void SpeculativeJIT::compileObjectOrOtherLogicalNot(Edge nodeUse)
             JSValueRegs(valueTagGPR, valuePayloadGPR), nodeUse, (~SpecCell) | SpecObject,
             m_jit.branchPtr(
                 MacroAssembler::Equal,
-                MacroAssembler::Address(valuePayloadGPR, JSCell::structureOffset()),
+                MacroAssembler::Address(valuePayloadGPR, JSCell::structureIDOffset()),
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
     } else {
-        m_jit.loadPtr(MacroAssembler::Address(valuePayloadGPR, JSCell::structureOffset()), structureGPR);
+        m_jit.loadPtr(MacroAssembler::Address(valuePayloadGPR, JSCell::structureIDOffset()), structureGPR);
 
         DFG_TYPE_CHECK(
             JSValueRegs(valueTagGPR, valuePayloadGPR), nodeUse, (~SpecCell) | SpecObject,
@@ -1458,7 +1454,7 @@ void SpeculativeJIT::compileObjectOrOtherLogicalNot(Edge nodeUse)
         MacroAssembler::Jump isNotMasqueradesAsUndefined = 
             m_jit.branchTest8(
                 MacroAssembler::Zero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(valuePayloadGPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined));
 
         speculationCheck(BadType, JSValueRegs(valueTagGPR, valuePayloadGPR), nodeUse, 
@@ -1573,10 +1569,10 @@ void SpeculativeJIT::emitObjectOrOtherBranch(Edge nodeUse, BasicBlock* taken, Ba
             JSValueRegs(valueTagGPR, valuePayloadGPR), nodeUse, (~SpecCell) | SpecObject,
             m_jit.branchPtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(valuePayloadGPR, JSCell::structureOffset()), 
+                MacroAssembler::Address(valuePayloadGPR, JSCell::structureIDOffset()), 
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
     } else {
-        m_jit.loadPtr(MacroAssembler::Address(valuePayloadGPR, JSCell::structureOffset()), scratchGPR);
+        m_jit.loadPtr(MacroAssembler::Address(valuePayloadGPR, JSCell::structureIDOffset()), scratchGPR);
 
         DFG_TYPE_CHECK(
             JSValueRegs(valueTagGPR, valuePayloadGPR), nodeUse, (~SpecCell) | SpecObject,
@@ -1585,7 +1581,10 @@ void SpeculativeJIT::emitObjectOrOtherBranch(Edge nodeUse, BasicBlock* taken, Ba
                 scratchGPR,
                 MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
 
-        JITCompiler::Jump isNotMasqueradesAsUndefined = m_jit.branchTest8(JITCompiler::Zero, MacroAssembler::Address(scratchGPR, Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+        JITCompiler::Jump isNotMasqueradesAsUndefined = m_jit.branchTest8(
+            JITCompiler::Zero, 
+            MacroAssembler::Address(valuePayloadGPR, JSCell::typeInfoFlagsOffset()), 
+            TrustedImm32(MasqueradesAsUndefined));
 
         speculationCheck(BadType, JSValueRegs(valueTagGPR, valuePayloadGPR), nodeUse,
             m_jit.branchPtr(
@@ -3077,7 +3076,7 @@ void SpeculativeJIT::compile(Node* node)
             m_jit.move(op1PayloadGPR, resultPayloadGPR);
         } else {
             MacroAssembler::Jump alreadyPrimitive = m_jit.branch32(MacroAssembler::NotEqual, op1TagGPR, TrustedImm32(JSValue::CellTag));
-            MacroAssembler::Jump notPrimitive = m_jit.branchPtr(MacroAssembler::NotEqual, MacroAssembler::Address(op1PayloadGPR, JSCell::structureOffset()), MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get()));
+            MacroAssembler::Jump notPrimitive = m_jit.branchPtr(MacroAssembler::NotEqual, MacroAssembler::Address(op1PayloadGPR, JSCell::structureIDOffset()), MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get()));
             
             alreadyPrimitive.link(&m_jit);
             m_jit.move(op1TagGPR, resultTagGPR);
@@ -3110,7 +3109,7 @@ void SpeculativeJIT::compile(Node* node)
                     JITCompiler::NotEqual, op1TagGPR, TrustedImm32(JSValue::CellTag));
                 JITCompiler::Jump slowPath2 = m_jit.branchPtr(
                     JITCompiler::NotEqual,
-                    JITCompiler::Address(op1PayloadGPR, JSCell::structureOffset()),
+                    JITCompiler::Address(op1PayloadGPR, JSCell::structureIDOffset()),
                     TrustedImmPtr(m_jit.vm()->stringStructure.get()));
                 m_jit.move(op1PayloadGPR, resultGPR);
                 done = m_jit.jump();
@@ -3476,11 +3475,9 @@ void SpeculativeJIT::compile(Node* node)
         MacroAssembler::JumpList slowCases;
         slowCases.append(m_jit.branch32(
             MacroAssembler::NotEqual, thisValueTagGPR, TrustedImm32(JSValue::CellTag)));
-        m_jit.loadPtr(
-            MacroAssembler::Address(thisValuePayloadGPR, JSCell::structureOffset()), tempGPR);
         slowCases.append(m_jit.branch8(
             MacroAssembler::NotEqual,
-            MacroAssembler::Address(tempGPR, Structure::typeInfoTypeOffset()),
+            MacroAssembler::Address(thisValuePayloadGPR, JSCell::typeInfoTypeOffset()),
             TrustedImm32(FinalObjectType)));
         m_jit.move(thisValuePayloadGPR, tempGPR);
         m_jit.move(thisValueTagGPR, tempTagGPR);
@@ -3787,12 +3784,12 @@ void SpeculativeJIT::compile(Node* node)
                 BadCache, JSValueSource::unboxedCell(base.gpr()), 0,
                 m_jit.branchWeakPtr(
                     JITCompiler::NotEqual,
-                    JITCompiler::Address(base.gpr(), JSCell::structureOffset()),
+                    JITCompiler::Address(base.gpr(), JSCell::structureIDOffset()),
                     node->structureSet()[0]));
         } else {
             GPRTemporary structure(this);
             
-            m_jit.loadPtr(JITCompiler::Address(base.gpr(), JSCell::structureOffset()), structure.gpr());
+            m_jit.loadPtr(JITCompiler::Address(base.gpr(), JSCell::structureIDOffset()), structure.gpr());
             
             JITCompiler::JumpList done;
             
@@ -3823,7 +3820,7 @@ void SpeculativeJIT::compile(Node* node)
         
 #if !ASSERT_DISABLED
         SpeculateCellOperand op1(this, node->child1());
-        JITCompiler::Jump isOK = m_jit.branchPtr(JITCompiler::Equal, JITCompiler::Address(op1.gpr(), JSCell::structureOffset()), TrustedImmPtr(node->structure()));
+        JITCompiler::Jump isOK = m_jit.branchPtr(JITCompiler::Equal, JITCompiler::Address(op1.gpr(), JSCell::structureIDOffset()), TrustedImmPtr(node->structure()));
         m_jit.breakpoint();
         isOK.link(&m_jit);
 #else
@@ -3847,7 +3844,7 @@ void SpeculativeJIT::compile(Node* node)
         SpeculateCellOperand base(this, node->child1());
         GPRReg baseGPR = base.gpr();
         
-        m_jit.storePtr(MacroAssembler::TrustedImmPtr(node->structureTransitionData().newStructure), MacroAssembler::Address(baseGPR, JSCell::structureOffset()));
+        m_jit.storePtr(MacroAssembler::TrustedImmPtr(node->structureTransitionData().newStructure), MacroAssembler::Address(baseGPR, JSCell::structureIDOffset()));
         
         noResult(node);
         break;
@@ -4051,8 +4048,10 @@ void SpeculativeJIT::compile(Node* node)
         GPRTemporary structure(this);
 
         // Speculate that base 'ImplementsDefaultHasInstance'.
-        m_jit.loadPtr(MacroAssembler::Address(base.gpr(), JSCell::structureOffset()), structure.gpr());
-        speculationCheck(Uncountable, JSValueRegs(), 0, m_jit.branchTest8(MacroAssembler::Zero, MacroAssembler::Address(structure.gpr(), Structure::typeInfoFlagsOffset()), MacroAssembler::TrustedImm32(ImplementsDefaultHasInstance)));
+        speculationCheck(Uncountable, JSValueRegs(), 0, m_jit.branchTest8(
+            MacroAssembler::Zero, 
+            MacroAssembler::Address(base.gpr(), JSCell::typeInfoFlagsOffset()), 
+            MacroAssembler::TrustedImm32(ImplementsDefaultHasInstance)));
 
         noResult(node);
         break;
@@ -4080,8 +4079,10 @@ void SpeculativeJIT::compile(Node* node)
             m_jit.move(TrustedImm32(0), result.gpr());
             notMasqueradesAsUndefined = m_jit.jump();
         } else {
-            m_jit.loadPtr(JITCompiler::Address(value.payloadGPR(), JSCell::structureOffset()), result.gpr());
-            JITCompiler::Jump isMasqueradesAsUndefined = m_jit.branchTest8(JITCompiler::NonZero, JITCompiler::Address(result.gpr(), Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+            JITCompiler::Jump isMasqueradesAsUndefined = m_jit.branchTest8(
+                JITCompiler::NonZero, 
+                JITCompiler::Address(value.payloadGPR(), JSCell::typeInfoFlagsOffset()), 
+                TrustedImm32(MasqueradesAsUndefined));
             m_jit.move(TrustedImm32(0), result.gpr());
             notMasqueradesAsUndefined = m_jit.jump();
             
@@ -4124,8 +4125,10 @@ void SpeculativeJIT::compile(Node* node)
         
         JITCompiler::Jump isNotCell = m_jit.branch32(JITCompiler::NotEqual, value.tagGPR(), JITCompiler::TrustedImm32(JSValue::CellTag));
         
-        m_jit.loadPtr(JITCompiler::Address(value.payloadGPR(), JSCell::structureOffset()), result.gpr());
-        m_jit.compare8(JITCompiler::Equal, JITCompiler::Address(result.gpr(), Structure::typeInfoTypeOffset()), TrustedImm32(StringType), result.gpr());
+        m_jit.compare8(JITCompiler::Equal, 
+            JITCompiler::Address(value.payloadGPR(), JSCell::typeInfoTypeOffset()), 
+            TrustedImm32(StringType), 
+            result.gpr());
         JITCompiler::Jump done = m_jit.jump();
         
         isNotCell.link(&m_jit);
@@ -4178,8 +4181,10 @@ void SpeculativeJIT::compile(Node* node)
             DFG_TYPE_CHECK(JSValueRegs(tagGPR, payloadGPR), node->child1(), SpecCell, isNotCell);
 
         if (!node->child1()->shouldSpeculateObject() || node->child1().useKind() == StringUse) {
-            m_jit.loadPtr(JITCompiler::Address(payloadGPR, JSCell::structureOffset()), tempGPR);
-            JITCompiler::Jump notString = m_jit.branch8(JITCompiler::NotEqual, JITCompiler::Address(tempGPR, Structure::typeInfoTypeOffset()), TrustedImm32(StringType));
+            JITCompiler::Jump notString = m_jit.branch8(
+                JITCompiler::NotEqual, 
+                JITCompiler::Address(payloadGPR, JSCell::typeInfoTypeOffset()), 
+                TrustedImm32(StringType));
             if (node->child1().useKind() == StringUse)
                 DFG_TYPE_CHECK(JSValueRegs(tagGPR, payloadGPR), node->child1(), SpecString, notString);
             m_jit.move(TrustedImmPtr(m_jit.vm()->smallStrings.stringString()), resultGPR);
@@ -4708,7 +4713,7 @@ void SpeculativeJIT::writeBarrier(GPRReg ownerGPR, GPRReg valueTagGPR, Edge valu
     if (!isKnownCell(valueUse.node()))
         isNotCell = m_jit.branch32(JITCompiler::NotEqual, valueTagGPR, JITCompiler::TrustedImm32(JSValue::CellTag));
 
-    JITCompiler::Jump definitelyNotMarked = genericWriteBarrier(m_jit, ownerGPR, scratch1, scratch2);
+    JITCompiler::Jump definitelyNotMarked = genericWriteBarrier(m_jit, ownerGPR);
     storeToWriteBarrierBuffer(ownerGPR, scratch1, scratch2);
     definitelyNotMarked.link(&m_jit);
 
index 8cfc700..475b4db 100644
@@ -247,12 +247,15 @@ void SpeculativeJIT::nonSpeculativeNonPeepholeCompareNull(Edge operand, bool inv
     } else {
         GPRTemporary localGlobalObject(this);
         GPRTemporary remoteGlobalObject(this);
+        GPRTemporary scratch(this);
 
         if (!isKnownCell(operand.node()))
             notCell = m_jit.branchTest64(MacroAssembler::NonZero, argGPR, GPRInfo::tagMaskRegister);
 
-        m_jit.loadPtr(JITCompiler::Address(argGPR, JSCell::structureOffset()), resultGPR);
-        JITCompiler::Jump isMasqueradesAsUndefined = m_jit.branchTest8(JITCompiler::NonZero, JITCompiler::Address(resultGPR, Structure::typeInfoFlagsOffset()), JITCompiler::TrustedImm32(MasqueradesAsUndefined));
+        JITCompiler::Jump isMasqueradesAsUndefined = m_jit.branchTest8(
+            JITCompiler::NonZero, 
+            JITCompiler::Address(argGPR, JSCell::typeInfoFlagsOffset()), 
+            JITCompiler::TrustedImm32(MasqueradesAsUndefined));
 
         m_jit.move(invert ? TrustedImm32(1) : TrustedImm32(0), resultGPR);
         notMasqueradesAsUndefined = m_jit.jump();
@@ -261,6 +264,7 @@ void SpeculativeJIT::nonSpeculativeNonPeepholeCompareNull(Edge operand, bool inv
         GPRReg localGlobalObjectGPR = localGlobalObject.gpr();
         GPRReg remoteGlobalObjectGPR = remoteGlobalObject.gpr();
         m_jit.move(JITCompiler::TrustedImmPtr(m_jit.graph().globalObjectFor(m_currentNode->origin.semantic)), localGlobalObjectGPR);
+        m_jit.emitLoadStructure(argGPR, resultGPR, scratch.gpr());
         m_jit.loadPtr(JITCompiler::Address(resultGPR, Structure::globalObjectOffset()), remoteGlobalObjectGPR);
         m_jit.comparePtr(invert ? JITCompiler::NotEqual : JITCompiler::Equal, localGlobalObjectGPR, remoteGlobalObjectGPR, resultGPR);
     }
@@ -311,16 +315,20 @@ void SpeculativeJIT::nonSpeculativePeepholeBranchNull(Edge operand, Node* branch
     } else {
         GPRTemporary localGlobalObject(this);
         GPRTemporary remoteGlobalObject(this);
+        GPRTemporary scratch(this);
 
         if (!isKnownCell(operand.node()))
             notCell = m_jit.branchTest64(MacroAssembler::NonZero, argGPR, GPRInfo::tagMaskRegister);
 
-        m_jit.loadPtr(JITCompiler::Address(argGPR, JSCell::structureOffset()), resultGPR);
-        branchTest8(JITCompiler::Zero, JITCompiler::Address(resultGPR, Structure::typeInfoFlagsOffset()), JITCompiler::TrustedImm32(MasqueradesAsUndefined), invert ? taken : notTaken);
+        branchTest8(JITCompiler::Zero, 
+            JITCompiler::Address(argGPR, JSCell::typeInfoFlagsOffset()), 
+            JITCompiler::TrustedImm32(MasqueradesAsUndefined), 
+            invert ? taken : notTaken);
 
         GPRReg localGlobalObjectGPR = localGlobalObject.gpr();
         GPRReg remoteGlobalObjectGPR = remoteGlobalObject.gpr();
         m_jit.move(TrustedImmPtr(m_jit.graph().globalObjectFor(m_currentNode->origin.semantic)), localGlobalObjectGPR);
+        m_jit.emitLoadStructure(argGPR, resultGPR, scratch.gpr());
         m_jit.loadPtr(JITCompiler::Address(resultGPR, Structure::globalObjectOffset()), remoteGlobalObjectGPR);
         branchPtr(JITCompiler::Equal, localGlobalObjectGPR, remoteGlobalObjectGPR, invert ? notTaken : taken);
     }
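
On this 64-bit path the code still needs the actual Structure* in order to reach Structure::globalObjectOffset(), so a scratch register is added and emitLoadStructure decodes the 32-bit ID through the per-VM table. A minimal model of that indirection (the real table also has to handle growth and GC; this only shows the id-to-pointer lookup):

    #include <cstdint>
    #include <vector>

    struct Structure;

    class StructureIDTable {
    public:
        uint32_t add(Structure* structure)
        {
            m_table.push_back(structure);
            return static_cast<uint32_t>(m_table.size() - 1);
        }
        Structure* get(uint32_t structureID) const { return m_table[structureID]; }
    private:
        std::vector<Structure*> m_table;
    };
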
@@ -1519,41 +1527,36 @@ void SpeculativeJIT::compileObjectEquality(Node* node)
    
     if (masqueradesAsUndefinedWatchpointIsStillValid()) {
         DFG_TYPE_CHECK(
-            JSValueSource::unboxedCell(op1GPR), node->child1(), SpecObject, m_jit.branchPtr(
+            JSValueSource::unboxedCell(op1GPR), node->child1(), SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(op1GPR, JSCell::structureOffset()), 
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()), 
+                m_jit.vm()->stringStructure.get()));
         DFG_TYPE_CHECK(
-            JSValueSource::unboxedCell(op2GPR), node->child2(), SpecObject, m_jit.branchPtr(
+            JSValueSource::unboxedCell(op2GPR), node->child2(), SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(op2GPR, JSCell::structureOffset()), 
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(op2GPR, JSCell::structureIDOffset()), 
+                m_jit.vm()->stringStructure.get()));
     } else {
-        GPRTemporary structure(this);
-        GPRReg structureGPR = structure.gpr();
-
-        m_jit.loadPtr(MacroAssembler::Address(op1GPR, JSCell::structureOffset()), structureGPR);
         DFG_TYPE_CHECK(
-            JSValueSource::unboxedCell(op1GPR), node->child1(), SpecObject, m_jit.branchPtr(
+            JSValueSource::unboxedCell(op1GPR), node->child1(), SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal, 
-                structureGPR,
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()),
+                m_jit.vm()->stringStructure.get()));
         speculationCheck(BadType, JSValueSource::unboxedCell(op1GPR), node->child1(),
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op1GPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
 
-        m_jit.loadPtr(MacroAssembler::Address(op2GPR, JSCell::structureOffset()), structureGPR);
         DFG_TYPE_CHECK(
-            JSValueSource::unboxedCell(op2GPR), node->child2(), SpecObject, m_jit.branchPtr(
+            JSValueSource::unboxedCell(op2GPR), node->child2(), SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal, 
-                structureGPR, 
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(op2GPR, JSCell::structureIDOffset()),
+                m_jit.vm()->stringStructure.get()));
         speculationCheck(BadType, JSValueSource::unboxedCell(op2GPR), node->child2(),
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op2GPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
     }
     
@@ -1576,37 +1579,26 @@ void SpeculativeJIT::compileObjectToObjectOrOtherEquality(Edge leftChild, Edge r
     GPRReg op1GPR = op1.gpr();
     GPRReg op2GPR = op2.gpr();
     GPRReg resultGPR = result.gpr();
-    GPRTemporary structure;
-    GPRReg structureGPR = InvalidGPRReg;
 
     bool masqueradesAsUndefinedWatchpointValid =
         masqueradesAsUndefinedWatchpointIsStillValid();
 
-    if (!masqueradesAsUndefinedWatchpointValid) {
-        // The masquerades as undefined case will use the structure register, so allocate it here.
-        // Do this at the top of the function to avoid branching around a register allocation.
-        GPRTemporary realStructure(this);
-        structure.adopt(realStructure);
-        structureGPR = structure.gpr();
-    }
-
     if (masqueradesAsUndefinedWatchpointValid) {
         DFG_TYPE_CHECK(
-            JSValueSource::unboxedCell(op1GPR), leftChild, SpecObject, m_jit.branchPtr(
+            JSValueSource::unboxedCell(op1GPR), leftChild, SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(op1GPR, JSCell::structureOffset()), 
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()), 
+                m_jit.vm()->stringStructure.get()));
     } else {
-        m_jit.loadPtr(MacroAssembler::Address(op1GPR, JSCell::structureOffset()), structureGPR);
         DFG_TYPE_CHECK(
-            JSValueSource::unboxedCell(op1GPR), leftChild, SpecObject, m_jit.branchPtr(
+            JSValueSource::unboxedCell(op1GPR), leftChild, SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal,
-                structureGPR,
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()), 
+                m_jit.vm()->stringStructure.get()));
         speculationCheck(BadType, JSValueSource::unboxedCell(op1GPR), leftChild,
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op1GPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
     }
     
@@ -1618,21 +1610,20 @@ void SpeculativeJIT::compileObjectToObjectOrOtherEquality(Edge leftChild, Edge r
     // We know that within this branch, rightChild must be a cell. 
     if (masqueradesAsUndefinedWatchpointValid) {
         DFG_TYPE_CHECK(
-            JSValueRegs(op2GPR), rightChild, (~SpecCell) | SpecObject, m_jit.branchPtr(
+            JSValueRegs(op2GPR), rightChild, (~SpecCell) | SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(op2GPR, JSCell::structureOffset()), 
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(op2GPR, JSCell::structureIDOffset()), 
+                m_jit.vm()->stringStructure.get()));
     } else {
-        m_jit.loadPtr(MacroAssembler::Address(op2GPR, JSCell::structureOffset()), structureGPR);
         DFG_TYPE_CHECK(
-            JSValueRegs(op2GPR), rightChild, (~SpecCell) | SpecObject, m_jit.branchPtr(
+            JSValueRegs(op2GPR), rightChild, (~SpecCell) | SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal,
-                structureGPR,
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(op2GPR, JSCell::structureIDOffset()), 
+                m_jit.vm()->stringStructure.get()));
         speculationCheck(BadType, JSValueRegs(op2GPR), rightChild,
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op2GPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
     }
     
@@ -1679,37 +1670,26 @@ void SpeculativeJIT::compilePeepHoleObjectToObjectOrOtherEquality(Edge leftChild
     GPRReg op1GPR = op1.gpr();
     GPRReg op2GPR = op2.gpr();
     GPRReg resultGPR = result.gpr();
-    GPRTemporary structure;
-    GPRReg structureGPR = InvalidGPRReg;
     
-    bool masqueradesAsUndefinedWatchpointValid =
+    bool masqueradesAsUndefinedWatchpointValid = 
         masqueradesAsUndefinedWatchpointIsStillValid();
 
-    if (!masqueradesAsUndefinedWatchpointValid) {
-        // The masquerades as undefined case will use the structure register, so allocate it here.
-        // Do this at the top of the function to avoid branching around a register allocation.
-        GPRTemporary realStructure(this);
-        structure.adopt(realStructure);
-        structureGPR = structure.gpr();
-    }
-
     if (masqueradesAsUndefinedWatchpointValid) {
         DFG_TYPE_CHECK(
-            JSValueSource::unboxedCell(op1GPR), leftChild, SpecObject, m_jit.branchPtr(
+            JSValueSource::unboxedCell(op1GPR), leftChild, SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(op1GPR, JSCell::structureOffset()), 
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()), 
+                m_jit.vm()->stringStructure.get()));
     } else {
-        m_jit.loadPtr(MacroAssembler::Address(op1GPR, JSCell::structureOffset()), structureGPR);
         DFG_TYPE_CHECK(
-            JSValueSource::unboxedCell(op1GPR), leftChild, SpecObject, m_jit.branchPtr(
+            JSValueSource::unboxedCell(op1GPR), leftChild, SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal, 
-                structureGPR, 
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()),
+                m_jit.vm()->stringStructure.get()));
         speculationCheck(BadType, JSValueSource::unboxedCell(op1GPR), leftChild, 
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op1GPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
     }
 
@@ -1721,21 +1701,20 @@ void SpeculativeJIT::compilePeepHoleObjectToObjectOrOtherEquality(Edge leftChild
     // We know that within this branch, rightChild must be a cell. 
     if (masqueradesAsUndefinedWatchpointValid) {
         DFG_TYPE_CHECK(
-            JSValueRegs(op2GPR), rightChild, (~SpecCell) | SpecObject, m_jit.branchPtr(
+            JSValueRegs(op2GPR), rightChild, (~SpecCell) | SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(op2GPR, JSCell::structureOffset()), 
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(op2GPR, JSCell::structureIDOffset()), 
+                m_jit.vm()->stringStructure.get()));
     } else {
-        m_jit.loadPtr(MacroAssembler::Address(op2GPR, JSCell::structureOffset()), structureGPR);
         DFG_TYPE_CHECK(
-            JSValueRegs(op2GPR), rightChild, (~SpecCell) | SpecObject, m_jit.branchPtr(
+            JSValueRegs(op2GPR), rightChild, (~SpecCell) | SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal, 
-                structureGPR, 
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(op2GPR, JSCell::structureIDOffset()),
+                m_jit.vm()->stringStructure.get()));
         speculationCheck(BadType, JSValueRegs(op2GPR), rightChild,
             m_jit.branchTest8(
                 MacroAssembler::NonZero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(op2GPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined)));
     }
     
@@ -1833,6 +1812,8 @@ void SpeculativeJIT::compileObjectOrOtherLogicalNot(Edge nodeUse)
     GPRReg resultGPR = result.gpr();
     GPRTemporary structure;
     GPRReg structureGPR = InvalidGPRReg;
+    GPRTemporary scratch;
+    GPRReg scratchGPR = InvalidGPRReg;
 
     bool masqueradesAsUndefinedWatchpointValid =
         masqueradesAsUndefinedWatchpointIsStillValid();
@@ -1841,32 +1822,34 @@ void SpeculativeJIT::compileObjectOrOtherLogicalNot(Edge nodeUse)
         // The masquerades as undefined case will use the structure register, so allocate it here.
         // Do this at the top of the function to avoid branching around a register allocation.
         GPRTemporary realStructure(this);
+        GPRTemporary realScratch(this);
         structure.adopt(realStructure);
+        scratch.adopt(realScratch);
         structureGPR = structure.gpr();
+        scratchGPR = scratch.gpr();
     }
 
     MacroAssembler::Jump notCell = m_jit.branchTest64(MacroAssembler::NonZero, valueGPR, GPRInfo::tagMaskRegister);
     if (masqueradesAsUndefinedWatchpointValid) {
         DFG_TYPE_CHECK(
-            JSValueRegs(valueGPR), nodeUse, (~SpecCell) | SpecObject, m_jit.branchPtr(
+            JSValueRegs(valueGPR), nodeUse, (~SpecCell) | SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal,
-                MacroAssembler::Address(valueGPR, JSCell::structureOffset()),
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(valueGPR, JSCell::structureIDOffset()),
+                m_jit.vm()->stringStructure.get()));
     } else {
-        m_jit.loadPtr(MacroAssembler::Address(valueGPR, JSCell::structureOffset()), structureGPR);
-
         DFG_TYPE_CHECK(
-            JSValueRegs(valueGPR), nodeUse, (~SpecCell) | SpecObject, m_jit.branchPtr(
+            JSValueRegs(valueGPR), nodeUse, (~SpecCell) | SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal,
-                structureGPR,
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(valueGPR, JSCell::structureIDOffset()), 
+                m_jit.vm()->stringStructure.get()));
 
         MacroAssembler::Jump isNotMasqueradesAsUndefined = 
             m_jit.branchTest8(
                 MacroAssembler::Zero, 
-                MacroAssembler::Address(structureGPR, Structure::typeInfoFlagsOffset()), 
+                MacroAssembler::Address(valueGPR, JSCell::typeInfoFlagsOffset()), 
                 MacroAssembler::TrustedImm32(MasqueradesAsUndefined));
 
+        m_jit.emitLoadStructure(valueGPR, structureGPR, scratchGPR);
         speculationCheck(BadType, JSValueRegs(valueGPR), nodeUse, 
             m_jit.branchPtr(
                 MacroAssembler::Equal, 
@@ -1985,31 +1968,41 @@ void SpeculativeJIT::emitObjectOrOtherBranch(Edge nodeUse, BasicBlock* taken, Ba
 {
     JSValueOperand value(this, nodeUse, ManualOperandSpeculation);
     GPRTemporary scratch(this);
+    GPRTemporary structure;
     GPRReg valueGPR = value.gpr();
     GPRReg scratchGPR = scratch.gpr();
-    
+    GPRReg structureGPR = InvalidGPRReg;
+
+    if (!masqueradesAsUndefinedWatchpointIsStillValid()) {
+        GPRTemporary realStructure(this);
+        structure.adopt(realStructure);
+        structureGPR = structure.gpr();
+    }
+
     MacroAssembler::Jump notCell = m_jit.branchTest64(MacroAssembler::NonZero, valueGPR, GPRInfo::tagMaskRegister);
     if (masqueradesAsUndefinedWatchpointIsStillValid()) {
         DFG_TYPE_CHECK(
-            JSValueRegs(valueGPR), nodeUse, (~SpecCell) | SpecObject, m_jit.branchPtr(
+            JSValueRegs(valueGPR), nodeUse, (~SpecCell) | SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(valueGPR, JSCell::structureOffset()),
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(valueGPR, JSCell::structureIDOffset()),
+                m_jit.vm()->stringStructure.get()));                
     } else {
-        m_jit.loadPtr(MacroAssembler::Address(valueGPR, JSCell::structureOffset()), scratchGPR);
-
         DFG_TYPE_CHECK(
-            JSValueRegs(valueGPR), nodeUse, (~SpecCell) | SpecObject, m_jit.branchPtr(
+            JSValueRegs(valueGPR), nodeUse, (~SpecCell) | SpecObject, m_jit.branchStructurePtr(
                 MacroAssembler::Equal, 
-                scratchGPR,
-                MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get())));
+                MacroAssembler::Address(valueGPR, JSCell::structureIDOffset()),
+                m_jit.vm()->stringStructure.get()));
 
-        JITCompiler::Jump isNotMasqueradesAsUndefined = m_jit.branchTest8(JITCompiler::Zero, MacroAssembler::Address(scratchGPR, Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+        JITCompiler::Jump isNotMasqueradesAsUndefined = m_jit.branchTest8(
+            JITCompiler::Zero, 
+            MacroAssembler::Address(valueGPR, JSCell::typeInfoFlagsOffset()), 
+            TrustedImm32(MasqueradesAsUndefined));
 
+        m_jit.emitLoadStructure(valueGPR, structureGPR, scratchGPR);
         speculationCheck(BadType, JSValueRegs(valueGPR), nodeUse,
             m_jit.branchPtr(
                 MacroAssembler::Equal, 
-                MacroAssembler::Address(scratchGPR, Structure::globalObjectOffset()), 
+                MacroAssembler::Address(structureGPR, Structure::globalObjectOffset()), 
                 MacroAssembler::TrustedImmPtr(m_jit.graph().globalObjectFor(m_currentNode->origin.semantic))));
 
         isNotMasqueradesAsUndefined.link(&m_jit);
@@ -3417,7 +3410,10 @@ void SpeculativeJIT::compile(Node* node)
         op1.use();
         
         MacroAssembler::Jump alreadyPrimitive = m_jit.branchTest64(MacroAssembler::NonZero, op1GPR, GPRInfo::tagMaskRegister);
-        MacroAssembler::Jump notPrimitive = m_jit.branchPtr(MacroAssembler::NotEqual, MacroAssembler::Address(op1GPR, JSCell::structureOffset()), MacroAssembler::TrustedImmPtr(m_jit.vm()->stringStructure.get()));
+        MacroAssembler::Jump notPrimitive = m_jit.branchStructurePtr(
+            MacroAssembler::NotEqual, 
+            MacroAssembler::Address(op1GPR, JSCell::structureIDOffset()), 
+            m_jit.vm()->stringStructure.get());
         
         alreadyPrimitive.link(&m_jit);
         m_jit.move(op1GPR, resultGPR);
@@ -3443,10 +3439,10 @@ void SpeculativeJIT::compile(Node* node)
             if (node->child1()->prediction() & SpecString) {
                 JITCompiler::Jump slowPath1 = m_jit.branchTest64(
                     JITCompiler::NonZero, op1GPR, GPRInfo::tagMaskRegister);
-                JITCompiler::Jump slowPath2 = m_jit.branchPtr(
+                JITCompiler::Jump slowPath2 = m_jit.branchStructurePtr(
                     JITCompiler::NotEqual,
-                    JITCompiler::Address(op1GPR, JSCell::structureOffset()),
-                    TrustedImmPtr(m_jit.vm()->stringStructure.get()));
+                    JITCompiler::Address(op1GPR, JSCell::structureIDOffset()),
+                    m_jit.vm()->stringStructure.get());
                 m_jit.move(op1GPR, resultGPR);
                 done = m_jit.jump();
                 slowPath1.link(&m_jit);
@@ -3665,7 +3661,7 @@ void SpeculativeJIT::compile(Node* node)
                 emitAllocateBasicStorage(resultGPR, storageGPR));
             m_jit.subPtr(scratchGPR, storageGPR);
             Structure* structure = globalObject->arrayStructureForIndexingTypeDuringAllocation(node->indexingType());
-            emitAllocateJSObject<JSArray>(resultGPR, ImmPtr(structure), storageGPR, scratchGPR, scratch2GPR, slowCases);
+            emitAllocateJSObject<JSArray>(resultGPR, TrustedImmPtr(structure), storageGPR, scratchGPR, scratch2GPR, slowCases);
             
             m_jit.store32(sizeGPR, MacroAssembler::Address(storageGPR, Butterfly::offsetOfPublicLength()));
             m_jit.store32(sizeGPR, MacroAssembler::Address(storageGPR, Butterfly::offsetOfVectorLength()));
@@ -3803,11 +3799,9 @@ void SpeculativeJIT::compile(Node* node)
         MacroAssembler::JumpList slowCases;
         slowCases.append(m_jit.branchTest64(
             MacroAssembler::NonZero, thisValueGPR, GPRInfo::tagMaskRegister));
-        m_jit.loadPtr(
-            MacroAssembler::Address(thisValueGPR, JSCell::structureOffset()), tempGPR);
         slowCases.append(m_jit.branch8(
             MacroAssembler::NotEqual,
-            MacroAssembler::Address(tempGPR, Structure::typeInfoTypeOffset()),
+            MacroAssembler::Address(thisValueGPR, JSCell::typeInfoTypeOffset()),
             TrustedImm32(FinalObjectType)));
         m_jit.move(thisValueGPR, tempGPR);
         J_JITOperation_EJ function;
@@ -4098,24 +4092,20 @@ void SpeculativeJIT::compile(Node* node)
         if (node->structureSet().size() == 1) {
             speculationCheck(
                 exitKind, JSValueSource::unboxedCell(base.gpr()), 0,
-                m_jit.branchWeakPtr(
+                m_jit.branchWeakStructure(
                     JITCompiler::NotEqual,
-                    JITCompiler::Address(base.gpr(), JSCell::structureOffset()),
+                    JITCompiler::Address(base.gpr(), JSCell::structureIDOffset()),
                     node->structureSet()[0]));
         } else {
-            GPRTemporary structure(this);
-            
-            m_jit.loadPtr(JITCompiler::Address(base.gpr(), JSCell::structureOffset()), structure.gpr());
-            
             JITCompiler::JumpList done;
             
             for (size_t i = 0; i < node->structureSet().size() - 1; ++i)
-                done.append(m_jit.branchWeakPtr(JITCompiler::Equal, structure.gpr(), node->structureSet()[i]));
+                done.append(m_jit.branchWeakStructure(JITCompiler::Equal, MacroAssembler::Address(base.gpr(), JSCell::structureIDOffset()), node->structureSet()[i]));
             
             speculationCheck(
                 exitKind, JSValueSource::unboxedCell(base.gpr()), 0,
-                m_jit.branchWeakPtr(
-                    JITCompiler::NotEqual, structure.gpr(), node->structureSet().last()));
+                m_jit.branchWeakStructure(
+                    JITCompiler::NotEqual, MacroAssembler::Address(base.gpr(), JSCell::structureIDOffset()), node->structureSet().last()));
             
             done.link(&m_jit);
         }
@@ -4131,12 +4121,15 @@ void SpeculativeJIT::compile(Node* node)
         // we shouldn't really load it since that could be a waste. For now though,
         // we'll just rely on the fact that when a watchpoint fires then that's
         // quite a hint already.
-        
+
         m_jit.addWeakReference(node->structure());
 
 #if !ASSERT_DISABLED
         SpeculateCellOperand op1(this, node->child1());
-        JITCompiler::Jump isOK = m_jit.branchPtr(JITCompiler::Equal, JITCompiler::Address(op1.gpr(), JSCell::structureOffset()), TrustedImmPtr(node->structure()));
+        JITCompiler::Jump isOK = m_jit.branchStructurePtr(
+            JITCompiler::Equal, 
+            JITCompiler::Address(op1.gpr(), JSCell::structureIDOffset()), 
+            node->structure());
         m_jit.breakpoint();
         isOK.link(&m_jit);
 #else
@@ -4155,14 +4148,18 @@ void SpeculativeJIT::compile(Node* node)
     }
         
     case PutStructure: {
+        Structure* oldStructure = node->structureTransitionData().previousStructure;
+        Structure* newStructure = node->structureTransitionData().newStructure;
+
         m_jit.jitCode()->common.notifyCompilingStructureTransition(m_jit.graph().m_plan, m_jit.codeBlock(), node);
 
         SpeculateCellOperand base(this, node->child1());
-        GPRTemporary scratch1(this);
-        GPRTemporary scratch2(this);
         GPRReg baseGPR = base.gpr();
         
-        m_jit.storePtr(MacroAssembler::TrustedImmPtr(node->structureTransitionData().newStructure), MacroAssembler::Address(baseGPR, JSCell::structureOffset()));
+        ASSERT_UNUSED(oldStructure, oldStructure->indexingType() == newStructure->indexingType());
+        ASSERT(oldStructure->typeInfo().type() == newStructure->typeInfo().type());
+        ASSERT(oldStructure->typeInfo().inlineTypeFlags() == newStructure->typeInfo().inlineTypeFlags());
+        m_jit.store32(MacroAssembler::TrustedImm32(newStructure->id()), MacroAssembler::Address(baseGPR, JSCell::structureIDOffset()));
         
         noResult(node);
         break;
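
With the type information cached directly in the cell header, PutStructure can get away with a single 32-bit store only because the transition leaves the rest of the header unchanged, which is exactly what the new ASSERTs encode. The same invariant written out as a predicate (the helper name is illustrative):

    static bool transitionOnlyChangesStructureID(Structure* oldStructure, Structure* newStructure)
    {
        // If any of these differed, the indexingType/type/flags bytes copied into
        // the JSCell at allocation time would go stale after the transition.
        return oldStructure->indexingType() == newStructure->indexingType()
            && oldStructure->typeInfo().type() == newStructure->typeInfo().type()
            && oldStructure->typeInfo().inlineTypeFlags() == newStructure->typeInfo().inlineTypeFlags();
    }
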
@@ -4342,8 +4339,10 @@ void SpeculativeJIT::compile(Node* node)
         GPRTemporary structure(this);
 
         // Speculate that base 'ImplementsDefaultHasInstance'.
-        m_jit.loadPtr(MacroAssembler::Address(base.gpr(), JSCell::structureOffset()), structure.gpr());
-        speculationCheck(Uncountable, JSValueRegs(), 0, m_jit.branchTest8(MacroAssembler::Zero, MacroAssembler::Address(structure.gpr(), Structure::typeInfoFlagsOffset()), MacroAssembler::TrustedImm32(ImplementsDefaultHasInstance)));
+        speculationCheck(Uncountable, JSValueRegs(), 0, m_jit.branchTest8(
+            MacroAssembler::Zero, 
+            MacroAssembler::Address(base.gpr(), JSCell::typeInfoFlagsOffset()), 
+            MacroAssembler::TrustedImm32(ImplementsDefaultHasInstance)));
 
         noResult(node);
         break;
@@ -4371,8 +4370,10 @@ void SpeculativeJIT::compile(Node* node)
             m_jit.move(TrustedImm32(0), result.gpr());
             notMasqueradesAsUndefined = m_jit.jump();
         } else {
-            m_jit.loadPtr(JITCompiler::Address(value.gpr(), JSCell::structureOffset()), result.gpr());
-            JITCompiler::Jump isMasqueradesAsUndefined = m_jit.branchTest8(JITCompiler::NonZero, JITCompiler::Address(result.gpr(), Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+            JITCompiler::Jump isMasqueradesAsUndefined = m_jit.branchTest8(
+                JITCompiler::NonZero, 
+                JITCompiler::Address(value.gpr(), JSCell::typeInfoFlagsOffset()), 
+                TrustedImm32(MasqueradesAsUndefined));
             m_jit.move(TrustedImm32(0), result.gpr());
             notMasqueradesAsUndefined = m_jit.jump();
 
@@ -4419,8 +4420,10 @@ void SpeculativeJIT::compile(Node* node)
         
         JITCompiler::Jump isNotCell = m_jit.branchTest64(JITCompiler::NonZero, value.gpr(), GPRInfo::tagMaskRegister);
         
-        m_jit.loadPtr(JITCompiler::Address(value.gpr(), JSCell::structureOffset()), result.gpr());
-        m_jit.compare8(JITCompiler::Equal, JITCompiler::Address(result.gpr(), Structure::typeInfoTypeOffset()), TrustedImm32(StringType), result.gpr());
+        m_jit.compare8(JITCompiler::Equal, 
+            JITCompiler::Address(value.gpr(), JSCell::typeInfoTypeOffset()), 
+            TrustedImm32(StringType), 
+            result.gpr());
         m_jit.or32(TrustedImm32(ValueFalse), result.gpr());
         JITCompiler::Jump done = m_jit.jump();
         
@@ -4459,8 +4462,6 @@ void SpeculativeJIT::compile(Node* node)
     case TypeOf: {
         JSValueOperand value(this, node->child1(), ManualOperandSpeculation);
         GPRReg valueGPR = value.gpr();
-        GPRTemporary temp(this);
-        GPRReg tempGPR = temp.gpr();
         GPRResult result(this);
         GPRReg resultGPR = result.gpr();
         JITCompiler::JumpList doneJumps;
@@ -4474,8 +4475,10 @@ void SpeculativeJIT::compile(Node* node)
             DFG_TYPE_CHECK(JSValueSource(valueGPR), node->child1(), SpecCell, isNotCell);
 
         if (!node->child1()->shouldSpeculateObject() || node->child1().useKind() == StringUse) {
-            m_jit.loadPtr(JITCompiler::Address(valueGPR, JSCell::structureOffset()), tempGPR);
-            JITCompiler::Jump notString = m_jit.branch8(JITCompiler::NotEqual, JITCompiler::Address(tempGPR, Structure::typeInfoTypeOffset()), TrustedImm32(StringType));
+            JITCompiler::Jump notString = m_jit.branch8(
+                JITCompiler::NotEqual, 
+                JITCompiler::Address(valueGPR, JSCell::typeInfoTypeOffset()), 
+                TrustedImm32(StringType));
             if (node->child1().useKind() == StringUse)
                 DFG_TYPE_CHECK(JSValueSource(valueGPR), node->child1(), SpecString, notString);
             m_jit.move(TrustedImmPtr(m_jit.vm()->smallStrings.stringString()), resultGPR);
@@ -5025,7 +5028,7 @@ void SpeculativeJIT::writeBarrier(GPRReg ownerGPR, GPRReg valueGPR, Edge valueUs
     if (!isKnownCell(valueUse.node()))
         isNotCell = m_jit.branchTest64(JITCompiler::NonZero, valueGPR, GPRInfo::tagMaskRegister);
 
-    JITCompiler::Jump definitelyNotMarked = genericWriteBarrier(m_jit, ownerGPR, scratch1, scratch2);
+    JITCompiler::Jump definitelyNotMarked = genericWriteBarrier(m_jit, ownerGPR);
     storeToWriteBarrierBuffer(ownerGPR, scratch1, scratch2);
     definitelyNotMarked.link(&m_jit);
 
index c429f78..bf58ecb 100644
@@ -28,6 +28,7 @@
 
 #if ENABLE(DFG_JIT)
 
+#include "APIShims.h"
 #include "CodeBlock.h"
 #include "DeferGC.h"
 #include "DFGLongLivedState.h"
index 3b992e1..2189cd9 100644
@@ -45,7 +45,7 @@ AbstractHeapRepository::AbstractHeapRepository(LContext context)
     FOR_EACH_ABSTRACT_FIELD(ABSTRACT_FIELD_INITIALIZATION)
 #undef ABSTRACT_FIELD_INITIALIZATION
     
-    , JSCell_freeListNext(JSCell_structure)
+    , JSCell_freeListNext(JSCell_structureID)
     
 #define INDEXED_ABSTRACT_HEAP_INITIALIZATION(name, offset, size) , name(context, &root, #name, offset, size)
     FOR_EACH_INDEXED_ABSTRACT_HEAP(INDEXED_ABSTRACT_HEAP_INITIALIZATION)
index e656228..f3e2bdd 100644
@@ -48,7 +48,11 @@ namespace JSC { namespace FTL {
     macro(JSArrayBufferView_length, JSArrayBufferView::offsetOfLength()) \
     macro(JSArrayBufferView_mode, JSArrayBufferView::offsetOfMode()) \
     macro(JSArrayBufferView_vector, JSArrayBufferView::offsetOfVector()) \
-    macro(JSCell_structure, JSCell::structureOffset()) \
+    macro(JSCell_structureID, JSCell::structureIDOffset()) \
+    macro(JSCell_typeInfoFlags, JSCell::typeInfoFlagsOffset()) \
+    macro(JSCell_typeInfoType, JSCell::typeInfoTypeOffset()) \
+    macro(JSCell_indexingType, JSCell::indexingTypeOffset()) \
+    macro(JSCell_gcData, JSCell::gcDataOffset()) \
     macro(JSFunction_executable, JSFunction::offsetOfExecutable()) \
     macro(JSFunction_scope, JSFunction::offsetOfScopeChain()) \
     macro(JSObject_butterfly, JSObject::butterflyOffset()) \
@@ -62,11 +66,9 @@ namespace JSC { namespace FTL {
     macro(MarkedBlock_markBits, MarkedBlock::offsetOfMarks()) \
     macro(StringImpl_data, StringImpl::dataOffset()) \
     macro(StringImpl_hashAndFlags, StringImpl::flagsOffset()) \
+    macro(Structure_structureID, Structure::structureIDOffset()) \
     macro(Structure_classInfo, Structure::classInfoOffset()) \
     macro(Structure_globalObject, Structure::globalObjectOffset()) \
-    macro(Structure_indexingType, Structure::indexingTypeOffset()) \
-    macro(Structure_typeInfoFlags, Structure::typeInfoFlagsOffset()) \
-    macro(Structure_typeInfoType, Structure::typeInfoTypeOffset())
 
 #define FOR_EACH_INDEXED_ABSTRACT_HEAP(macro) \
     macro(JSRopeString_fibers, JSRopeString::offsetOfFibers(), sizeof(WriteBarrier<JSString>)) \
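
The new JSCell_* abstract fields above describe the repacked cell header. A self-contained model of the assumed layout (the field order is an assumption, chosen to be consistent with the offsets and the single 64-bit header store used elsewhere in this patch):

    #include <cstdint>

    struct CellHeaderModel {
        uint32_t structureID;    // index into the per-VM StructureIDTable
        uint8_t  indexingType;   // previously read through the Structure
        uint8_t  type;           // JSType
        uint8_t  typeInfoFlags;  // inline flags, e.g. MasqueradesAsUndefined
        uint8_t  gcData;         // mark state; 0 for freshly allocated cells
    };
    static_assert(sizeof(CellHeaderModel) == 8, "the header still fits in one word");
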
index 99351a6..fd2f793 100644
@@ -1540,12 +1540,12 @@ private:
         else
             exitKind = BadCache;
         
-        LValue structure = m_out.loadPtr(cell, m_heaps.JSCell_structure);
+        LValue structureID = m_out.load32(cell, m_heaps.JSCell_structureID);
         
         if (m_node->structureSet().size() == 1) {
             speculate(
                 exitKind, jsValueValue(cell), 0,
-                m_out.notEqual(structure, weakPointer(m_node->structureSet()[0])));
+                m_out.notEqual(structureID, weakStructure(m_node->structureSet()[0])));
             return;
         }
         
@@ -1555,14 +1555,14 @@ private:
         for (unsigned i = 0; i < m_node->structureSet().size() - 1; ++i) {
             LBasicBlock nextStructure = FTL_NEW_BLOCK(m_out, ("CheckStructure nextStructure"));
             m_out.branch(
-                m_out.equal(structure, weakPointer(m_node->structureSet()[i])),
+                m_out.equal(structureID, weakStructure(m_node->structureSet()[i])),
                 unsure(continuation), unsure(nextStructure));
             m_out.appendTo(nextStructure);
         }
         
         speculate(
             exitKind, jsValueValue(cell), 0,
-            m_out.notEqual(structure, weakPointer(m_node->structureSet().last())));
+            m_out.notEqual(structureID, weakStructure(m_node->structureSet().last())));
         
         m_out.jump(continuation);
         m_out.appendTo(continuation, lastNext);
@@ -1602,10 +1602,10 @@ private:
         LBasicBlock unexpectedStructure = FTL_NEW_BLOCK(m_out, ("ArrayifyToStructure unexpected structure"));
         LBasicBlock continuation = FTL_NEW_BLOCK(m_out, ("ArrayifyToStructure continuation"));
         
-        LValue structure = m_out.loadPtr(cell, m_heaps.JSCell_structure);
+        LValue structureID = m_out.load32(cell, m_heaps.JSCell_structureID);
         
         m_out.branch(
-            m_out.notEqual(structure, weakPointer(m_node->structure())),
+            m_out.notEqual(structureID, weakStructure(m_node->structure())),
             rarely(unexpectedStructure), usually(continuation));
         
         LBasicBlock lastNext = m_out.appendTo(unexpectedStructure, continuation);
@@ -1646,10 +1646,9 @@ private:
             break;
         }
         
-        structure = m_out.loadPtr(cell, m_heaps.JSCell_structure);
         speculate(
             BadIndexingType, jsValueValue(cell), 0,
-            m_out.notEqual(structure, weakPointer(m_node->structure())));
+            m_out.notEqual(structureID, weakStructure(m_node->structure())));
         m_out.jump(continuation);
         
         m_out.appendTo(continuation, lastNext);
@@ -1658,10 +1657,17 @@ private:
     void compilePutStructure()
     {
         m_ftlState.jitCode->common.notifyCompilingStructureTransition(m_graph.m_plan, codeBlock(), m_node);
-        
-        m_out.storePtr(
-            weakPointer(m_node->structureTransitionData().newStructure),
-            lowCell(m_node->child1()), m_heaps.JSCell_structure);
+
+        Structure* oldStructure = m_node->structureTransitionData().previousStructure;
+        Structure* newStructure = m_node->structureTransitionData().newStructure;
+        ASSERT_UNUSED(oldStructure, oldStructure->indexingType() == newStructure->indexingType());
+        ASSERT(oldStructure->typeInfo().inlineTypeFlags() == newStructure->typeInfo().inlineTypeFlags());
+        ASSERT(oldStructure->typeInfo().type() == newStructure->typeInfo().type());
+
+        LValue cell = lowCell(m_node->child1()); 
+        m_out.store32(
+            weakStructure(newStructure),
+            cell, m_heaps.JSCell_structureID);
     }
     
     void compilePhantomPutStructure()
@@ -2467,7 +2473,7 @@ private:
         LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
         
         ValueFromBlock fastResult = m_out.anchor(allocateObject(
-            m_out.constIntPtr(allocator), m_out.constIntPtr(structure), m_out.intPtrZero, slowPath));
+            m_out.constIntPtr(allocator), structure, m_out.intPtrZero, slowPath));
         
         m_out.jump(continuation);
         
@@ -2644,7 +2650,7 @@ private:
             LValue butterfly = m_out.sub(endOfStorage, payloadSize);
             
             LValue object = allocateObject<JSArray>(
-                m_out.constIntPtr(structure), butterfly, failCase);
+                structure, butterfly, failCase);
             
             m_out.store32(publicLength, butterfly, m_heaps.Butterfly_publicLength);
             m_out.store32(vectorLength, butterfly, m_heaps.Butterfly_vectorLength);
@@ -2741,18 +2747,18 @@ private:
             
         case StringOrStringObjectUse: {
             LValue cell = lowCell(m_node->child1());
-            LValue structure = m_out.loadPtr(cell, m_heaps.JSCell_structure);
+            LValue structureID = m_out.load32(cell, m_heaps.JSCell_structureID);
             
             LBasicBlock notString = FTL_NEW_BLOCK(m_out, ("ToString StringOrStringObject not string case"));
             LBasicBlock continuation = FTL_NEW_BLOCK(m_out, ("ToString StringOrStringObject continuation"));
             
             ValueFromBlock simpleResult = m_out.anchor(cell);
             m_out.branch(
-                m_out.equal(structure, m_out.constIntPtr(vm().stringStructure.get())),
+                m_out.equal(structureID, m_out.constInt32(vm().stringStructure->id())),
                 unsure(continuation), unsure(notString));
             
             LBasicBlock lastNext = m_out.appendTo(notString, continuation);
-            speculateStringObjectForStructure(m_node->child1(), structure);
+            speculateStringObjectForStructureID(m_node->child1(), structureID);
             ValueFromBlock unboxedResult = m_out.anchor(
                 m_out.loadPtr(cell, m_heaps.JSWrapperObject_internalValue));
             m_out.jump(continuation);
@@ -2788,8 +2794,8 @@ private:
             LValue isStringPredicate;
             if (m_node->child1()->prediction() & SpecString) {
                 isStringPredicate = m_out.equal(
-                    m_out.loadPtr(value, m_heaps.JSCell_structure),
-                    m_out.constIntPtr(vm().stringStructure.get()));
+                    m_out.load32(value, m_heaps.JSCell_structureID),
+                    m_out.constInt32(vm().stringStructure->id()));
             } else
                 isStringPredicate = m_out.booleanFalse;
             m_out.branch(isStringPredicate, unsure(continuation), unsure(notString));
@@ -2864,7 +2870,7 @@ private:
         
         LValue result = allocateCell(
             m_out.constIntPtr(&allocator),
-            m_out.constIntPtr(vm().stringStructure.get()),
+            vm().stringStructure.get(),
             slowPath);
         
         m_out.storePtr(m_out.intPtrZero, result, m_heaps.JSString_value);
@@ -3081,11 +3087,11 @@ private:
             GetByIdVariant variant = data.variants[i];
             for (unsigned j = variant.structureSet().size(); j--;) {
                 cases.append(SwitchCase(
-                    weakPointer(variant.structureSet()[j]), blocks[i], Weight(1)));
+                    weakStructure(variant.structureSet()[j]), blocks[i], Weight(1)));
             }
         }
         m_out.switchInstruction(
-            m_out.loadPtr(base, m_heaps.JSCell_structure), cases, exit, Weight(0));
+            m_out.load32(base, m_heaps.JSCell_structureID), cases, exit, Weight(0));
         
         LBasicBlock lastNext = m_out.m_nextBlock;
         
@@ -3147,10 +3153,10 @@ private:
         for (unsigned i = data.variants.size(); i--;) {
             PutByIdVariant variant = data.variants[i];
             cases.append(
-                SwitchCase(weakPointer(variant.oldStructure()), blocks[i], Weight(1)));
+                SwitchCase(weakStructure(variant.oldStructure()), blocks[i], Weight(1)));
         }
         m_out.switchInstruction(
-            m_out.loadPtr(base, m_heaps.JSCell_structure), cases, exit, Weight(0));
+            m_out.load32(base, m_heaps.JSCell_structureID), cases, exit, Weight(0));
         
         LBasicBlock lastNext = m_out.m_nextBlock;
         
@@ -3172,8 +3178,12 @@ private:
                 
                 storage = storageForTransition(
                     base, variant.offset(), variant.oldStructure(), variant.newStructure());
-                m_out.storePtr(
-                    weakPointer(variant.newStructure()), base, m_heaps.JSCell_structure);
+
+                ASSERT(variant.oldStructure()->indexingType() == variant.newStructure()->indexingType());
+                ASSERT(variant.oldStructure()->typeInfo().inlineTypeFlags() == variant.newStructure()->typeInfo().inlineTypeFlags());
+                ASSERT(variant.oldStructure()->typeInfo().type() == variant.newStructure()->typeInfo().type());
+                m_out.store32(
+                    weakStructure(variant.newStructure()), base, m_heaps.JSCell_structureID);
             }
             
             storeProperty(value, storage, data.identifierNumber, variant.offset());
@@ -3942,14 +3952,14 @@ private:
             return;
         }
         
-        LValue structure = m_out.loadPtr(cell, m_heaps.JSCell_structure);
+        LValue structureID = m_out.load32(cell, m_heaps.JSCell_structureID);
         FTL_TYPE_CHECK(
             jsValueValue(cell), edge, filter,
-            m_out.equal(structure, m_out.constIntPtr(vm().stringStructure.get())));
+            m_out.equal(structureID, m_out.constInt32(vm().stringStructure->id())));
         speculate(
             BadType, jsValueValue(cell), edge.node(),
             m_out.testNonZero8(
-                m_out.load8(structure, m_heaps.Structure_typeInfoFlags),
+                m_out.load8(cell, m_heaps.JSCell_typeInfoFlags),
                 m_out.constInt8(MasqueradesAsUndefined)));
     }
     
@@ -3981,8 +3991,8 @@ private:
         m_out.appendTo(continuation, lastNext);
         setBoolean(m_out.phi(m_out.boolean, fastResult, slowResult));
     }
-    
-    LValue allocateCell(LValue allocator, LValue structure, LBasicBlock slowPath)
+
+    LValue allocateCell(LValue allocator, Structure* structure, LBasicBlock slowPath)
     {
         LBasicBlock success = FTL_NEW_BLOCK(m_out, ("object allocation success"));
         
@@ -3997,13 +4007,17 @@ private:
             m_out.loadPtr(result, m_heaps.JSCell_freeListNext),
             allocator, m_heaps.MarkedAllocator_freeListHead);
         
-        m_out.storePtr(structure, result, m_heaps.JSCell_structure);
+        m_out.store32(m_out.constInt32(structure->id()), result, m_heaps.JSCell_structureID);
+        m_out.store8(m_out.constInt8(structure->indexingType()), result, m_heaps.JSCell_indexingType);
+        m_out.store8(m_out.constInt8(structure->typeInfo().type()), result, m_heaps.JSCell_typeInfoType);
+        m_out.store8(m_out.constInt8(structure->typeInfo().inlineTypeFlags()), result, m_heaps.JSCell_typeInfoFlags);
+        m_out.store8(m_out.constInt8(0), result, m_heaps.JSCell_gcData);
         
         return result;
     }
-    
+
     LValue allocateObject(
-        LValue allocator, LValue structure, LValue butterfly, LBasicBlock slowPath)
+        LValue allocator, Structure* structure, LValue butterfly, LBasicBlock slowPath)
     {
         LValue result = allocateCell(allocator, structure, slowPath);
         m_out.storePtr(butterfly, result, m_heaps.JSObject_butterfly);
@@ -4011,7 +4025,7 @@ private:
     }
     
     template<typename ClassType>
-    LValue allocateObject(LValue structure, LValue butterfly, LBasicBlock slowPath)
+    LValue allocateObject(Structure* structure, LValue butterfly, LBasicBlock slowPath)
     {
         MarkedAllocator* allocator;
         size_t size = ClassType::allocationSize(0);
@@ -4080,7 +4094,7 @@ private:
             endOfStorage, m_out.constIntPtr(sizeof(JSValue) * vectorLength));
         
         LValue object = allocateObject<JSArray>(
-            m_out.constIntPtr(structure), butterfly, slowPath);
+            structure, butterfly, slowPath);
         
         m_out.store32(m_out.constInt32(numElements), butterfly, m_heaps.Butterfly_publicLength);
         m_out.store32(m_out.constInt32(vectorLength), butterfly, m_heaps.Butterfly_vectorLength);
@@ -4219,8 +4233,8 @@ private:
             FTL_TYPE_CHECK(
                 jsValueValue(value), edge, (~SpecCell) | SpecObject,
                 m_out.equal(
-                    m_out.loadPtr(value, m_heaps.JSCell_structure),
-                    m_out.constIntPtr(vm().stringStructure.get())));
+                    m_out.load32(value, m_heaps.JSCell_structureID),
+                    m_out.constInt32(vm().stringStructure->id())));
             break;
         }
         
@@ -4231,18 +4245,18 @@ private:
             LBasicBlock masqueradesCase =
                 FTL_NEW_BLOCK(m_out, ("EqualNullOrUndefined masquerades case"));
                 
-            LValue structure = m_out.loadPtr(value, m_heaps.JSCell_structure);
-            
             results.append(m_out.anchor(m_out.booleanFalse));
             
             m_out.branch(
                 m_out.testNonZero8(
-                    m_out.load8(structure, m_heaps.Structure_typeInfoFlags),
+                    m_out.load8(value, m_heaps.JSCell_typeInfoFlags),
                     m_out.constInt8(MasqueradesAsUndefined)),
                 rarely(masqueradesCase), usually(continuation));
             
             m_out.appendTo(masqueradesCase, primitiveCase);
             
+            LValue structure = loadStructure(value);
+            
             results.append(m_out.anchor(
                 m_out.equal(
                     m_out.constIntPtr(m_graph.globalObjectFor(m_node->origin.semantic)),
@@ -5003,8 +5017,8 @@ private:
     LValue isObject(LValue cell)
     {
         return m_out.notEqual(
-            m_out.loadPtr(cell, m_heaps.JSCell_structure),
-            m_out.constIntPtr(vm().stringStructure.get()));
+            m_out.load32(cell, m_heaps.JSCell_structureID),
+            m_out.constInt32(vm().stringStructure->id()));
     }
     
     LValue isNotString(LValue cell)
@@ -5015,8 +5029,8 @@ private:
     LValue isString(LValue cell)
     {
         return m_out.equal(
-            m_out.loadPtr(cell, m_heaps.JSCell_structure),
-            m_out.constIntPtr(vm().stringStructure.get()));
+            m_out.load32(cell, m_heaps.JSCell_structureID),
+            m_out.constInt32(vm().stringStructure->id()));
     }
     
     LValue isNotObject(LValue cell)
@@ -5030,9 +5044,7 @@ private:
         case Array::Int32:
         case Array::Double:
         case Array::Contiguous: {
-            LValue indexingType = m_out.load8(
-                m_out.loadPtr(cell, m_heaps.JSCell_structure),
-                m_heaps.Structure_indexingType);
+            LValue indexingType = m_out.load8(cell, m_heaps.JSCell_indexingType);
             
             switch (arrayMode.arrayClass()) {
             case Array::OriginalArray:
@@ -5060,7 +5072,9 @@ private:
         }
             
         default:
-            return hasClassInfo(cell, classInfoForType(arrayMode.typedArrayType()));
+            return m_out.equal(
+                m_out.load8(cell, m_heaps.JSCell_typeInfoType), 
+                m_out.constInt8(typeForTypedArrayType(arrayMode.typedArrayType())));
         }
     }
     
@@ -5068,7 +5082,7 @@ private:
     {
         return m_out.equal(
             m_out.loadPtr(
-                m_out.loadPtr(cell, m_heaps.JSCell_structure),
+                loadStructure(cell),
                 m_heaps.Structure_classInfo),
             m_out.constIntPtr(classInfo));
     }
@@ -5076,9 +5090,7 @@ private:
     LValue isType(LValue cell, JSType type)
     {
         return m_out.equal(
-            m_out.load8(
-                m_out.loadPtr(cell, m_heaps.JSCell_structure),
-                m_heaps.Structure_typeInfoType),
+            m_out.load8(cell, m_heaps.JSCell_typeInfoType),
             m_out.constInt8(type));
     }
     
@@ -5165,13 +5177,13 @@ private:
         LBasicBlock notString = FTL_NEW_BLOCK(m_out, ("Speculate StringOrStringObject not string case"));
         LBasicBlock continuation = FTL_NEW_BLOCK(m_out, ("Speculate StringOrStringObject continuation"));
         
-        LValue structure = m_out.loadPtr(lowCell(edge), m_heaps.JSCell_structure);
+        LValue structureID = m_out.load32(lowCell(edge), m_heaps.JSCell_structureID);
         m_out.branch(
-            m_out.equal(structure, m_out.constIntPtr(vm().stringStructure.get())),
+            m_out.equal(structureID, m_out.constInt32(vm().stringStructure->id())),
             unsure(continuation), unsure(notString));
         
         LBasicBlock lastNext = m_out.appendTo(notString, continuation);
-        speculateStringObjectForStructure(edge, structure);
+        speculateStringObjectForStructureID(edge, structureID);
         m_out.jump(continuation);
         
         m_out.appendTo(continuation, lastNext);
@@ -5181,10 +5193,10 @@ private:
     
     void speculateStringObjectForCell(Edge edge, LValue cell)
     {
-        speculateStringObjectForStructure(edge, m_out.loadPtr(cell, m_heaps.JSCell_structure));
+        speculateStringObjectForStructureID(edge, m_out.load32(cell, m_heaps.JSCell_structureID));
     }
     
-    void speculateStringObjectForStructure(Edge edge, LValue structure)
+    void speculateStringObjectForStructureID(Edge edge, LValue structureID)
     {
         Structure* stringObjectStructure =
             m_graph.globalObjectFor(m_node->origin.semantic)->stringObjectStructure();
@@ -5194,22 +5206,23 @@ private:
         
         speculate(
             NotStringObject, noValue(), 0,
-            m_out.notEqual(structure, weakPointer(stringObjectStructure)));
+            m_out.notEqual(structureID, weakStructure(stringObjectStructure)));
     }
     
     void speculateNonNullObject(Edge edge, LValue cell)
     {
-        LValue structure = m_out.loadPtr(cell, m_heaps.JSCell_structure);
         FTL_TYPE_CHECK(
             jsValueValue(cell), edge, SpecObject, 
-            m_out.equal(structure, m_out.constIntPtr(vm().stringStructure.get())));
+            m_out.equal(
+                m_out.load32(cell, m_heaps.JSCell_structureID),
+                m_out.constInt32(vm().stringStructure->id())));
         if (masqueradesAsUndefinedWatchpointIsStillValid())
             return;
         
         speculate(
             BadType, jsValueValue(cell), edge.node(),
             m_out.testNonZero8(
-                m_out.load8(structure, m_heaps.Structure_typeInfoFlags),
+                m_out.load8(cell, m_heaps.JSCell_typeInfoFlags),
                 m_out.constInt8(MasqueradesAsUndefined)));
     }
     
@@ -5264,10 +5277,7 @@ private:
     
     LValue loadMarkByte(LValue base)
     {
-        LValue markedBlock = m_out.bitAnd(base, m_out.constInt64(MarkedBlock::blockMask));
-        LValue baseOffset = m_out.bitAnd(base, m_out.constInt64(~MarkedBlock::blockMask));
-        LValue markByteIndex = m_out.lShr(baseOffset, m_out.constInt64(MarkedBlock::atomShiftAmount + MarkedBlock::markByteShiftAmount));
-        return m_out.load8(m_out.baseIndex(m_heaps.MarkedBlock_markBits, markedBlock, markByteIndex, ScaleOne, MarkedBlock::offsetOfMarks()));
+        return m_out.load8(base, m_heaps.JSCell_gcData);
     }
 
     void emitStoreBarrier(LValue base, LValue value, Edge valueEdge)
@@ -5732,12 +5742,25 @@ private:
     {
         m_graph.m_plan.weakReferences.addLazily(target);
     }
-    
+
+    LValue loadStructure(LValue value)
+    {
+        LValue tableIndex = m_out.load32(value, m_heaps.JSCell_structureID);
+        LValue tableBase = m_out.get(m_out.constIntPtr(vm().heap.structureIDTable().base()));
+        return m_out.get(m_out.baseIndex(tableBase, tableIndex, ScaleEight));
+    }
+
     LValue weakPointer(JSCell* pointer)
     {
         addWeakReference(pointer);
         return m_out.constIntPtr(pointer);
     }
+
+    LValue weakStructure(Structure* structure)
+    {
+        addWeakReference(structure);
+        return m_out.constInt32(structure->id());
+    }
     
     TypedPointer addressFor(LValue base, int operand, ptrdiff_t offset = 0)
     {
index 86a2dcb..b9e3247 100644

@@ -106,9 +106,9 @@ static void compileStub(
         if (exit.m_kind == BadCache || exit.m_kind == BadIndexingType) {
             CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
             if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex)) {
-                jit.loadPtr(MacroAssembler::Address(GPRInfo::regT0, JSCell::structureOffset()), GPRInfo::regT1);
-                jit.storePtr(GPRInfo::regT1, arrayProfile->addressOfLastSeenStructure());
-                jit.load8(MacroAssembler::Address(GPRInfo::regT1, Structure::indexingTypeOffset()), GPRInfo::regT1);
+                jit.load32(MacroAssembler::Address(GPRInfo::regT0, JSCell::structureIDOffset()), GPRInfo::regT1);
+                jit.store32(GPRInfo::regT1, arrayProfile->addressOfLastSeenStructureID());
+                jit.load8(MacroAssembler::Address(GPRInfo::regT0, JSCell::indexingTypeOffset()), GPRInfo::regT1);
                 jit.move(MacroAssembler::TrustedImm32(1), GPRInfo::regT2);
                 jit.lshift32(GPRInfo::regT1, GPRInfo::regT2);
                 jit.or32(GPRInfo::regT2, MacroAssembler::AbsoluteAddress(arrayProfile->addressOfArrayModes()));
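
The OSR exit profiling path now records a 32-bit StructureID (via addressOfLastSeenStructureID()) and reads the indexing type straight off the cell. The C++-level effect, as a small model (only the two accessors named above come from the patch; the field names here are illustrative):

    #include <cstdint>

    struct ArrayProfileModel {
        uint32_t lastSeenStructureID { 0 };   // previously a full Structure*
        uint32_t arrayModes { 0 };
    };

    inline void recordArrayUse(ArrayProfileModel& profile, uint32_t structureID, uint8_t indexingType)
    {
        profile.lastSeenStructureID = structureID;
        profile.arrayModes |= 1u << indexingType;   // the move/lshift32/or32 sequence above
    }
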
index 2d5d68b..c4a4002 100644
@@ -267,6 +267,7 @@ public:
     LValue load64(LValue base, const AbstractField& field) { return load64(address(base, field)); }
     LValue loadPtr(LValue base, const AbstractField& field) { return loadPtr(address(base, field)); }
     LValue loadDouble(LValue base, const AbstractField& field) { return loadDouble(address(base, field)); }
+    void store8(LValue value, LValue base, const AbstractField& field) { store8(value, address(base, field)); }
     void store32(LValue value, LValue base, const AbstractField& field) { store32(value, address(base, field)); }
     void store64(LValue value, LValue base, const AbstractField& field) { store64(value, address(base, field)); }
     void storePtr(LValue value, LValue base, const AbstractField& field) { storePtr(value, address(base, field)); }
index b0676bf..c163a19 100644
@@ -32,7 +32,7 @@
 #if ENABLE(GC_VALIDATION)
 #define ASSERT_GC_OBJECT_LOOKS_VALID(cell) do { \
     RELEASE_ASSERT(cell);\
-    RELEASE_ASSERT(cell->unvalidatedStructure()->unvalidatedStructure() == cell->unvalidatedStructure()->unvalidatedStructure()->unvalidatedStructure()); \
+    RELEASE_ASSERT(cell->structure()->structure() == cell->structure()->structure()->structure()); \
 } while (0)
 
 #define ASSERT_GC_OBJECT_INHERITS(object, classInfo) do {\
index ea4378e..0bc4f94 100644
@@ -437,6 +437,7 @@ void Heap::getConservativeRegisterRoots(HashSet<JSCell*>& roots)
     JSCell** registerRoots = stackRoots.roots();
     for (size_t i = 0; i < stackRootCount; i++) {
         setMarked(registerRoots[i]);
+        registerRoots[i]->mark();
         roots.add(registerRoots[i]);
     }
 }
@@ -857,6 +858,7 @@ void Heap::collect()
 
     {
         GCPHASE(StopAllocation);
+        m_structureIDTable.flushOldTables();
         m_objectSpace.stopAllocating();
         if (m_operationInProgress == FullCollection)
             m_storageSpace.didStartFullCollection();
@@ -1110,8 +1112,11 @@ void Heap::writeBarrier(const JSCell* from)
 {
 #if ENABLE(GGC)
     ASSERT_GC_OBJECT_LOOKS_VALID(const_cast<JSCell*>(from));
-    if (!from || !isMarked(from))
+    if (!from || !from->isMarked()) {
+        ASSERT(!from || !isMarked(from));
         return;
+    }
+    ASSERT(isMarked(from));
     addToRememberedSet(from);
 #else
     UNUSED_PARAM(from);
index 3070839..4d49d80 100644
@@ -38,6 +38,7 @@
 #include "MarkedSpace.h"
 #include "Options.h"
 #include "SlotVisitor.h"
+#include "StructureIDTable.h"
 #include "WeakHandleOwner.h"
 #include "WriteBarrierBuffer.h"
 #include "WriteBarrierSupport.h"
@@ -201,6 +202,9 @@ namespace JSC {
         
         bool isDeferred() const { return !!m_deferralDepth || Options::disableGC(); }
 
+        BlockAllocator& blockAllocator();
+        StructureIDTable& structureIDTable() { return m_structureIDTable; }
+
 #if USE(CF)
         template<typename T> void releaseSoon(RetainPtr<T>&&);
 #endif
@@ -259,7 +263,6 @@ namespace JSC {
         size_t sizeAfterCollect();
 
         JSStack& stack();
-        BlockAllocator& blockAllocator();
         
         JS_EXPORT_PRIVATE void incrementDeferralDepth();
         void decrementDeferralDepth();
@@ -280,6 +283,7 @@ namespace JSC {
         
         HeapOperation m_operationInProgress;
         BlockAllocator m_blockAllocator;
+        StructureIDTable m_structureIDTable;
         MarkedSpace m_objectSpace;
         CopiedSpace m_storageSpace;
         GCIncomingRefCountedSet<ArrayBuffer> m_arrayBuffers;
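
The Heap now owns the per-VM StructureIDTable that cell headers index into. A rough, self-contained model of such a table (names other than base() and flushOldTables() are illustrative; keeping retired buffers alive until a GC safe point is an assumption about why flushOldTables() is called from Heap::collect() above):

    #include <cstdint>
    #include <memory>
    #include <vector>

    struct Structure;

    class StructureIDTableModel {
    public:
        uint32_t allocateID(Structure* structure)
        {
            if (m_table->size() == m_table->capacity()) {
                // Growing reallocates the array; park the old buffer instead of
                // freeing it so code still holding base() stays valid until GC.
                auto bigger = std::make_unique<std::vector<Structure*>>(*m_table);
                bigger->reserve(m_table->capacity() ? m_table->capacity() * 2 : 512);
                m_oldTables.push_back(std::move(m_table));
                m_table = std::move(bigger);
            }
            m_table->push_back(structure);
            return static_cast<uint32_t>(m_table->size() - 1);
        }

        Structure* get(uint32_t id) const { return (*m_table)[id]; }
        Structure* const* base() const { return m_table->data(); }

        // Called at a safe point during collection (see Heap::collect() above).
        void flushOldTables() { m_oldTables.clear(); }

    private:
        std::unique_ptr<std::vector<Structure*>> m_table { std::make_unique<std::vector<Structure*>>() };
        std::vector<std::unique_ptr<std::vector<Structure*>>> m_oldTables;
    };
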
@@ -393,18 +397,6 @@ namespace JSC {
 #endif
     }
 
-    inline void Heap::writeBarrier(const JSCell* from, JSCell* to)
-    {
-#if ENABLE(WRITE_BARRIER_PROFILING)
-        WriteBarrierCounters::countWriteBarrier();
-#endif
-        if (!from || !isMarked(from))
-            return;
-        if (!to || isMarked(to))
-            return;
-        addToRememberedSet(from);
-    }
-
     inline void Heap::writeBarrier(const JSCell* from, JSValue to)
     {
 #if ENABLE(WRITE_BARRIER_PROFILING)
index e853d66..752d846 100644
@@ -244,20 +244,22 @@ inline void* MarkedSpace::allocateWithNormalDestructor(size_t bytes)
 
 template <typename Functor> inline typename Functor::ReturnType MarkedSpace::forEachBlock(Functor& functor)
 {
-    for (size_t i = 0; i < preciseCount; ++i) {
+    for (size_t i = 0; i < preciseCount; ++i)
         m_normalSpace.preciseAllocators[i].forEachBlock(functor);
-        m_normalDestructorSpace.preciseAllocators[i].forEachBlock(functor);
-        m_immortalStructureDestructorSpace.preciseAllocators[i].forEachBlock(functor);
-    }
-
-    for (size_t i = 0; i < impreciseCount; ++i) {
+    for (size_t i = 0; i < impreciseCount; ++i)
         m_normalSpace.impreciseAllocators[i].forEachBlock(functor);
-        m_normalDestructorSpace.impreciseAllocators[i].forEachBlock(functor);
-        m_immortalStructureDestructorSpace.impreciseAllocators[i].forEachBlock(functor);
-    }
-
     m_normalSpace.largeAllocator.forEachBlock(functor);
+
+    for (size_t i = 0; i < preciseCount; ++i)
+        m_normalDestructorSpace.preciseAllocators[i].forEachBlock(functor);
+    for (size_t i = 0; i < impreciseCount; ++i)
+        m_normalDestructorSpace.impreciseAllocators[i].forEachBlock(functor);
     m_normalDestructorSpace.largeAllocator.forEachBlock(functor);
+
+    for (size_t i = 0; i < preciseCount; ++i)
+        m_immortalStructureDestructorSpace.preciseAllocators[i].forEachBlock(functor);
+    for (size_t i = 0; i < impreciseCount; ++i)
+        m_immortalStructureDestructorSpace.impreciseAllocators[i].forEachBlock(functor);
     m_immortalStructureDestructorSpace.largeAllocator.forEachBlock(functor);
 
     return functor.returnValue();
index 42cc9e2..6378961 100644
@@ -54,6 +54,8 @@ public:
 
     MarkStackArray& markStack() { return m_stack; }
 
+    VM& vm();
+    const VM& vm() const;
     Heap* heap() const;
 
     void append(ConservativeRoots&);
index 5408bb5..15e39b1 100644
@@ -108,9 +108,12 @@ ALWAYS_INLINE void SlotVisitor::internalAppend(void* from, JSCell* cell)
 #if ENABLE(GC_VALIDATION)
     validate(cell);
 #endif
-    if (Heap::testAndSetMarked(cell) || !cell->structure())
+    if (Heap::testAndSetMarked(cell) || !cell->structure()) {
+        ASSERT(cell->structure());
         return;
+    }
 
+    cell->mark();
     m_bytesVisited += MarkedBlock::blockFor(cell)->cellSize();
         
     MARK_LOG_CHILD(*this, cell);
@@ -279,6 +282,16 @@ inline Heap* SlotVisitor::heap() const
     return &sharedData().m_vm->heap;
 }
 
+inline VM& SlotVisitor::vm()
+{
+    return *sharedData().m_vm;
+}
+
+inline const VM& SlotVisitor::vm() const
+{
+    return *sharedData().m_vm;
+}
+
 } // namespace JSC
 
 #endif // SlotVisitorInlines_h
index fb997dd..04beda7 100644
@@ -286,9 +286,9 @@ public:
         return payloadFor(static_cast<VirtualRegister>(operand));
     }
 
-    Jump branchIfNotObject(GPRReg structureReg)
+    Jump branchIfCellNotObject(GPRReg cellReg)
     {
-        return branch8(Below, Address(structureReg, Structure::typeInfoTypeOffset()), TrustedImm32(ObjectType));
+        return branch8(Below, Address(cellReg, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType));
     }
 
     static GPRReg selectScratchGPR(GPRReg preserve1 = InvalidGPRReg, GPRReg preserve2 = InvalidGPRReg, GPRReg preserve3 = InvalidGPRReg, GPRReg preserve4 = InvalidGPRReg)
@@ -387,21 +387,9 @@ public:
     void jitAssertArgumentCountSane() { }
 #endif
 
-    Jump genericWriteBarrier(GPRReg owner, GPRReg scratch1, GPRReg scratch2)
+    Jump genericWriteBarrier(GPRReg owner)
     {
-        move(owner, scratch1);
-        move(owner, scratch2);
-    
-        andPtr(TrustedImmPtr(MarkedBlock::blockMask), scratch1);
-        andPtr(TrustedImmPtr(~MarkedBlock::blockMask), scratch2);
-    
-#if USE(JSVALUE64)
-        rshift64(TrustedImm32(MarkedBlock::atomShiftAmount + MarkedBlock::markByteShiftAmount), scratch2);
-#else
-        rshift32(TrustedImm32(MarkedBlock::atomShiftAmount + MarkedBlock::markByteShiftAmount), scratch2);
-#endif
-    
-        return branchTest8(Zero, BaseIndex(scratch1, scratch2, TimesOne, MarkedBlock::offsetOfMarks()));
+        return branchTest8(Zero, Address(owner, JSCell::gcDataOffset()));
     }
 
     // These methods convert between doubles, and doubles boxed and JSValues.
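
genericWriteBarrier() above no longer needs scratch registers: instead of recovering the MarkedBlock and indexing its mark bitmap, it tests the gcData byte in the cell itself, mirroring the C++ check in Heap::writeBarrier() earlier in this patch. As a conceptual model (assuming gcData becomes non-zero once the cell has been marked):

    #include <cstdint>

    struct CellModel { uint8_t gcData { 0 }; };   // stand-in for the byte at JSCell::gcDataOffset()

    inline bool definitelyNotMarked(const CellModel& owner)
    {
        // Matches branchTest8(Zero, Address(owner, JSCell::gcDataOffset())): an
        // unmarked owner cannot yet require a remembered-set entry.
        return owner.gcData == 0;
    }
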
@@ -559,6 +547,79 @@ public:
         return offsetOfArgumentsIncludingThis(codeOrigin.inlineCallFrame);
     }
 
+    void emitLoadStructure(RegisterID source, RegisterID dest, RegisterID scratch)
+    {
+#if USE(JSVALUE64)
+        load32(MacroAssembler::Address(source, JSCell::structureIDOffset()), dest);
+        loadPtr(vm()->heap.structureIDTable().base(), scratch);
+        loadPtr(MacroAssembler::BaseIndex(scratch, dest, MacroAssembler::TimesEight), dest);
+#else
+        UNUSED_PARAM(scratch);
+        loadPtr(MacroAssembler::Address(source, JSCell::structureIDOffset()), dest);
+#endif
+    }
+
+    static void emitLoadStructure(AssemblyHelpers& jit, RegisterID base, RegisterID dest, RegisterID scratch)
+    {
+#if USE(JSVALUE64)
+        jit.load32(MacroAssembler::Address(base, JSCell::structureIDOffset()), dest);
+        jit.loadPtr(jit.vm()->heap.structureIDTable().base(), scratch);
+        jit.loadPtr(MacroAssembler::BaseIndex(scratch, dest, MacroAssembler::TimesEight), dest);
+#else
+        UNUSED_PARAM(scratch);
+        jit.loadPtr(MacroAssembler::Address(base, JSCell::structureIDOffset()), dest);
+#endif
+    }
+
+    void emitStoreStructureWithTypeInfo(TrustedImmPtr structure, RegisterID dest, RegisterID)
+    {
+        emitStoreStructureWithTypeInfo(*this, structure, dest);
+    }
+
+    void emitStoreStructureWithTypeInfo(RegisterID structure, RegisterID dest, RegisterID scratch)
+    {
+#if USE(JSVALUE64)
+        load64(MacroAssembler::Address(structure, Structure::structureIDOffset()), scratch);
+        store64(scratch, MacroAssembler::Address(dest, JSCell::structureIDOffset()));
+#else
+        // Store all the info flags using a single 32-bit wide load and store.
+        load32(MacroAssembler::Address(structure, Structure::indexingTypeOffset()), scratch);
+        store32(scratch, MacroAssembler::Address(dest, JSCell::indexingTypeOffset()));
+
+        // Store the StructureID
+        storePtr(structure, MacroAssembler::Address(dest, JSCell::structureIDOffset()));
+#endif
+    }
+
+    static void emitStoreStructureWithTypeInfo(AssemblyHelpers& jit, TrustedImmPtr structure, RegisterID dest)
+    {
+        const Structure* structurePtr = static_cast<const Structure*>(structure.m_value);
+#if USE(JSVALUE64)
+        jit.store64(TrustedImm64(structurePtr->idBlob()), MacroAssembler::Address(dest, JSCell::structureIDOffset()));
+#ifndef NDEBUG
+        Jump correctStructure = jit.branch32(Equal, MacroAssembler::Address(dest, JSCell::structureIDOffset()), TrustedImm32(structurePtr->id()));
+        jit.breakpoint();
+        correctStructure.link(&jit);
+
+        Jump correctIndexingType = jit.branch8(Equal, MacroAssembler::Address(dest, JSCell::indexingTypeOffset()), TrustedImm32(structurePtr->indexingType()));
+        jit.breakpoint();
+        correctIndexingType.link(&jit);
+
+        Jump correctType = jit.branch8(Equal, MacroAssembler::Address(dest, JSCell::typeInfoTypeOffset()), TrustedImm32(structurePtr->typeInfo().type()));
+        jit.breakpoint();
+        correctType.link(&jit);
+
+        Jump correctFlags = jit.branch8(Equal, MacroAssembler::Address(dest, JSCell::typeInfoFlagsOffset()), TrustedImm32(structurePtr->typeInfo().inlineTypeFlags()));
+        jit.breakpoint();
+        correctFlags.link(&jit);
+#endif
+#else
+        // Do a 32-bit wide store to initialize the cell's fields.
+        jit.store32(TrustedImm32(structurePtr->objectInitializationBlob()), MacroAssembler::Address(dest, JSCell::indexingTypeOffset()));
+        jit.storePtr(structure, MacroAssembler::Address(dest, JSCell::structureIDOffset()));
+#endif
+    }
+
     void writeBarrier(GPRReg owner, GPRReg scratch1, GPRReg scratch2, WriteBarrierUseKind useKind)
     {
         UNUSED_PARAM(owner);
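
The 64-bit path above initializes the whole header with one store64 of Structure::idBlob(). How that blob is plausibly packed (the exact bit positions are an assumption, chosen to match the little-endian byte offsets that the !NDEBUG checks verify):

    #include <cstdint>

    inline uint64_t makeIDBlob(uint32_t structureID, uint8_t indexingType, uint8_t type, uint8_t inlineTypeFlags)
    {
        return static_cast<uint64_t>(structureID)
            | (static_cast<uint64_t>(indexingType) << 32)
            | (static_cast<uint64_t>(type) << 40)
            | (static_cast<uint64_t>(inlineTypeFlags) << 48);
        // The top byte (gcData) stays 0, which is what a freshly allocated cell wants.
    }
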
index c4ad2c2..3833616 100644
@@ -305,7 +305,7 @@ namespace JSC {
         
         void emitLoadDouble(int index, FPRegisterID value);
         void emitLoadInt32ToDouble(int index, FPRegisterID value);
-        Jump emitJumpIfNotObject(RegisterID structureReg);
+        Jump emitJumpIfCellNotObject(RegisterID cellReg);
 
         Jump addStructureTransitionCheck(JSCell*, Structure*, StructureStubInfo*, RegisterID scratch);
         void addStructureTransitionCheck(JSCell*, Structure*, StructureStubInfo*, JumpList& failureCases, RegisterID scratch);
@@ -314,7 +314,7 @@ namespace JSC {
         enum WriteBarrierMode { UnconditionalWriteBarrier, ShouldFilterValue, ShouldFilterBaseAndValue };
         // value register in write barrier is used before any scratch registers
         // so may safely be the same as either of the scratch registers.
-        Jump checkMarkWord(RegisterID owner, RegisterID scratch1, RegisterID scratch2);
+        Jump checkMarkWord(RegisterID owner);
         Jump checkMarkWord(JSCell* owner);
         void emitWriteBarrier(unsigned owner, unsigned value, WriteBarrierMode);
         void emitWriteBarrier(JSCell* owner, unsigned value, WriteBarrierMode);
@@ -328,8 +328,8 @@ namespace JSC {
         void emitValueProfilingSite(ValueProfile*);
         void emitValueProfilingSite(unsigned bytecodeOffset);
         void emitValueProfilingSite();
-        void emitArrayProfilingSite(RegisterID structureAndIndexingType, RegisterID scratch, ArrayProfile*);
-        void emitArrayProfilingSiteForBytecodeIndex(RegisterID structureAndIndexingType, RegisterID scratch, unsigned bytecodeIndex);
+        void emitArrayProfilingSiteWithCell(RegisterID cell, RegisterID indexingType, ArrayProfile*);
+        void emitArrayProfilingSiteForBytecodeIndexWithCell(RegisterID cell, RegisterID indexingType, unsigned bytecodeIndex);
         void emitArrayProfileStoreToHoleSpecialCase(ArrayProfile*);
         void emitArrayProfileOutOfBoundsSpecialCase(ArrayProfile*);
         
@@ -369,6 +369,8 @@ namespace JSC {
         
         enum FinalObjectMode { MayBeFinal, KnownNotFinal };
 
+        template <typename T> Jump branchStructure(RelationalCondition, T leftHandSide, Structure*);
+
 #if USE(JSVALUE32_64)
         bool getOperandConstantImmediateInt(int op1, int op2, int& op, int32_t& constant);
 
index aecb8ed..b39a10d 100644
@@ -188,8 +188,8 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca
         if (opcodeID == op_call && shouldEmitProfiling()) {
             emitGetVirtualRegister(registerOffset + CallFrame::argumentOffsetIncludingThis(0), regT0);
             Jump done = emitJumpIfNotJSCell(regT0);
-            loadPtr(Address(regT0, JSCell::structureOffset()), regT0);
-            storePtr(regT0, instruction[OPCODE_LENGTH(op_call) - 2].u.arrayProfile->addressOfLastSeenStructure());
+            load32(Address(regT0, JSCell::structureIDOffset()), regT0);
+            store32(regT0, instruction[OPCODE_LENGTH(op_call) - 2].u.arrayProfile->addressOfLastSeenStructureID());
             done.link(this);
         }
     
@@ -260,7 +260,7 @@ void JIT::privateCompileClosureCall(CallLinkInfo* callLinkInfo, CodeBlock* calle
     JumpList slowCases;
 
     slowCases.append(branchTestPtr(NonZero, regT0, tagMaskRegister));
-    slowCases.append(branchPtr(NotEqual, Address(regT0, JSCell::structureOffset()), TrustedImmPtr(expectedStructure)));
+    slowCases.append(branchStructure(NotEqual, Address(regT0, JSCell::structureIDOffset()), expectedStructure));
     slowCases.append(branchPtr(NotEqual, Address(regT0, JSFunction::offsetOfExecutable()), TrustedImmPtr(expectedExecutable)));
     
     loadPtr(Address(regT0, JSFunction::offsetOfScopeChain()), regT1);
index 7f97f5b..3799cbe 100644
@@ -70,8 +70,7 @@ void JIT::emit_op_ret_object_or_this(Instruction* currentInstruction)
 
     emitLoad(result, regT1, regT0);
     Jump notJSCell = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag));
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-    Jump notObject = emitJumpIfNotObject(regT2);
+    Jump notObject = emitJumpIfCellNotObject(regT0);
 
     checkStackPointerAlignment();
     emitFunctionEpilogue();
@@ -266,8 +265,8 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca
         if (opcodeID == op_call && shouldEmitProfiling()) {
             emitLoad(registerOffset + CallFrame::argumentOffsetIncludingThis(0), regT0, regT1);
             Jump done = branch32(NotEqual, regT0, TrustedImm32(JSValue::CellTag));
-            loadPtr(Address(regT1, JSCell::structureOffset()), regT1);
-            storePtr(regT1, instruction[OPCODE_LENGTH(op_call) - 2].u.arrayProfile->addressOfLastSeenStructure());
+            loadPtr(Address(regT1, JSCell::structureIDOffset()), regT1);
+            storePtr(regT1, instruction[OPCODE_LENGTH(op_call) - 2].u.arrayProfile->addressOfLastSeenStructureID());
             done.link(this);
         }
     
@@ -342,7 +341,7 @@ void JIT::privateCompileClosureCall(CallLinkInfo* callLinkInfo, CodeBlock* calle
     JumpList slowCases;
 
     slowCases.append(branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag)));
-    slowCases.append(branchPtr(NotEqual, Address(regT0, JSCell::structureOffset()), TrustedImmPtr(expectedStructure)));
+    slowCases.append(branchPtr(NotEqual, Address(regT0, JSCell::structureIDOffset()), TrustedImmPtr(expectedStructure)));
     slowCases.append(branchPtr(NotEqual, Address(regT0, JSFunction::offsetOfExecutable()), TrustedImmPtr(expectedExecutable)));
     
     loadPtr(Address(regT0, JSFunction::offsetOfScopeChain()), regT1);
index 4ed01d6..b2f2127 100644
@@ -101,10 +101,10 @@ void JITByIdGenerator::finalize(LinkBuffer& linkBuffer)
 
 void JITByIdGenerator::generateFastPathChecks(MacroAssembler& jit, GPRReg butterfly)
 {
-    m_structureCheck = jit.patchableBranchPtrWithPatch(
+    m_structureCheck = jit.patchableBranch32WithPatch(
         MacroAssembler::NotEqual,
-        MacroAssembler::Address(m_base.payloadGPR(), JSCell::structureOffset()),
-        m_structureImm, MacroAssembler::TrustedImmPtr(reinterpret_cast<void*>(unusedPointer)));
+        MacroAssembler::Address(m_base.payloadGPR(), JSCell::structureIDOffset()),
+        m_structureImm, MacroAssembler::TrustedImm32(0));
     
     m_propertyStorageLoad = jit.convertibleLoadPtr(
         MacroAssembler::Address(m_base.payloadGPR(), JSObject::butterflyOffset()), butterfly);
index 6db9836..0e446e9 100644
@@ -78,7 +78,7 @@ protected:
     JSValueRegs m_base;
     JSValueRegs m_value;
     
-    MacroAssembler::DataLabelPtr m_structureImm;
+    MacroAssembler::DataLabel32 m_structureImm;
     MacroAssembler::PatchableJump m_structureCheck;
     MacroAssembler::ConvertibleLoadLabel m_propertyStorageLoad;
     AssemblerLabel m_loadOrStore;
index 248c8ea..1e76d25 100644
@@ -72,7 +72,7 @@ ALWAYS_INLINE void JIT::emitGetFromCallFrameHeader64(JSStack::CallFrameHeaderEnt
 
 ALWAYS_INLINE void JIT::emitLoadCharacterString(RegisterID src, RegisterID dst, JumpList& failures)
 {
-    failures.append(branchPtr(NotEqual, Address(src, JSCell::structureOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
+    failures.append(branchStructure(NotEqual, Address(src, JSCell::structureIDOffset()), m_vm->stringStructure.get()));
     failures.append(branch32(NotEqual, MacroAssembler::Address(src, ThunkHelpers::jsStringLengthOffset()), TrustedImm32(1)));
     loadPtr(MacroAssembler::Address(src, ThunkHelpers::jsStringValueOffset()), dst);
     failures.append(branchTest32(Zero, dst));
@@ -557,7 +557,7 @@ ALWAYS_INLINE MacroAssembler::Call JIT::callOperation(V_JITOperation_EJZJ operat
 
 ALWAYS_INLINE JIT::Jump JIT::checkStructure(RegisterID reg, Structure* structure)
 {
-    return branchPtr(NotEqual, Address(reg, JSCell::structureOffset()), TrustedImmPtr(structure));
+    return branchStructure(NotEqual, Address(reg, JSCell::structureIDOffset()), structure);
 }
 
 ALWAYS_INLINE void JIT::linkSlowCaseIfNotJSCell(Vector<SlowCaseEntry>::iterator& iter, int vReg)
@@ -605,9 +605,9 @@ ALWAYS_INLINE void JIT::emitJumpSlowToHot(Jump jump, int relativeOffset)
     jump.linkTo(m_labels[m_bytecodeOffset + relativeOffset], this);
 }
 
-ALWAYS_INLINE JIT::Jump JIT::emitJumpIfNotObject(RegisterID structureReg)
+ALWAYS_INLINE JIT::Jump JIT::emitJumpIfCellNotObject(RegisterID cellReg)
 {
-    return branch8(Below, Address(structureReg, Structure::typeInfoTypeOffset()), TrustedImm32(ObjectType));
+    return branch8(Below, Address(cellReg, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType));
 }
 
 #if ENABLE(SAMPLING_FLAGS)
@@ -678,11 +678,11 @@ inline void JIT::emitAllocateJSObject(RegisterID allocator, StructureType struct
     loadPtr(Address(result), scratch);
     storePtr(scratch, Address(allocator, MarkedAllocator::offsetOfFreeListHead()));
 
-    // initialize the object's structure
-    storePtr(structure, Address(result, JSCell::structureOffset()));
-
     // initialize the object's property storage pointer
     storePtr(TrustedImmPtr(0), Address(result, JSObject::butterflyOffset()));
+
+    // initialize the object's structure
+    emitStoreStructureWithTypeInfo(structure, result, scratch);
 }
 
 inline void JIT::emitValueProfilingSite(ValueProfile* valueProfile)
@@ -718,22 +718,19 @@ inline void JIT::emitValueProfilingSite()
     emitValueProfilingSite(m_bytecodeOffset);
 }
 
-inline void JIT::emitArrayProfilingSite(RegisterID structureAndIndexingType, RegisterID scratch, ArrayProfile* arrayProfile)
+inline void JIT::emitArrayProfilingSiteWithCell(RegisterID cell, RegisterID indexingType, ArrayProfile* arrayProfile)
 {
-    UNUSED_PARAM(scratch); // We had found this scratch register useful here before, so I will keep it for now.
-    
-    RegisterID structure = structureAndIndexingType;
-    RegisterID indexingType = structureAndIndexingType;
-    
-    if (shouldEmitProfiling())
-        storePtr(structure, arrayProfile->addressOfLastSeenStructure());
+    if (shouldEmitProfiling()) {
+        load32(MacroAssembler::Address(cell, JSCell::structureIDOffset()), indexingType);
+        store32(indexingType, arrayProfile->addressOfLastSeenStructureID());
+    }
 
-    load8(Address(structure, Structure::indexingTypeOffset()), indexingType);
+    load8(Address(cell, JSCell::indexingTypeOffset()), indexingType);
 }
 
-inline void JIT::emitArrayProfilingSiteForBytecodeIndex(RegisterID structureAndIndexingType, RegisterID scratch, unsigned bytecodeIndex)
+inline void JIT::emitArrayProfilingSiteForBytecodeIndexWithCell(RegisterID cell, RegisterID indexingType, unsigned bytecodeIndex)
 {
-    emitArrayProfilingSite(structureAndIndexingType, scratch, m_codeBlock->getOrAddArrayProfile(bytecodeIndex));
+    emitArrayProfilingSiteWithCell(cell, indexingType, m_codeBlock->getOrAddArrayProfile(bytecodeIndex));
 }
 
 inline void JIT::emitArrayProfileStoreToHoleSpecialCase(ArrayProfile* arrayProfile)
@@ -1082,6 +1079,26 @@ ALWAYS_INLINE void JIT::emitTagAsBoolImmediate(RegisterID reg)
 
 #endif // USE(JSVALUE32_64)
 
+template <typename T>
+JIT::Jump JIT::branchStructure(RelationalCondition condition, T leftHandSide, Structure* structure)
+{
+#if USE(JSVALUE64)
+    return branch32(condition, leftHandSide, TrustedImm32(structure->id()));
+#else
+    return branchPtr(condition, leftHandSide, TrustedImmPtr(structure));
+#endif
+}
+
+template <typename T>
+MacroAssembler::Jump branchStructure(MacroAssembler& jit, MacroAssembler::RelationalCondition condition, T leftHandSide, Structure* structure)
+{
+#if USE(JSVALUE64)
+    return jit.branch32(condition, leftHandSide, MacroAssembler::TrustedImm32(structure->id()));
+#else
+    return jit.branchPtr(condition, leftHandSide, MacroAssembler::TrustedImmPtr(structure));
+#endif
+}
+
 } // namespace JSC
 
 #endif // ENABLE(JIT)
index 1963c3b..58437cb 100644
@@ -118,8 +118,7 @@ void JIT::emit_op_check_has_instance(Instruction* currentInstruction)
     emitJumpSlowCaseIfNotJSCell(regT0, baseVal);
 
     // Check that baseVal 'ImplementsHasInstance'.
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT0);
-    addSlowCase(branchTest8(Zero, Address(regT0, Structure::typeInfoFlagsOffset()), TrustedImm32(ImplementsDefaultHasInstance)));
+    addSlowCase(branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(ImplementsDefaultHasInstance)));
 }
 
 void JIT::emit_op_instanceof(Instruction* currentInstruction)
@@ -138,8 +137,7 @@ void JIT::emit_op_instanceof(Instruction* currentInstruction)
     emitJumpSlowCaseIfNotJSCell(regT1, proto);
 
     // Check that prototype is an object
-    loadPtr(Address(regT1, JSCell::structureOffset()), regT3);
-    addSlowCase(emitJumpIfNotObject(regT3));
+    addSlowCase(emitJumpIfCellNotObject(regT1));
     
     // Optimistically load the result true, and start looping.
     // Initially, regT1 still contains proto and regT2 still contains value.
@@ -149,7 +147,7 @@ void JIT::emit_op_instanceof(Instruction* currentInstruction)
 
     // Load the prototype of the object in regT2.  If this is equal to regT1 - WIN!
     // Otherwise, check if we've hit null - if we have then drop out of the loop, if not go again.
-    loadPtr(Address(regT2, JSCell::structureOffset()), regT2);
+    emitLoadStructure(regT2, regT2, regT3);
     load64(Address(regT2, Structure::prototypeOffset()), regT2);
     Jump isInstance = branchPtr(Equal, regT2, regT1);
     emitJumpIfJSCell(regT2).linkTo(loop, this);
@@ -174,12 +172,12 @@ void JIT::emit_op_is_undefined(Instruction* currentInstruction)
     Jump done = jump();
     
     isCell.link(this);
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT1);
-    Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT1, Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+    Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
     move(TrustedImm32(0), regT0);
     Jump notMasqueradesAsUndefined = jump();
 
     isMasqueradesAsUndefined.link(this);
+    emitLoadStructure(regT0, regT1, regT2);
     move(TrustedImmPtr(m_codeBlock->globalObject()), regT0);
     loadPtr(Address(regT1, Structure::globalObjectOffset()), regT1);
     comparePtr(Equal, regT0, regT1, regT0);
@@ -221,8 +219,7 @@ void JIT::emit_op_is_string(Instruction* currentInstruction)
     emitGetVirtualRegister(value, regT0);
     Jump isNotCell = emitJumpIfNotJSCell(regT0);
     
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT1);
-    compare8(Equal, Address(regT1, Structure::typeInfoTypeOffset()), TrustedImm32(StringType), regT0);
+    compare8(Equal, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(StringType), regT0);
     emitTagAsBoolImmediate(regT0);
     Jump done = jump();
     
@@ -277,8 +274,7 @@ void JIT::emit_op_ret_object_or_this(Instruction* currentInstruction)
     // Return the result in %eax.
     emitGetVirtualRegister(currentInstruction[1].u.operand, returnValueGPR);
     Jump notJSCell = emitJumpIfNotJSCell(returnValueGPR);
-    loadPtr(Address(returnValueGPR, JSCell::structureOffset()), regT2);
-    Jump notObject = emitJumpIfNotObject(regT2);
+    Jump notObject = emitJumpIfCellNotObject(returnValueGPR);
 
     // Return.
     emitFunctionEpilogue();
@@ -302,7 +298,9 @@ void JIT::emit_op_to_primitive(Instruction* currentInstruction)
     emitGetVirtualRegister(src, regT0);
     
     Jump isImm = emitJumpIfNotJSCell(regT0);
-    addSlowCase(branchPtr(NotEqual, Address(regT0, JSCell::structureOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
+    addSlowCase(branchStructure(NotEqual, 
+        Address(regT0, JSCell::structureIDOffset()), 
+        m_vm->stringStructure.get()));
     isImm.link(this);
 
     if (dst != src)
@@ -353,8 +351,8 @@ void JIT::emit_op_jeq_null(Instruction* currentInstruction)
     Jump isImmediate = emitJumpIfNotJSCell(regT0);
 
     // First, handle JSCell cases - check MasqueradesAsUndefined bit on the structure.
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-    Jump isNotMasqueradesAsUndefined = branchTest8(Zero, Address(regT2, Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+    Jump isNotMasqueradesAsUndefined = branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+    emitLoadStructure(regT0, regT2, regT1);
     move(TrustedImmPtr(m_codeBlock->globalObject()), regT0);
     addJump(branchPtr(Equal, Address(regT2, Structure::globalObjectOffset()), regT0), target);
     Jump masqueradesGlobalObjectIsForeign = jump();
@@ -376,8 +374,8 @@ void JIT::emit_op_jneq_null(Instruction* currentInstruction)
     Jump isImmediate = emitJumpIfNotJSCell(regT0);
 
     // First, handle JSCell cases - check MasqueradesAsUndefined bit on the structure.
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-    addJump(branchTest8(Zero, Address(regT2, Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)), target);
+    addJump(branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)), target);
+    emitLoadStructure(regT0, regT2, regT1);
     move(TrustedImmPtr(m_codeBlock->globalObject()), regT0);
     addJump(branchPtr(NotEqual, Address(regT2, Structure::globalObjectOffset()), regT0), target);
     Jump wasNotImmediate = jump();
@@ -472,10 +470,8 @@ void JIT::emit_op_get_pnames(Instruction* currentInstruction)
     emitGetVirtualRegister(base, regT0);
     if (!m_codeBlock->isKnownNotImmediate(base))
         isNotObject.append(emitJumpIfNotJSCell(regT0));
-    if (base != m_codeBlock->thisRegister().offset() || m_codeBlock->isStrictMode()) {
-        loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-        isNotObject.append(emitJumpIfNotObject(regT2));
-    }
+    if (base != m_codeBlock->thisRegister().offset() || m_codeBlock->isStrictMode())
+        isNotObject.append(emitJumpIfCellNotObject(regT0));
 
     // We could inline the case where you have a valid cache, but
     // this call doesn't seem to be hot.
@@ -529,7 +525,7 @@ void JIT::emit_op_next_pname(Instruction* currentInstruction)
     emitGetVirtualRegister(base, regT0);
 
     // Test base's structure
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
+    emitLoadStructure(regT0, regT2, regT3);
     callHasProperty.append(branchPtr(NotEqual, regT2, Address(Address(regT1, OBJECT_OFFSETOF(JSPropertyNameIterator, m_cachedStructure)))));
 
     // Test base's prototype chain
@@ -540,7 +536,7 @@ void JIT::emit_op_next_pname(Instruction* currentInstruction)
     Label checkPrototype(this);
     load64(Address(regT2, Structure::prototypeOffset()), regT2);
     callHasProperty.append(emitJumpIfNotJSCell(regT2));
-    loadPtr(Address(regT2, JSCell::structureOffset()), regT2);
+    emitLoadStructure(regT2, regT2, regT1);
     callHasProperty.append(branchPtr(NotEqual, regT2, Address(regT3)));
     addPtr(TrustedImm32(sizeof(Structure*)), regT3);
     branchTestPtr(NonZero, Address(regT3)).linkTo(checkPrototype, this);
@@ -710,12 +706,12 @@ void JIT::emit_op_eq_null(Instruction* currentInstruction)
     emitGetVirtualRegister(src1, regT0);
     Jump isImmediate = emitJumpIfNotJSCell(regT0);
 
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-    Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT2, Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+    Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
     move(TrustedImm32(0), regT0);
     Jump wasNotMasqueradesAsUndefined = jump();
 
     isMasqueradesAsUndefined.link(this);
+    emitLoadStructure(regT0, regT2, regT1);
     move(TrustedImmPtr(m_codeBlock->globalObject()), regT0);
     loadPtr(Address(regT2, Structure::globalObjectOffset()), regT2);
     comparePtr(Equal, regT0, regT2, regT0);
@@ -742,12 +738,12 @@ void JIT::emit_op_neq_null(Instruction* currentInstruction)
     emitGetVirtualRegister(src1, regT0);
     Jump isImmediate = emitJumpIfNotJSCell(regT0);
 
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-    Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT2, Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+    Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
     move(TrustedImm32(1), regT0);
     Jump wasNotMasqueradesAsUndefined = jump();
 
     isMasqueradesAsUndefined.link(this);
+    emitLoadStructure(regT0, regT2, regT1);
     move(TrustedImmPtr(m_codeBlock->globalObject()), regT0);
     loadPtr(Address(regT2, Structure::globalObjectOffset()), regT2);
     comparePtr(NotEqual, regT0, regT2, regT0);
@@ -815,11 +811,12 @@ void JIT::emit_op_to_this(Instruction* currentInstruction)
     emitGetVirtualRegister(currentInstruction[1].u.operand, regT1);
 
     emitJumpSlowCaseIfNotJSCell(regT1);
-    loadPtr(Address(regT1, JSCell::structureOffset()), regT0);
 
-    addSlowCase(branch8(NotEqual, Address(regT0, Structure::typeInfoTypeOffset()), TrustedImm32(FinalObjectType)));
+    addSlowCase(branch8(NotEqual, Address(regT1, JSCell::typeInfoTypeOffset()), TrustedImm32(FinalObjectType)));
     loadPtr(cachedStructure, regT2);
-    addSlowCase(branchPtr(NotEqual, regT0, regT2));
+    addSlowCase(branchTestPtr(Zero, regT2));
+    load32(Address(regT2, Structure::structureIDOffset()), regT2);
+    addSlowCase(branch32(NotEqual, Address(regT1, JSCell::structureIDOffset()), regT2));
 }
 
 void JIT::emit_op_get_callee(Instruction* currentInstruction)
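
The op_to_this fast path now compares 32-bit structure IDs instead of pointers, and because it dereferences the cached Structure* to read its ID, an empty cache slot has to be rejected before that load; that extra branch is why the slow path in the next hunk links one more slow case. The same null-check-then-ID-compare pattern reappears in emitLoadWithStructureCheck further down. In C++ terms the emitted fast path is roughly the following sketch (variable and accessor names are illustrative):

    // C++ equivalent of the new emit_op_to_this fast path (sketch):
    if (!thisValue.isCell())
        goto slowPath;                               // slow case 1: not a cell
    if (thisCell->type() != FinalObjectType)         // read from the cell header, no Structure load
        goto slowPath;                               // slow case 2: wrong type
    Structure* cached = *cachedStructureSlot;
    if (!cached)
        goto slowPath;                               // slow case 3 (new): never dereference an empty cache
    if (thisCell->structureID() != cached->id())
        goto slowPath;                               // slow case 4: 32-bit ID mismatch
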
@@ -893,6 +890,7 @@ void JIT::emitSlow_op_to_this(Instruction* currentInstruction, Vector<SlowCaseEn
     linkSlowCase(iter);
     linkSlowCase(iter);
     linkSlowCase(iter);
+    linkSlowCase(iter);
 
     JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_to_this);
     slowPathCall.call();
index 98d10a3..20a76fd 100644
@@ -210,8 +210,7 @@ void JIT::emit_op_check_has_instance(Instruction* currentInstruction)
     emitJumpSlowCaseIfNotJSCell(baseVal);
     
     // Check that baseVal 'ImplementsHasInstance'.
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT0);
-    addSlowCase(branchTest8(Zero, Address(regT0, Structure::typeInfoFlagsOffset()), TrustedImm32(ImplementsDefaultHasInstance)));
+    addSlowCase(branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(ImplementsDefaultHasInstance)));
 }
 
 void JIT::emit_op_instanceof(Instruction* currentInstruction)
@@ -230,8 +229,7 @@ void JIT::emit_op_instanceof(Instruction* currentInstruction)
     emitJumpSlowCaseIfNotJSCell(proto);
     
     // Check that prototype is an object
-    loadPtr(Address(regT1, JSCell::structureOffset()), regT3);
-    addSlowCase(emitJumpIfNotObject(regT3));
+    addSlowCase(emitJumpIfCellNotObject(regT1));
 
     // Optimistically load the result true, and start looping.
     // Initially, regT1 still contains proto and regT2 still contains value.
@@ -241,7 +239,7 @@ void JIT::emit_op_instanceof(Instruction* currentInstruction)
 
     // Load the prototype of the cell in regT2.  If this is equal to regT1 - WIN!
     // Otherwise, check if we've hit null - if we have then drop out of the loop, if not go again.
-    loadPtr(Address(regT2, JSCell::structureOffset()), regT2);
+    loadPtr(Address(regT2, JSCell::structureIDOffset()), regT2);
     load32(Address(regT2, Structure::prototypeOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload)), regT2);
     Jump isInstance = branchPtr(Equal, regT2, regT1);
     branchTest32(NonZero, regT2).linkTo(loop, this);
@@ -297,12 +295,12 @@ void JIT::emit_op_is_undefined(Instruction* currentInstruction)
     Jump done = jump();
     
     isCell.link(this);
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT1);
-    Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT1, Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+    Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
     move(TrustedImm32(0), regT0);
     Jump notMasqueradesAsUndefined = jump();
     
     isMasqueradesAsUndefined.link(this);
+    loadPtr(Address(regT0, JSCell::structureIDOffset()), regT1);
     move(TrustedImmPtr(m_codeBlock->globalObject()), regT0);
     loadPtr(Address(regT1, Structure::globalObjectOffset()), regT1);
     compare32(Equal, regT0, regT1, regT0);
@@ -341,8 +339,7 @@ void JIT::emit_op_is_string(Instruction* currentInstruction)
     emitLoad(value, regT1, regT0);
     Jump isNotCell = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag));
     
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT1);
-    compare8(Equal, Address(regT1, Structure::typeInfoTypeOffset()), TrustedImm32(StringType), regT0);
+    compare8(Equal, Address(regT0, JSCell::typeInfoTypeOffset()), TrustedImm32(StringType), regT0);
     Jump done = jump();
     
     isNotCell.link(this);
@@ -381,7 +378,7 @@ void JIT::emit_op_to_primitive(Instruction* currentInstruction)
     emitLoad(src, regT1, regT0);
 
     Jump isImm = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag));
-    addSlowCase(branchPtr(NotEqual, Address(regT0, JSCell::structureOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
+    addSlowCase(branchPtr(NotEqual, Address(regT0, JSCell::structureIDOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
     isImm.link(this);
 
     if (dst != src)
@@ -501,9 +498,8 @@ void JIT::emit_op_jeq_null(Instruction* currentInstruction)
 
     Jump isImmediate = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag));
 
-    // First, handle JSCell cases - check MasqueradesAsUndefined bit on the structure.
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-    Jump isNotMasqueradesAsUndefined = branchTest8(Zero, Address(regT2, Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+    Jump isNotMasqueradesAsUndefined = branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+    loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2);
     move(TrustedImmPtr(m_codeBlock->globalObject()), regT0);
     addJump(branchPtr(Equal, Address(regT2, Structure::globalObjectOffset()), regT0), target);
     Jump masqueradesGlobalObjectIsForeign = jump();
@@ -527,9 +523,8 @@ void JIT::emit_op_jneq_null(Instruction* currentInstruction)
 
     Jump isImmediate = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag));
 
-    // First, handle JSCell cases - check MasqueradesAsUndefined bit on the structure.
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-    addJump(branchTest8(Zero, Address(regT2, Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)), target);
+    addJump(branchTest8(Zero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined)), target);
+    loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2);
     move(TrustedImmPtr(m_codeBlock->globalObject()), regT0);
     addJump(branchPtr(NotEqual, Address(regT2, Structure::globalObjectOffset()), regT0), target);
     Jump wasNotImmediate = jump();
@@ -583,8 +578,8 @@ void JIT::emitSlow_op_eq(Instruction* currentInstruction, Vector<SlowCaseEntry>:
     genericCase.append(getSlowCase(iter)); // tags not equal
 
     linkSlowCase(iter); // tags equal and JSCell
-    genericCase.append(branchPtr(NotEqual, Address(regT0, JSCell::structureOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
-    genericCase.append(branchPtr(NotEqual, Address(regT2, JSCell::structureOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
+    genericCase.append(branchPtr(NotEqual, Address(regT0, JSCell::structureIDOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
+    genericCase.append(branchPtr(NotEqual, Address(regT2, JSCell::structureIDOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
 
     // String case.
     callOperation(operationCompareStringEq, regT0, regT2);
@@ -627,8 +622,8 @@ void JIT::emitSlow_op_neq(Instruction* currentInstruction, Vector<SlowCaseEntry>
     genericCase.append(getSlowCase(iter)); // tags not equal
 
     linkSlowCase(iter); // tags equal and JSCell
-    genericCase.append(branchPtr(NotEqual, Address(regT0, JSCell::structureOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
-    genericCase.append(branchPtr(NotEqual, Address(regT2, JSCell::structureOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
+    genericCase.append(branchPtr(NotEqual, Address(regT0, JSCell::structureIDOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
+    genericCase.append(branchPtr(NotEqual, Address(regT2, JSCell::structureIDOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
 
     // String case.
     callOperation(operationCompareStringEq, regT0, regT2);
@@ -658,8 +653,8 @@ void JIT::compileOpStrictEq(Instruction* currentInstruction, CompileOpStrictEqTy
 
     // Jump to a slow case if both are strings.
     Jump notCell = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag));
-    Jump firstNotString = branchPtr(NotEqual, Address(regT0, JSCell::structureOffset()), TrustedImmPtr(m_vm->stringStructure.get()));
-    addSlowCase(branchPtr(Equal, Address(regT2, JSCell::structureOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
+    Jump firstNotString = branchPtr(NotEqual, Address(regT0, JSCell::structureIDOffset()), TrustedImmPtr(m_vm->stringStructure.get()));
+    addSlowCase(branchPtr(Equal, Address(regT2, JSCell::structureIDOffset()), TrustedImmPtr(m_vm->stringStructure.get())));
     notCell.link(this);
     firstNotString.link(this);
 
@@ -710,12 +705,12 @@ void JIT::emit_op_eq_null(Instruction* currentInstruction)
     emitLoad(src, regT1, regT0);
     Jump isImmediate = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag));
 
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-    Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT2, Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+    Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
     move(TrustedImm32(0), regT1);
     Jump wasNotMasqueradesAsUndefined = jump();
 
     isMasqueradesAsUndefined.link(this);
+    loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2);
     move(TrustedImmPtr(m_codeBlock->globalObject()), regT0);
     loadPtr(Address(regT2, Structure::globalObjectOffset()), regT2);
     compare32(Equal, regT0, regT2, regT1);
@@ -741,12 +736,12 @@ void JIT::emit_op_neq_null(Instruction* currentInstruction)
     emitLoad(src, regT1, regT0);
     Jump isImmediate = branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag));
 
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-    Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT2, Structure::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
+    Jump isMasqueradesAsUndefined = branchTest8(NonZero, Address(regT0, JSCell::typeInfoFlagsOffset()), TrustedImm32(MasqueradesAsUndefined));
     move(TrustedImm32(1), regT1);
     Jump wasNotMasqueradesAsUndefined = jump();
 
     isMasqueradesAsUndefined.link(this);
+    loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2);
     move(TrustedImmPtr(m_codeBlock->globalObject()), regT0);
     loadPtr(Address(regT2, Structure::globalObjectOffset()), regT2);
     compare32(NotEqual, regT0, regT2, regT1);
@@ -785,10 +780,8 @@ void JIT::emit_op_get_pnames(Instruction* currentInstruction)
     emitLoad(base, regT1, regT0);
     if (!m_codeBlock->isKnownNotImmediate(base))
         isNotObject.append(branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag)));
-    if (VirtualRegister(base) != m_codeBlock->thisRegister() || m_codeBlock->isStrictMode()) {
-        loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-        isNotObject.append(emitJumpIfNotObject(regT2));
-    }
+    if (VirtualRegister(base) != m_codeBlock->thisRegister() || m_codeBlock->isStrictMode())
+        isNotObject.append(emitJumpIfCellNotObject(regT0));
 
     // We could inline the case where you have a valid cache, but
     // this call doesn't seem to be hot.
@@ -841,7 +834,7 @@ void JIT::emit_op_next_pname(Instruction* currentInstruction)
     loadPtr(payloadFor(base), regT0);
 
     // Test base's structure
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
+    loadPtr(Address(regT0, JSCell::structureIDOffset()), regT2);
     callHasProperty.append(branchPtr(NotEqual, regT2, Address(Address(regT1, OBJECT_OFFSETOF(JSPropertyNameIterator, m_cachedStructure)))));
 
     // Test base's prototype chain
@@ -852,7 +845,7 @@ void JIT::emit_op_next_pname(Instruction* currentInstruction)
     Label checkPrototype(this);
     callHasProperty.append(branch32(Equal, Address(regT2, Structure::prototypeOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), TrustedImm32(JSValue::NullTag)));
     loadPtr(Address(regT2, Structure::prototypeOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload)), regT2);
-    loadPtr(Address(regT2, JSCell::structureOffset()), regT2);
+    loadPtr(Address(regT2, JSCell::structureIDOffset()), regT2);
     callHasProperty.append(branchPtr(NotEqual, regT2, Address(regT3)));
     addPtr(TrustedImm32(sizeof(Structure*)), regT3);
     branchTestPtr(NonZero, Address(regT3)).linkTo(checkPrototype, this);
@@ -1091,8 +1084,8 @@ void JIT::emit_op_to_this(Instruction* currentInstruction)
     emitLoad(thisRegister, regT3, regT2);
 
     addSlowCase(branch32(NotEqual, regT3, TrustedImm32(JSValue::CellTag)));
-    loadPtr(Address(regT2, JSCell::structureOffset()), regT0);
-    addSlowCase(branch8(NotEqual, Address(regT0, Structure::typeInfoTypeOffset()), TrustedImm32(FinalObjectType)));
+    addSlowCase(branch8(NotEqual, Address(regT2, JSCell::typeInfoTypeOffset()), TrustedImm32(FinalObjectType)));
+    loadPtr(Address(regT2, JSCell::structureIDOffset()), regT0);
     loadPtr(cachedStructure, regT2);
     addSlowCase(branchPtr(NotEqual, regT0, regT2));
 }
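
In these 32-bit (JSVALUE32_64) counterparts, loadPtr/branchPtr against JSCell::structureIDOffset() remains correct because 32-bit builds keep a full Structure* in that slot; only the type, flag and indexing reads move to the cell header. One way the platform split could be expressed is sketched below; the typedef is an assumption for illustration, not taken from the patch.

    // Assumed platform split behind the "structure ID" slot (illustrative only):
    #if USE(JSVALUE64)
    typedef uint32_t StructureID;   // index into the VM-wide StructureIDTable
    #else
    typedef Structure* StructureID; // raw pointer, so the loadPtr/branchPtr sites above stay valid
    #endif
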
index e5a8b40..7241bdf 100644
@@ -439,7 +439,7 @@ void JIT_OPERATION operationReallocateStorageAndFinishPut(ExecState* exec, JSObj
     VM& vm = exec->vm();
     NativeCallFrameTracer tracer(&vm, exec);
 
-    ASSERT(structure->outOfLineCapacity() > base->structure()->outOfLineCapacity());
+    ASSERT(structure->outOfLineCapacity() > base->structure(vm)->outOfLineCapacity());
     ASSERT(!vm.heap.storageAllocator().fastPathShouldSucceed(structure->outOfLineCapacity() * sizeof(JSValue)));
     base->setStructureAndReallocateStorageIfNecessary(vm, structure);
     base->putDirect(vm, offset, JSValue::decode(value));
@@ -447,6 +447,7 @@ void JIT_OPERATION operationReallocateStorageAndFinishPut(ExecState* exec, JSObj
 
 static void putByVal(CallFrame* callFrame, JSValue baseValue, JSValue subscript, JSValue value)
 {
+    VM& vm = callFrame->vm();
     if (LIKELY(subscript.isUInt32())) {
         uint32_t i = subscript.asUInt32();
         if (baseValue.isObject()) {
@@ -454,7 +455,7 @@ static void putByVal(CallFrame* callFrame, JSValue baseValue, JSValue subscript,
             if (object->canSetIndexQuickly(i))
                 object->setIndexQuickly(callFrame->vm(), i, value);
             else
-                object->methodTable()->putByIndex(object, callFrame, i, value, callFrame->codeBlock()->isStrictMode());
+                object->methodTable(vm)->putByIndex(object, callFrame, i, value, callFrame->codeBlock()->isStrictMode());
         } else
             baseValue.putByIndex(callFrame, i, value, callFrame->codeBlock()->isStrictMode());
     } else if (isName(subscript)) {
@@ -504,9 +505,9 @@ void JIT_OPERATION operationPutByVal(ExecState* exec, EncodedJSValue encodedBase
         ByValInfo& byValInfo = exec->codeBlock()->getByValInfo(bytecodeOffset - 1);
         ASSERT(!byValInfo.stubRoutine);
 
-        if (hasOptimizableIndexing(object->structure())) {
+        if (hasOptimizableIndexing(object->structure(vm))) {
             // Attempt to optimize.
-            JITArrayMode arrayMode = jitArrayModeForStructure(object->structure());
+            JITArrayMode arrayMode = jitArrayModeForStructure(object->structure(vm));
             if (arrayMode != byValInfo.arrayMode) {
                 JIT::compilePutByVal(&vm, exec->codeBlock(), &byValInfo, ReturnAddressPtr(OUR_RETURN_ADDRESS), arrayMode);
                 didOptimize = true;
@@ -520,7 +521,7 @@ void JIT_OPERATION operationPutByVal(ExecState* exec, EncodedJSValue encodedBase
             // where we see non-index-intercepting objects, this gives 10 iterations worth of
             // opportunity for us to observe that the get_by_val may be polymorphic.
             if (++byValInfo.slowPathCount >= 10
-                || object->structure()->typeInfo().interceptsGetOwnPropertySlotByIndexEvenWhenLengthIsNotZero()) {
+                || object->structure(vm)->typeInfo().interceptsGetOwnPropertySlotByIndexEvenWhenLengthIsNotZero()) {
                 // Don't ever try to optimize.
                 RepatchBuffer repatchBuffer(exec->codeBlock());
                 repatchBuffer.relinkCallerToFunction(ReturnAddressPtr(OUR_RETURN_ADDRESS), FunctionPtr(operationPutByValGeneric));
@@ -550,9 +551,9 @@ void JIT_OPERATION operationDirectPutByVal(ExecState* callFrame, EncodedJSValue
         ByValInfo& byValInfo = callFrame->codeBlock()->getByValInfo(bytecodeOffset - 1);
         ASSERT(!byValInfo.stubRoutine);
         
-        if (hasOptimizableIndexing(object->structure())) {
+        if (hasOptimizableIndexing(object->structure(vm))) {
             // Attempt to optimize.
-            JITArrayMode arrayMode = jitArrayModeForStructure(object->structure());
+            JITArrayMode arrayMode = jitArrayModeForStructure(object->structure(vm));
             if (arrayMode != byValInfo.arrayMode) {
                 JIT::compileDirectPutByVal(&vm, callFrame->codeBlock(), &byValInfo, ReturnAddressPtr(OUR_RETURN_ADDRESS), arrayMode);
                 didOptimize = true;
@@ -566,7 +567,7 @@ void JIT_OPERATION operationDirectPutByVal(ExecState* callFrame, EncodedJSValue
             // where we see non-index-intercepting objects, this gives 10 iterations worth of
             // opportunity for us to observe that the get_by_val may be polymorphic.
             if (++byValInfo.slowPathCount >= 10
-                || object->structure()->typeInfo().interceptsGetOwnPropertySlotByIndexEvenWhenLengthIsNotZero()) {
+                || object->structure(vm)->typeInfo().interceptsGetOwnPropertySlotByIndexEvenWhenLengthIsNotZero()) {
                 // Don't ever try to optimize.
                 RepatchBuffer repatchBuffer(callFrame->codeBlock());
                 repatchBuffer.relinkCallerToFunction(ReturnAddressPtr(OUR_RETURN_ADDRESS), FunctionPtr(operationDirectPutByValGeneric));
@@ -781,11 +782,12 @@ static bool attemptToOptimizeClosureCall(
     if (!calleeAsFunctionCell)
         return false;
     
+    VM& vm = execCallee->vm();
     JSFunction* callee = jsCast<JSFunction*>(calleeAsFunctionCell);
     JSFunction* oldCallee = callLinkInfo.callee.get();
     
     if (!oldCallee
-        || oldCallee->structure() != callee->structure()
+        || oldCallee->structure(vm) != callee->structure(vm)
         || oldCallee->executable() != callee->executable())
         return false;
     
@@ -1361,22 +1363,22 @@ void JIT_OPERATION operationProfileWillCall(ExecState* exec, EncodedJSValue enco
 
 EncodedJSValue JIT_OPERATION operationCheckHasInstance(ExecState* exec, EncodedJSValue encodedValue, EncodedJSValue encodedBaseVal)
 {
-    VM* vm = &exec->vm();
-    NativeCallFrameTracer tracer(vm, exec);
+    VM& vm = exec->vm();
+    NativeCallFrameTracer tracer(&vm, exec);
 
     JSValue value = JSValue::decode(encodedValue);
     JSValue baseVal = JSValue::decode(encodedBaseVal);
 
     if (baseVal.isObject()) {
         JSObject* baseObject = asObject(baseVal);
-        ASSERT(!baseObject->structure()->typeInfo().implementsDefaultHasInstance());
-        if (baseObject->structure()->typeInfo().implementsHasInstance()) {
-            bool result = baseObject->methodTable()->customHasInstance(baseObject, exec, value);
+        ASSERT(!baseObject->structure(vm)->typeInfo().implementsDefaultHasInstance());
+        if (baseObject->structure(vm)->typeInfo().implementsHasInstance()) {
+            bool result = baseObject->methodTable(vm)->customHasInstance(baseObject, exec, value);
             return JSValue::encode(jsBoolean(result));
         }
     }
 
-    vm->throwException(exec, createInvalidParameterError(exec, "instanceof", baseVal));
+    vm.throwException(exec, createInvalidParameterError(exec, "instanceof", baseVal));
     return JSValue::encode(JSValue());
 }
 
@@ -1467,9 +1469,9 @@ EncodedJSValue JIT_OPERATION operationGetByValDefault(ExecState* exec, EncodedJS
         ByValInfo& byValInfo = exec->codeBlock()->getByValInfo(bytecodeOffset - 1);
         ASSERT(!byValInfo.stubRoutine);
         
-        if (hasOptimizableIndexing(object->structure())) {
+        if (hasOptimizableIndexing(object->structure(vm))) {
             // Attempt to optimize.
-            JITArrayMode arrayMode = jitArrayModeForStructure(object->structure());
+            JITArrayMode arrayMode = jitArrayModeForStructure(object->structure(vm));
             if (arrayMode != byValInfo.arrayMode) {
                 JIT::compileGetByVal(&vm, exec->codeBlock(), &byValInfo, ReturnAddressPtr(OUR_RETURN_ADDRESS), arrayMode);
                 didOptimize = true;
@@ -1483,7 +1485,7 @@ EncodedJSValue JIT_OPERATION operationGetByValDefault(ExecState* exec, EncodedJS
             // where we see non-index-intercepting objects, this gives 10 iterations worth of
             // opportunity for us to observe that the get_by_val may be polymorphic.
             if (++byValInfo.slowPathCount >= 10
-                || object->structure()->typeInfo().interceptsGetOwnPropertySlotByIndexEvenWhenLengthIsNotZero()) {
+                || object->structure(vm)->typeInfo().interceptsGetOwnPropertySlotByIndexEvenWhenLengthIsNotZero()) {
                 // Don't ever try to optimize.
                 RepatchBuffer repatchBuffer(exec->codeBlock());
                 repatchBuffer.relinkCallerToFunction(ReturnAddressPtr(OUR_RETURN_ADDRESS), FunctionPtr(operationGetByValGeneric));
@@ -1547,7 +1549,7 @@ EncodedJSValue JIT_OPERATION operationDeleteById(ExecState* exec, EncodedJSValue
     NativeCallFrameTracer tracer(&vm, exec);
 
     JSObject* baseObj = JSValue::decode(encodedBase).toObject(exec);
-    bool couldDelete = baseObj->methodTable()->deleteProperty(baseObj, exec, *identifier);
+    bool couldDelete = baseObj->methodTable(vm)->deleteProperty(baseObj, exec, *identifier);
     JSValue result = jsBoolean(couldDelete);
     if (!couldDelete && exec->codeBlock()->isStrictMode())
         vm.throwException(exec, createTypeError(exec, "Unable to delete property."));
@@ -1559,7 +1561,7 @@ JSCell* JIT_OPERATION operationGetPNames(ExecState* exec, JSObject* obj)
     VM& vm = exec->vm();
     NativeCallFrameTracer tracer(&vm, exec);
 
-    Structure* structure = obj->structure();
+    Structure* structure = obj->structure(vm);
     JSPropertyNameIterator* jsPropertyNameIterator = structure->enumerationCache();
     if (!jsPropertyNameIterator || jsPropertyNameIterator->cachedPrototypeChain() != structure->prototypeChain(exec))
         jsPropertyNameIterator = JSPropertyNameIterator::create(exec, obj);
@@ -1689,10 +1691,10 @@ EncodedJSValue JIT_OPERATION operationGetFromScope(ExecState* exec, Instruction*
     }
 
     // Covers implicit globals. Since they don't exist until they first execute, we didn't know how to cache them at compile time.
-    if (slot.isCacheableValue() && slot.slotBase() == scope && scope->structure()->propertyAccessesAreCacheable()) {
+    if (slot.isCacheableValue() && slot.slotBase() == scope && scope->structure(vm)->propertyAccessesAreCacheable()) {
         if (modeAndType.type() == GlobalProperty || modeAndType.type() == GlobalPropertyWithVarInjectionChecks) {
             ConcurrentJITLocker locker(codeBlock->m_lock);
-            pc[5].u.structure.set(exec->vm(), codeBlock->ownerExecutable(), scope->structure());
+            pc[5].u.structure.set(exec->vm(), codeBlock->ownerExecutable(), scope->structure(vm));
             pc[6].u.operand = slot.cachedOffset();
         }
     }
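
The C++ runtime changes in this file pass the VM explicitly: structure() and methodTable() now take a VM& so the 32-bit ID can be resolved without re-deriving the VM from the ExecState at every call site. A sketch of the accessor shape these call sites assume is below; the names and bodies are illustrative, and the real definitions live in JSCell.h / JSObject.h, outside this excerpt.

    // Illustrative sketch of the VM-taking accessors used above (assumed bodies):
    inline Structure* JSCell::structure(VM& vm) const
    {
        return vm.heap.structureIDTable().get(m_structureID); // table lookup instead of a raw pointer load
    }

    inline const MethodTable* JSObject::methodTable(VM& vm) const
    {
        return &structure(vm)->classInfo()->methodTable;      // same data as before, one indirection later
    }
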
index 98f39c3..94bd09e 100644
@@ -51,7 +51,10 @@ JIT::CodeRef JIT::stringGetByValStubGenerator(VM* vm)
 {
     JSInterfaceJIT jit(vm);
     JumpList failures;
-    failures.append(jit.branchPtr(NotEqual, Address(regT0, JSCell::structureOffset()), TrustedImmPtr(vm->stringStructure.get())));
+    failures.append(JSC::branchStructure(jit,
+        NotEqual, 
+        Address(regT0, JSCell::structureIDOffset()), 
+        vm->stringStructure.get()));
 
     // Load string length to regT2, and start the process of loading the data pointer into regT0
     jit.load32(Address(regT0, ThunkHelpers::jsStringLengthOffset()), regT2);
@@ -106,8 +109,7 @@ void JIT::emit_op_get_by_val(Instruction* currentInstruction)
     zeroExtend32ToPtr(regT1, regT1);
 
     emitJumpSlowCaseIfNotJSCell(regT0, base);
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-    emitArrayProfilingSite(regT2, regT3, profile);
+    emitArrayProfilingSiteWithCell(regT0, regT2, profile);
     and32(TrustedImm32(IndexingShapeMask), regT2);
 
     PatchableJump badType;
@@ -204,7 +206,9 @@ void JIT::emitSlow_op_get_by_val(Instruction* currentInstruction, Vector<SlowCas
     linkSlowCaseIfNotJSCell(iter, base); // base cell check
     Jump nonCell = jump();
     linkSlowCase(iter); // base array check
-    Jump notString = branchPtr(NotEqual, Address(regT0, JSCell::structureOffset()), TrustedImmPtr(m_vm->stringStructure.get()));
+    Jump notString = branchStructure(NotEqual, 
+        Address(regT0, JSCell::structureIDOffset()), 
+        m_vm->stringStructure.get());
     emitNakedCall(CodeLocationLabel(m_vm->getCTIStub(stringGetByValStubGenerator).code()));
     Jump failed = branchTest64(Zero, regT0);
     emitPutVirtualRegister(dst, regT0);
@@ -275,7 +279,7 @@ void JIT::emit_op_get_by_pname(Instruction* currentInstruction)
     emitJumpSlowCaseIfNotJSCell(regT0, base);
 
     // Test base's structure
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
+    emitLoadStructure(regT0, regT2, regT3);
     addSlowCase(branchPtr(NotEqual, regT2, Address(regT1, OBJECT_OFFSETOF(JSPropertyNameIterator, m_cachedStructure))));
     load32(addressFor(i), regT3);
     sub32(TrustedImm32(1), regT3);
@@ -316,8 +320,7 @@ void JIT::emit_op_put_by_val(Instruction* currentInstruction)
     // See comment in op_get_by_val.
     zeroExtend32ToPtr(regT1, regT1);
     emitJumpSlowCaseIfNotJSCell(regT0, base);
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-    emitArrayProfilingSite(regT2, regT3, profile);
+    emitArrayProfilingSiteWithCell(regT0, regT2, profile);
     and32(TrustedImm32(IndexingShapeMask), regT2);
     
     PatchableJump badType;
@@ -513,10 +516,8 @@ void JIT::emit_op_get_by_id(Instruction* currentInstruction)
     
     emitJumpSlowCaseIfNotJSCell(regT0, baseVReg);
     
-    if (*ident == m_vm->propertyNames->length && shouldEmitProfiling()) {
-        loadPtr(Address(regT0, JSCell::structureOffset()), regT1);
-        emitArrayProfilingSiteForBytecodeIndex(regT1, regT2, m_bytecodeOffset);
-    }
+    if (*ident == m_vm->propertyNames->length && shouldEmitProfiling())
+        emitArrayProfilingSiteForBytecodeIndexWithCell(regT0, regT1, m_bytecodeOffset);
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), RegisterSet::specialRegisters(),
@@ -693,7 +694,9 @@ void JIT::emitLoadWithStructureCheck(int scope, Structure** structureSlot)
 {
     emitGetVirtualRegister(scope, regT0);
     loadPtr(structureSlot, regT1);
-    addSlowCase(branchPtr(NotEqual, Address(regT0, JSCell::structureOffset()), regT1));
+    addSlowCase(branchTestPtr(Zero, regT1));
+    load32(Address(regT1, Structure::structureIDOffset()), regT1);
+    addSlowCase(branch32(NotEqual, Address(regT0, JSCell::structureIDOffset()), regT1));
 }
 
 void JIT::emitGetGlobalProperty(uintptr_t* operandSlot)
@@ -754,6 +757,8 @@ void JIT::emitSlow_op_get_from_scope(Instruction* currentInstruction, Vector<Slo
     if (resolveType == GlobalVar || resolveType == ClosureVar)
         return;
 
+    if (resolveType == GlobalProperty || resolveType == GlobalPropertyWithVarInjectionChecks)
+        linkSlowCase(iter);
     linkSlowCase(iter);
     callOperation(WithProfile, operationGetFromScope, dst, currentInstruction);
 }
@@ -855,6 +860,8 @@ void JIT::emitSlow_op_put_to_scope(Instruction* currentInstruction, Vector<SlowC
     if ((resolveType == GlobalVar || resolveType == GlobalVarWithVarInjectionChecks)
         && currentInstruction[5].u.watchpointSet->state() != IsInvalidated)
         linkCount++;
+    if (resolveType == GlobalProperty || resolveType == GlobalPropertyWithVarInjectionChecks)
+        linkCount++;
     if (!linkCount)
         return;
     while (linkCount--)
@@ -872,25 +879,14 @@ void JIT::emit_op_init_global_const(Instruction* currentInstruction)
 
 #endif // USE(JSVALUE64)
 
-JIT::Jump JIT::checkMarkWord(RegisterID owner, RegisterID scratch1, RegisterID scratch2)
+JIT::Jump JIT::checkMarkWord(RegisterID owner)
 {
-    move(owner, scratch1);
-    move(owner, scratch2);
-
-    andPtr(TrustedImmPtr(MarkedBlock::blockMask), scratch1);
-    andPtr(TrustedImmPtr(~MarkedBlock::blockMask), scratch2);
-
-    rshift32(TrustedImm32(3 + 4), scratch2);
-
-    return branchTest8(Zero, BaseIndex(scratch1, scratch2, TimesOne, MarkedBlock::offsetOfMarks()));
+    return branchTest8(Zero, Address(owner, JSCell::gcDataOffset()));
 }
 
 JIT::Jump JIT::checkMarkWord(JSCell* owner)
 {
-    MarkedBlock* block = MarkedBlock::blockFor(owner);
-    size_t index = (reinterpret_cast<size_t>(owner) & ~MarkedBlock::blockMask) >> (3 + 4);
-    void* address = (reinterpret_cast<char*>(block) + MarkedBlock::offsetOfMarks()) + index;
-
+    uint8_t* address = reinterpret_cast<uint8_t*>(owner) + JSCell::gcDataOffset();
     return branchTest8(Zero, AbsoluteAddress(address));
 }
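
checkMarkWord is the write barrier's "has the owner already been marked?" test. The old version had to reconstruct the owner's MarkedBlock and index its mark byte array (hence the two scratch registers); with a GC byte now sitting in the cell header, it is a single byte test against the owner itself. In plain C++ the before/after amounts to the sketch below, derived directly from the removed and added lines.

    // Before: locate the MarkedBlock, then index its per-cell mark bytes.
    static bool ownerIsMarkedOld(JSCell* owner)
    {
        MarkedBlock* block = MarkedBlock::blockFor(owner);
        size_t index = (reinterpret_cast<size_t>(owner) & ~MarkedBlock::blockMask) >> (3 + 4);
        return *(reinterpret_cast<char*>(block) + MarkedBlock::offsetOfMarks() + index);
    }

    // After: the GC data byte lives in the cell header itself.
    static bool ownerIsMarkedNew(JSCell* owner)
    {
        return *(reinterpret_cast<uint8_t*>(owner) + JSCell::gcDataOffset());
    }
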
 
@@ -908,7 +904,7 @@ void JIT::emitWriteBarrier(unsigned owner, unsigned value, WriteBarrierMode mode
     if (mode == ShouldFilterBaseAndValue)
         ownerNotCell = branchTest64(NonZero, regT0, tagMaskRegister);
 
-    Jump ownerNotMarked = checkMarkWord(regT0, regT1, regT2);
+    Jump ownerNotMarked = checkMarkWord(regT0);
     callOperation(operationUnconditionalWriteBarrier, regT0);
     ownerNotMarked.link(this);
 
@@ -957,7 +953,7 @@ void JIT::emitWriteBarrier(unsigned owner, unsigned value, WriteBarrierMode mode
     if (mode == ShouldFilterBaseAndValue)
         ownerNotCell = branch32(NotEqual, regT0, TrustedImm32(JSValue::CellTag));
 
-    Jump ownerNotMarked = checkMarkWord(regT1, regT0, regT2);
+    Jump ownerNotMarked = checkMarkWord(regT1);
     callOperation(operationUnconditionalWriteBarrier, regT1);
     ownerNotMarked.link(this);
 
@@ -1013,7 +1009,7 @@ JIT::Jump JIT::addStructureTransitionCheck(JSCell* object, Structure* structure,
         structure->addTransitionWatchpoint(stubInfo->addWatchpoint(m_codeBlock));
 #if !ASSERT_DISABLED
         move(TrustedImmPtr(object), scratch);
-        Jump ok = branchPtr(Equal, Address(scratch, JSCell::structureOffset()), TrustedImmPtr(structure));
+        Jump ok = branchStructure(Equal, Address(scratch, JSCell::structureIDOffset()), structure);
         breakpoint();
         ok.link(this);
 #endif
@@ -1022,7 +1018,7 @@ JIT::Jump JIT::addStructureTransitionCheck(JSCell* object, Structure* structure,
     }
     
     move(TrustedImmPtr(object), scratch);
-    return branchPtr(NotEqual, Address(scratch, JSCell::structureOffset()), TrustedImmPtr(structure));
+    return branchStructure(NotEqual, Address(scratch, JSCell::structureIDOffset()), structure);
 }
 
 void JIT::addStructureTransitionCheck(JSCell* object, Structure* structure, StructureStubInfo* stubInfo, JumpList& failureCases, RegisterID scratch)
@@ -1180,8 +1176,8 @@ JIT::JumpList JIT::emitIntTypedArrayGetByVal(Instruction*, PatchableJump& badTyp
     
     JumpList slowCases;
     
-    loadPtr(Address(base, JSCell::structureOffset()), scratch);
-    badType = patchableBranchPtr(NotEqual, Address(scratch, Structure::classInfoOffset()), TrustedImmPtr(classInfoForType(type)));
+    load8(Address(base, JSCell::typeInfoTypeOffset()), scratch);
+    badType = patchableBranch32(NotEqual, scratch, TrustedImm32(typeForTypedArrayType(type)));
     slowCases.append(branch32(AboveOrEqual, property, Address(base, JSArrayBufferView::offsetOfLength())));
     loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), base);
     
@@ -1250,9 +1246,9 @@ JIT::JumpList JIT::emitFloatTypedArrayGetByVal(Instruction*, PatchableJump& badT
 #endif
     
     JumpList slowCases;
-    
-    loadPtr(Address(base, JSCell::structureOffset()), scratch);
-    badType = patchableBranchPtr(NotEqual, Address(scratch, Structure::classInfoOffset()), TrustedImmPtr(classInfoForType(type)));
+
+    load8(Address(base, JSCell::typeInfoTypeOffset()), scratch);
+    badType = patchableBranch32(NotEqual, scratch, TrustedImm32(typeForTypedArrayType(type)));
     slowCases.append(branch32(AboveOrEqual, property, Address(base, JSArrayBufferView::offsetOfLength())));
     loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), base);
     
@@ -1304,8 +1300,8 @@ JIT::JumpList JIT::emitIntTypedArrayPutByVal(Instruction* currentInstruction, Pa
     
     JumpList slowCases;
     
-    loadPtr(Address(base, JSCell::structureOffset()), earlyScratch);
-    badType = patchableBranchPtr(NotEqual, Address(earlyScratch, Structure::classInfoOffset()), TrustedImmPtr(classInfoForType(type)));
+    load8(Address(base, JSCell::typeInfoTypeOffset()), earlyScratch);
+    badType = patchableBranch32(NotEqual, earlyScratch, TrustedImm32(typeForTypedArrayType(type)));
     Jump inBounds = branch32(Below, property, Address(base, JSArrayBufferView::offsetOfLength()));
     emitArrayProfileOutOfBoundsSpecialCase(profile);
     Jump done = jump();
@@ -1376,8 +1372,8 @@ JIT::JumpList JIT::emitFloatTypedArrayPutByVal(Instruction* currentInstruction,
     
     JumpList slowCases;
     
-    loadPtr(Address(base, JSCell::structureOffset()), earlyScratch);
-    badType = patchableBranchPtr(NotEqual, Address(earlyScratch, Structure::classInfoOffset()), TrustedImmPtr(classInfoForType(type)));
+    load8(Address(base, JSCell::typeInfoTypeOffset()), earlyScratch);
+    badType = patchableBranch32(NotEqual, earlyScratch, TrustedImm32(typeForTypedArrayType(type)));
     Jump inBounds = branch32(Below, property, Address(base, JSArrayBufferView::offsetOfLength()));
     emitArrayProfileOutOfBoundsSpecialCase(profile);
     Jump done = jump();
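
In the four typed array fast paths above, the guard against the wrong kind of base object used to load the Structure and compare its ClassInfo pointer; it now compares the JSType byte in the cell header against the JSType for the expected TypedArrayType, turning a pointer load plus pointer compare into a single byte load and a 32-bit immediate compare. Roughly (sketch; typeForTypedArrayType is the mapping function these hunks already call):

    // What the emitted typed array guard now amounts to (sketch):
    uint8_t cellType = *(reinterpret_cast<uint8_t*>(base) + JSCell::typeInfoTypeOffset());
    if (cellType != typeForTypedArrayType(type))
        goto badType;   // patchable jump taken when the base is not the expected view type
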
index d18fea7..daf6d2a 100644
@@ -83,7 +83,7 @@ JIT::CodeRef JIT::stringGetByValStubGenerator(VM* vm)
 {
     JSInterfaceJIT jit(vm);
     JumpList failures;
-    failures.append(jit.branchPtr(NotEqual, Address(regT0, JSCell::structureOffset()), TrustedImmPtr(vm->stringStructure.get())));
+    failures.append(JSC::branchStructure(jit, NotEqual, Address(regT0, JSCell::structureIDOffset()), vm->stringStructure.get()));
     
     // Load string length to regT1, and start the process of loading the data pointer into regT0
     jit.load32(Address(regT0, ThunkHelpers::jsStringLengthOffset()), regT1);
@@ -132,8 +132,7 @@ void JIT::emit_op_get_by_val(Instruction* currentInstruction)
     
     addSlowCase(branch32(NotEqual, regT3, TrustedImm32(JSValue::Int32Tag)));
     emitJumpSlowCaseIfNotJSCell(base, regT1);
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT1);
-    emitArrayProfilingSite(regT1, regT3, profile);
+    emitArrayProfilingSiteWithCell(regT0, regT1, profile);
     and32(TrustedImm32(IndexingShapeMask), regT1);
 
     PatchableJump badType;
@@ -235,7 +234,7 @@ void JIT::emitSlow_op_get_by_val(Instruction* currentInstruction, Vector<SlowCas
 
     Jump nonCell = jump();
     linkSlowCase(iter); // base array check
-    Jump notString = branchPtr(NotEqual, Address(regT0, JSCell::structureOffset()), TrustedImmPtr(m_vm->stringStructure.get()));
+    Jump notString = branchStructure(NotEqual, Address(regT0, JSCell::structureIDOffset()), m_vm->stringStructure.get());
     emitNakedCall(m_vm->getCTIStub(stringGetByValStubGenerator).code());
     Jump failed = branchTestPtr(Zero, regT0);
     emitStore(dst, regT1, regT0);
@@ -276,8 +275,7 @@ void JIT::emit_op_put_by_val(Instruction* currentInstruction)
     
     addSlowCase(branch32(NotEqual, regT3, TrustedImm32(JSValue::Int32Tag)));
     emitJumpSlowCaseIfNotJSCell(base, regT1);
-    loadPtr(Address(regT0, JSCell::structureOffset()), regT1);
-    emitArrayProfilingSite(regT1, regT3, profile);
+    emitArrayProfilingSiteWithCell(regT0, regT1, profile);
     and32(TrustedImm32(IndexingShapeMask), regT1);
     
     PatchableJump badType;
@@ -475,10 +473,8 @@ void JIT::emit_op_get_by_id(Instruction* currentInstruction)
     emitLoad(base, regT1, regT0);
     emitJumpSlowCaseIfNotJSCell(base, regT1);
 
-    if (*ident == m_vm->propertyNames->length && shouldEmitProfiling()) {
-        loadPtr(Address(regT0, JSCell::structureOffset()), regT2);
-        emitArrayProfilingSiteForBytecodeIndex(regT2, regT3, m_bytecodeOffset);
-    }
+    if (*ident == m_vm->propertyNames->length && shouldEmitProfiling())
+        emitArrayProfilingSiteForBytecodeIndexWithCell(regT0, regT2, m_bytecodeOffset);
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), RegisterSet::specialRegisters(),
@@ -634,7 +630,7 @@ void JIT::emit_op_get_by_pname(Instruction* currentInstruction)
     emitLoadPayload(iter, regT1);
     
     // Test base's structure
-    loadPtr(Address(regT2, JSCell::structureOffset()), regT0);
+    loadPtr(Address(regT2, JSCell::structureIDOffset()), regT0);
     addSlowCase(branchPtr(NotEqual, regT0, Address(regT1, OBJECT_OFFSETOF(JSPropertyNameIterator, m_cachedStructure))));
     load32(addressFor(i), regT3);
     sub32(TrustedImm32(1), regT3);
@@ -731,7 +727,7 @@ void JIT::emitLoadWithStructureCheck(int scope, Structure** structureSlot)
 {
     emitLoad(scope, regT1, regT0);
     loadPtr(structureSlot, regT2);
-    addSlowCase(branchPtr(NotEqual, Address(regT0, JSCell::structureOffset()), regT2));
+    addSlowCase(branchPtr(NotEqual, Address(regT0, JSCell::structureIDOffset()), regT2));
 }
 
 void JIT::emitGetGlobalProperty(uintptr_t* operandSlot)
index ac1ab79..1e45f03 100644
@@ -73,7 +73,7 @@ namespace JSC {
         void emitFastArithIntToImmNoCheck(RegisterID src, RegisterID dest);
 #endif
 
-        Jump emitJumpIfNotType(RegisterID baseReg, RegisterID scratchReg, JSType);
+        Jump emitJumpIfNotType(RegisterID baseReg, JSType);
 
         void emitGetFromCallFrameHeaderPtr(JSStack::CallFrameHeaderEntry, RegisterID to, RegisterID from = callFrameRegister);
         void emitPutToCallFrameHeader(RegisterID from, JSStack::CallFrameHeaderEntry);
@@ -224,10 +224,9 @@ namespace JSC {
     }
 #endif
 
-    ALWAYS_INLINE JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfNotType(RegisterID baseReg, RegisterID scratchReg, JSType type)
+    ALWAYS_INLINE JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfNotType(RegisterID baseReg, JSType type)
     {
-        loadPtr(Address(baseReg, JSCell::structureOffset()), scratchReg);
-        return branch8(NotEqual, Address(scratchReg, Structure::typeInfoTypeOffset()), TrustedImm32(type));
+        return branch8(NotEqual, Address(baseReg, JSCell::typeInfoTypeOffset()), TrustedImm32(type));
     }
 
     ALWAYS_INLINE void JSInterfaceJIT::emitGetFromCallFrameHeaderPtr(JSStack::CallFrameHeaderEntry entry, RegisterID to, RegisterID from)
index 2fe6206..56804dd 100644
@@ -33,6 +33,8 @@
 #include "DFGSpeculativeJIT.h"
 #include "FTLThunks.h"
 #include "GCAwareJITStubRoutine.h"
+#include "JIT.h"
+#include "JITInlines.h"
 #include "LinkBuffer.h"
 #include "JSCInlines.h"
 #include "PolymorphicPutByIdList.h"
@@ -103,7 +105,7 @@ static void repatchByIdSelfAccess(VM& vm, CodeBlock* codeBlock, StructureStubInf
     repatchCall(repatchBuffer, stubInfo.callReturnLocation, slowPathFunction);
 
     // Patch the structure check & the offset of the load.
-    repatchBuffer.repatch(stubInfo.callReturnLocation.dataLabelPtrAtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall), structure);
+    repatchBuffer.repatch(stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall), bitwise_cast<int32_t>(structure->id()));
     repatchBuffer.setLoadInstructionIsActive(stubInfo.callReturnLocation.convertibleLoadAtOffset(stubInfo.patch.deltaCallToStorageLoad), isOutOfLineOffset(offset));
 #if USE(JSVALUE64)
     if (compact)
@@ -131,10 +133,10 @@ static void addStructureTransitionCheck(
         // If we execute this code, the object must have the structure we expect. Assert
         // this in debug modes.
         jit.move(MacroAssembler::TrustedImmPtr(object), scratchGPR);
-        MacroAssembler::Jump ok = jit.branchPtr(
+        MacroAssembler::Jump ok = branchStructure(jit,
             MacroAssembler::Equal,
-            MacroAssembler::Address(scratchGPR, JSCell::structureOffset()),
-            MacroAssembler::TrustedImmPtr(structure));
+            MacroAssembler::Address(scratchGPR, JSCell::structureIDOffset()),
+            structure);
         jit.breakpoint();
         ok.link(&jit);
 #endif
@@ -143,10 +145,10 @@ static void addStructureTransitionCheck(
     
     jit.move(MacroAssembler::TrustedImmPtr(object), scratchGPR);
     failureCases.append(
-        jit.branchPtr(
+        branchStructure(jit,
             MacroAssembler::NotEqual,
-            MacroAssembler::Address(scratchGPR, JSCell::structureOffset()),
-            MacroAssembler::TrustedImmPtr(structure)));
+            MacroAssembler::Address(scratchGPR, JSCell::structureIDOffset()),
+            structure));
 }
 
 static void addStructureTransitionCheck(
@@ -165,10 +167,10 @@ static void addStructureTransitionCheck(
 
 static void replaceWithJump(RepatchBuffer& repatchBuffer, StructureStubInfo& stubInfo, const MacroAssemblerCodePtr target)
 {
-    if (MacroAssembler::canJumpReplacePatchableBranchPtrWithPatch()) {
+    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
         repatchBuffer.replaceWithJump(
-            RepatchBuffer::startOfPatchableBranchPtrWithPatchOnAddress(
-                stubInfo.callReturnLocation.dataLabelPtrAtOffset(
+            RepatchBuffer::startOfPatchableBranch32WithPatchOnAddress(
+                stubInfo.callReturnLocation.dataLabel32AtOffset(
                     -(intptr_t)stubInfo.patch.deltaCheckImmToCall)),
             CodeLocationLabel(target));
         return;
@@ -249,7 +251,10 @@ static ProtoChainGenerationResult generateProtoChainAccessStub(ExecState* exec,
     
     MacroAssembler::JumpList failureCases;
     
-    failureCases.append(stubJit.branchPtr(MacroAssembler::NotEqual, MacroAssembler::Address(baseGPR, JSCell::structureOffset()), MacroAssembler::TrustedImmPtr(structure)));
+    failureCases.append(branchStructure(stubJit,
+        MacroAssembler::NotEqual, 
+        MacroAssembler::Address(baseGPR, JSCell::structureIDOffset()), 
+        structure));
 
     CodeBlock* codeBlock = exec->codeBlock();
     if (structure->typeInfo().newImpurePropertyFiresWatchpoints())
@@ -380,8 +385,7 @@ static bool tryCacheGetByID(ExecState* exec, JSValue baseValue, const Identifier
         
         MacroAssembler::JumpList failureCases;
        
-        stubJit.loadPtr(MacroAssembler::Address(baseGPR, JSCell::structureOffset()), scratchGPR); 
-        stubJit.load8(MacroAssembler::Address(scratchGPR, Structure::indexingTypeOffset()), scratchGPR);
+        stubJit.load8(MacroAssembler::Address(baseGPR, JSCell::indexingTypeOffset()), scratchGPR);
         failureCases.append(stubJit.branchTest32(MacroAssembler::Zero, scratchGPR, MacroAssembler::TrustedImm32(IsArray)));
         failureCases.append(stubJit.branchTest32(MacroAssembler::Zero, scratchGPR, MacroAssembler::TrustedImm32(IndexingShapeMask)));
         
@@ -573,7 +577,10 @@ static bool tryBuildGetByIDList(ExecState* exec, JSValue baseValue, const Identi
         
         CCallHelpers stubJit(vm, codeBlock);
         
-        MacroAssembler::Jump wrongStruct = stubJit.branchPtr(MacroAssembler::NotEqual, MacroAssembler::Address(baseGPR, JSCell::structureOffset()), MacroAssembler::TrustedImmPtr(structure));
+        MacroAssembler::Jump wrongStruct = branchStructure(stubJit,
+            MacroAssembler::NotEqual, 
+            MacroAssembler::Address(baseGPR, JSCell::structureIDOffset()), 
+            structure);
         
         // The strategy we use for stubs is as follows:
         // 1) Call DFG helper that calls the getter.
@@ -817,7 +824,7 @@ static MacroAssembler::Call writeBarrier(CCallHelpers& jit, GPRReg owner, GPRReg
     ASSERT(owner != scratch1);
     ASSERT(owner != scratch2);
 
-    MacroAssembler::Jump definitelyNotMarked = DFG::SpeculativeJIT::genericWriteBarrier(jit, owner, scratch1, scratch2);
+    MacroAssembler::Jump definitelyNotMarked = DFG::SpeculativeJIT::genericWriteBarrier(jit, owner);
     MacroAssembler::Call call = storeToWriteBarrierBuffer(jit, owner, scratch1, scratch2, allocator);
     definitelyNotMarked.link(&jit);
     return call;
@@ -858,10 +865,10 @@ static void emitPutReplaceStub(
 
     allocator.preserveReusedRegistersByPushing(stubJit);
 
-    MacroAssembler::Jump badStructure = stubJit.branchPtr(
+    MacroAssembler::Jump badStructure = branchStructure(stubJit,
         MacroAssembler::NotEqual,
-        MacroAssembler::Address(baseGPR, JSCell::structureOffset()),
-        MacroAssembler::TrustedImmPtr(structure));
+        MacroAssembler::Address(baseGPR, JSCell::structureIDOffset()),
+        structure);
 
 #if USE(JSVALUE64)
     if (isInlineOffset(slot.cachedOffset()))
@@ -975,7 +982,10 @@ static void emitPutTransitionStub(
             
     ASSERT(oldStructure->transitionWatchpointSetHasBeenInvalidated());
     
-    failureCases.append(stubJit.branchPtr(MacroAssembler::NotEqual, MacroAssembler::Address(baseGPR, JSCell::structureOffset()), MacroAssembler::TrustedImmPtr(oldStructure)));
+    failureCases.append(branchStructure(stubJit,
+        MacroAssembler::NotEqual, 
+        MacroAssembler::Address(baseGPR, JSCell::structureIDOffset()), 
+        oldStructure));
     
     addStructureTransitionCheck(
         oldStructure->storedPrototype(), exec->codeBlock(), stubInfo, stubJit, failureCases,
@@ -1026,7 +1036,10 @@ static void emitPutTransitionStub(
         scratchGPR1HasStorage = true;
     }
 
-    stubJit.storePtr(MacroAssembler::TrustedImmPtr(structure), MacroAssembler::Address(baseGPR, JSCell::structureOffset()));
+    ASSERT(oldStructure->typeInfo().type() == structure->typeInfo().type());
+    ASSERT(oldStructure->typeInfo().inlineTypeFlags() == structure->typeInfo().inlineTypeFlags());
+    ASSERT(oldStructure->indexingType() == structure->indexingType());
+    stubJit.store32(MacroAssembler::TrustedImm32(reinterpret_cast<uint32_t>(structure->id())), MacroAssembler::Address(baseGPR, JSCell::structureIDOffset()));
 #if USE(JSVALUE64)
     if (isInlineOffset(slot.cachedOffset()))
         stubJit.store64(valueGPR, MacroAssembler::Address(baseGPR, JSObject::offsetOfInlineStorage() + offsetInInlineStorage(slot.cachedOffset()) * sizeof(JSValue)));
@@ -1271,7 +1284,7 @@ static bool tryBuildPutByIdList(ExecState* exec, JSValue baseValue, const Identi
         
         return true;
     }
-    
+
     return false;
 }
 
@@ -1346,10 +1359,10 @@ static bool tryRepatchIn(
             needToRestoreScratch = false;
         
         MacroAssembler::JumpList failureCases;
-        failureCases.append(stubJit.branchPtr(
+        failureCases.append(branchStructure(stubJit,
             MacroAssembler::NotEqual,
-            MacroAssembler::Address(baseGPR, JSCell::structureOffset()),
-            MacroAssembler::TrustedImmPtr(structure)));
+            MacroAssembler::Address(baseGPR, JSCell::structureIDOffset()),
+            structure));
 
         CodeBlock* codeBlock = exec->codeBlock();
         if (structure->typeInfo().newImpurePropertyFiresWatchpoints())
@@ -1500,10 +1513,10 @@ void linkClosureCall(
 #endif
     
     slowPath.append(
-        stubJit.branchPtr(
+        branchStructure(stubJit,
             CCallHelpers::NotEqual,
-            CCallHelpers::Address(calleeGPR, JSCell::structureOffset()),
-            CCallHelpers::TrustedImmPtr(structure)));
+            CCallHelpers::Address(calleeGPR, JSCell::structureIDOffset()),
+            structure));
     
     slowPath.append(
         stubJit.branchPtr(
@@ -1573,16 +1586,16 @@ void linkClosureCall(
 void resetGetByID(RepatchBuffer& repatchBuffer, StructureStubInfo& stubInfo)
 {
     repatchCall(repatchBuffer, stubInfo.callReturnLocation, operationGetByIdOptimize);
-    CodeLocationDataLabelPtr structureLabel = stubInfo.callReturnLocation.dataLabelPtrAtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
-    if (MacroAssembler::canJumpReplacePatchableBranchPtrWithPatch()) {
-        repatchBuffer.revertJumpReplacementToPatchableBranchPtrWithPatch(
-            RepatchBuffer::startOfPatchableBranchPtrWithPatchOnAddress(structureLabel),
+    CodeLocationDataLabel32 structureLabel = stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
+    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
+        repatchBuffer.revertJumpReplacementToPatchableBranch32WithPatch(
+            RepatchBuffer::startOfPatchableBranch32WithPatchOnAddress(structureLabel),
             MacroAssembler::Address(
                 static_cast<MacroAssembler::RegisterID>(stubInfo.patch.baseGPR),
-                JSCell::structureOffset()),
-            reinterpret_cast<void*>(unusedPointer));
+                JSCell::structureIDOffset()),
+            static_cast<int32_t>(unusedPointer));
     }
-    repatchBuffer.repatch(structureLabel, reinterpret_cast<void*>(unusedPointer));
+    repatchBuffer.repatch(structureLabel, static_cast<int32_t>(unusedPointer));
 #if USE(JSVALUE64)
     repatchBuffer.repatch(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToLoadOrStore), 0);
 #else
@@ -1607,16 +1620,16 @@ void resetPutByID(RepatchBuffer& repatchBuffer, StructureStubInfo& stubInfo)
         optimizedFunction = operationPutByIdDirectNonStrictOptimize;
     }
     repatchCall(repatchBuffer, stubInfo.callReturnLocation, optimizedFunction);
-    CodeLocationDataLabelPtr structureLabel = stubInfo.callReturnLocation.dataLabelPtrAtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
-    if (MacroAssembler::canJumpReplacePatchableBranchPtrWithPatch()) {
-        repatchBuffer.revertJumpReplacementToPatchableBranchPtrWithPatch(
-            RepatchBuffer::startOfPatchableBranchPtrWithPatchOnAddress(structureLabel),
+    CodeLocationDataLabel32 structureLabel = stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
+    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
+        repatchBuffer.revertJumpReplacementToPatchableBranch32WithPatch(
+            RepatchBuffer::startOfPatchableBranch32WithPatchOnAddress(structureLabel),
             MacroAssembler::Address(
                 static_cast<MacroAssembler::RegisterID>(stubInfo.patch.baseGPR),
-                JSCell::structureOffset()),
-            reinterpret_cast<void*>(unusedPointer));
+                JSCell::structureIDOffset()),
+            static_cast<int32_t>(unusedPointer));
     }
-    repatchBuffer.repatch(structureLabel, reinterpret_cast<void*>(unusedPointer));
+    repatchBuffer.repatch(structureLabel, static_cast<int32_t>(unusedPointer));
 #if USE(JSVALUE64)
     repatchBuffer.repatch(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToLoadOrStore), 0);
 #else
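
The resetGetByID/resetPutByID changes above swap the patched pointer label for a 32-bit label. Conceptually, the structure guard on the generated fast path is now a 32-bit compare of the cell's structure ID against an immediate baked into the code, and resetting the cache just repatches that immediate to an unused value. A minimal C-level illustration of that guard (the struct and names are placeholders, not JSC declarations):

    #include <cstdint>

    // Placeholder cell header: only the field the guard reads.
    struct CellHeaderSketch {
        uint32_t structureID; // 32-bit ID; the old code compared a full Structure*
    };

    // Rough shape of the inline cache's structure check after this change:
    // a 32-bit compare against the immediate patched into the branch.
    inline bool structureGuardPasses(const CellHeaderSketch& cell, uint32_t expectedStructureID)
    {
        return cell.structureID == expectedStructureID;
    }
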
index d80537b..10c5c3e 100644 (file)
@@ -29,6 +29,8 @@
 #if ENABLE(JIT)
 
 #include "Executable.h"
+#include "JIT.h"
+#include "JITInlines.h"
 #include "JSInterfaceJIT.h"
 #include "JSStack.h"
 #include "LinkBuffer.h"
@@ -67,14 +69,18 @@ namespace JSC {
         void loadJSStringArgument(VM& vm, int argument, RegisterID dst)
         {
             loadCellArgument(argument, dst);
-            m_failures.append(branchPtr(NotEqual, Address(dst, JSCell::structureOffset()), TrustedImmPtr(vm.stringStructure.get())));
+            m_failures.append(branchStructure(*this, NotEqual, 
+                Address(dst, JSCell::structureIDOffset()), 
+                vm.stringStructure.get()));
         }
         
         void loadArgumentWithSpecificClass(const ClassInfo* classInfo, int argument, RegisterID dst, RegisterID scratch)
         {
             loadCellArgument(argument, dst);
-            loadPtr(Address(dst, JSCell::structureOffset()), scratch);
+            emitLoadStructure(dst, scratch, dst);
             appendFailure(branchPtr(NotEqual, Address(scratch, Structure::classInfoOffset()), TrustedImmPtr(classInfo)));
+            // We have to reload the argument since emitLoadStructure clobbered it.
+            loadCellArgument(argument, dst);
         }
         
         void loadInt32Argument(int argument, RegisterID dst, Jump& failTarget)
index 49a2e6d..dd4e414 100644 (file)
@@ -27,6 +27,7 @@
 #include "ThunkGenerators.h"
 
 #include "CodeBlock.h"
+#include "DFGSpeculativeJIT.h"
 #include "JITOperations.h"
 #include "JSArray.h"
 #include "JSArrayIterator.h"
@@ -189,7 +190,7 @@ static MacroAssemblerCodeRef virtualForThunkGenerator(
             CCallHelpers::NotEqual, GPRInfo::regT1,
             CCallHelpers::TrustedImm32(JSValue::CellTag)));
 #endif
-    jit.loadPtr(CCallHelpers::Address(GPRInfo::regT0, JSCell::structureOffset()), GPRInfo::regT2);
+    AssemblyHelpers::emitLoadStructure(jit, GPRInfo::regT0, GPRInfo::regT2, GPRInfo::regT1);
     slowCase.append(
         jit.branchPtr(
             CCallHelpers::NotEqual,
@@ -959,9 +960,7 @@ static MacroAssemblerCodeRef arrayIteratorNextThunkGenerator(VM* vm, ArrayIterat
     jit.load32(Address(SpecializedThunkJIT::regT4, JSArrayIterator::offsetOfNextIndex()), SpecializedThunkJIT::regT1);
     
     // Pull out the butterfly from iteratedObject
-    jit.loadPtr(Address(SpecializedThunkJIT::regT0, JSCell::structureOffset()), SpecializedThunkJIT::regT2);
-    
-    jit.load8(Address(SpecializedThunkJIT::regT2, Structure::indexingTypeOffset()), SpecializedThunkJIT::regT3);
+    jit.load8(Address(SpecializedThunkJIT::regT0, JSCell::indexingTypeOffset()), SpecializedThunkJIT::regT3);
     jit.loadPtr(Address(SpecializedThunkJIT::regT0, JSObject::butterflyOffset()), SpecializedThunkJIT::regT2);
     
     jit.and32(TrustedImm32(IndexingShapeMask), SpecializedThunkJIT::regT3);
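
The thunk generators above stop loading a Structure* straight out of the cell: the indexing type now comes from the cell header itself, and when the Structure is actually needed, emitLoadStructure goes through the per-VM table to turn the 32-bit ID back into a pointer. A rough standalone sketch of that indirection (the table type and function names here are illustrative, not the real StructureIDTable interface):

    #include <cstdint>
    #include <vector>

    struct Structure; // opaque stand-in for JSC::Structure

    // Illustrative per-VM table: a structure ID is simply an index into it.
    struct StructureIDTableSketch {
        std::vector<Structure*> table;

        uint32_t add(Structure* structure)
        {
            table.push_back(structure);
            return static_cast<uint32_t>(table.size() - 1);
        }

        Structure* get(uint32_t structureID) const { return table[structureID]; }
    };

    struct CellHeaderSketch {
        uint32_t structureID;
    };

    // What emitLoadStructure amounts to: read the 32-bit ID out of the cell
    // header, then index the table hanging off the VM to recover the pointer.
    inline Structure* loadStructureSketch(const StructureIDTableSketch& vmTable, const CellHeaderSketch& cell)
    {
        return vmTable.get(cell.structureID);
    }
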
index f09628f..568ca0f 100644 (file)
@@ -439,11 +439,12 @@ macro slowPathForCall(slowPath)
         end)
 end
 
-macro arrayProfile(structureAndIndexingType, profile, scratch)
-    const structure = structureAndIndexingType
-    const indexingType = structureAndIndexingType
-    storep structure, ArrayProfile::m_lastSeenStructure[profile]
-    loadb Structure::m_indexingType[structure], indexingType
+macro arrayProfile(cellAndIndexingType, profile, scratch)
+    const cell = cellAndIndexingType
+    const indexingType = cellAndIndexingType 
+    loadi JSCell::m_structureID[cell], scratch
+    storei scratch, ArrayProfile::m_lastSeenStructureID[profile]
+    loadb JSCell::m_indexingType[cell], indexingType
 end
 
 macro checkMarkByte(cell, scratch1, scratch2, continuation)
@@ -620,8 +621,8 @@ macro allocateJSObject(allocator, structure, result, scratch1, slowCase)
         storep scratch1, offsetOfFirstFreeCell[allocator]
     
         # Initialize the object.
-        storep structure, JSCell::m_structure[result]
         storep 0, JSObject::m_butterfly[result]
+        storeStructureWithTypeInfo(result, structure, scratch1)
     end
 end
 
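
allocateJSObject now leaves header initialization to the new storeStructureWithTypeInfo macro (defined in the per-width interpreter files below), which writes the structure reference and the inline type bytes together instead of storing a bare Structure*. A rough C++ picture of the header this sets up, using the field names and widths these .asm files access (m_structureID via loadi, m_indexingType/m_type/m_flags via byte ops); the exact layout and the collector's byte are assumptions, not JSC's real declarations:

    #include <cstdint>
    #include <cstring>

    // Illustrative cell header: 32-bit structure ID plus the inlined type bits.
    struct CellHeaderSketch {
        uint32_t structureID;
        uint8_t indexingType;
        uint8_t type;
        uint8_t flags;
        uint8_t gcByte; // placeholder for the byte the collector uses
    };

    // Illustrative view of Structure::m_blob: word1 is the structure's own ID,
    // word2 packs the type bytes that get copied into each cell's header.
    struct StructureIDBlobSketch {
        uint32_t word1;
        uint32_t word2;
    };

    // Roughly what the 32-bit storeStructureWithTypeInfo macro (next file) does:
    // store the structure reference, then copy the packed type bytes into the
    // header with a single 32-bit store.
    inline void storeStructureWithTypeInfoSketch(CellHeaderSketch& cell, uint32_t structureID, const StructureIDBlobSketch& blob)
    {
        cell.structureID = structureID;
        std::memcpy(&cell.indexingType, &blob.word2, sizeof blob.word2);
    }
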
index 5439e4e..e935f23 100644 (file)
@@ -531,6 +531,13 @@ macro loadConstantOrVariablePayloadUnchecked(index, payload)
         payload)
 end
 
+macro storeStructureWithTypeInfo(cell, structure, scratch)
+    storep structure, JSCell::m_structureID[cell]
+
+    loadi Structure::m_blob + StructureIDBlob::u.words.word2[structure], scratch
+    storei scratch, JSCell::m_indexingType[cell]
+end
+
 macro writeBarrierOnOperand(cellOperand)
     if GGC
         loadisFromInstruction(cellOperand, t1)
@@ -748,10 +755,9 @@ _llint_op_to_this:
     loadi 4[PC], t0
     bineq TagOffset[cfr, t0, 8], CellTag, .opToThisSlow
     loadi PayloadOffset[cfr, t0, 8], t0
-    loadp JSCell::m_structure[t0], t0
-    bbneq Structure::m_typeInfo + TypeInfo::m_type[t0], FinalObjectType, .opToThisSlow
+    bbneq JSCell::m_type[t0], FinalObjectType, .opToThisSlow
     loadpFromInstruction(2, t2)
-    bpneq t0, t2, .opToThisSlow
+    bpneq JSCell::m_structureID[t0], t2, .opToThisSlow
     dispatch(3)
 
 .opToThisSlow:
@@ -868,8 +874,7 @@ _llint_op_eq_null:
     loadi TagOffset[cfr, t0, 8], t1
     loadi PayloadOffset[cfr, t0, 8], t0
     bineq t1, CellTag, .opEqNullImmediate
-    loadp JSCell::m_structure[t0], t1
-    btbnz Structure::m_typeInfo + TypeInfo::m_flags[t1], MasqueradesAsUndefined, .opEqNullMasqueradesAsUndefined
+    btbnz JSCell::m_flags[t0], MasqueradesAsUndefined, .opEqNullMasqueradesAsUndefined
     move 0, t1
     jmp .opEqNullNotImmediate
 .opEqNullMasqueradesAsUndefined:
@@ -915,8 +920,7 @@ _llint_op_neq_null:
     loadi TagOffset[cfr, t0, 8], t1
     loadi PayloadOffset[cfr, t0, 8], t0
     bineq t1, CellTag, .opNeqNullImmediate
-    loadp JSCell::m_structure[t0], t1
-    btbnz Structure::m_typeInfo + TypeInfo::m_flags[t1], MasqueradesAsUndefined, .opNeqNullMasqueradesAsUndefined
+    btbnz JSCell::m_flags[t0], MasqueradesAsUndefined, .opNeqNullMasqueradesAsUndefined
     move 1, t1
     jmp .opNeqNullNotImmediate
 .opNeqNullMasqueradesAsUndefined:
@@ -942,10 +946,8 @@ macro strictEq(equalityOperation, slowPath)
     bineq t2, t3, .slow
     bib t2, LowestTag, .slow
     bineq t2, CellTag, .notString
-    loadp JSCell::m_structure[t0], t2
-    loadp JSCell::m_structure[t1], t3
-    bbneq Structure::m_typeInfo + TypeInfo::m_type[t2], StringType, .notString
-    bbeq Structure::m_typeInfo + TypeInfo::m_type[t3], StringType, .slow
+    bbneq JSCell::m_type[t0], StringType, .notString
+    bbeq JSCell::m_type[t1], StringType, .slow
 .notString:
     loadi 4[PC], t2
     equalityOperation(t0, t1, t0)
@@ -1225,8 +1227,7 @@ _llint_op_check_has_instance:
     traceExecution()
     loadi 12[PC], t1
     loadConstantOrVariablePayload(t1, CellTag, t0, .opCheckHasInstanceSlow)
-    loadp JSCell::m_structure[t0], t0
-    btbz Structure::m_typeInfo + TypeInfo::m_flags[t0], ImplementsDefaultHasInstance, .opCheckHasInstanceSlow
+    btbz JSCell::m_flags[t0], ImplementsDefaultHasInstance, .opCheckHasInstanceSlow
     dispatch(5)
 
 .opCheckHasInstanceSlow:
@@ -1240,15 +1241,14 @@ _llint_op_instanceof:
     loadi 12[PC], t0
     loadi 4[PC], t3
     loadConstantOrVariablePayload(t0, CellTag, t1, .opInstanceofSlow)
-    loadp JSCell::m_structure[t1], t2
-    bbb Structure::m_typeInfo + TypeInfo::m_type[t2], ObjectType, .opInstanceofSlow
+    bbb JSCell::m_type[t1], ObjectType, .opInstanceofSlow
     loadi 8[PC], t0
     loadConstantOrVariablePayload(t0, CellTag, t2, .opInstanceofSlow)
     
     # Register state: t1 = prototype, t2 = value
     move 1, t0
 .opInstanceofLoop:
-    loadp JSCell::m_structure[t2], t2
+    loadp JSCell::m_structureID[t2], t2
     loadi Structure::m_prototype + PayloadOffset[t2], t2
     bpeq t2, t1, .opInstanceofDone
     btinz t2, .opInstanceofLoop
@@ -1275,12 +1275,12 @@ _llint_op_is_undefined:
     storei t3, PayloadOffset[cfr, t0, 8]
     dispatch(3)
 .opIsUndefinedCell:
-    loadp JSCell::m_structure[t3], t1
-    btbnz Structure::m_typeInfo + TypeInfo::m_flags[t1], MasqueradesAsUndefined, .opIsUndefinedMasqueradesAsUndefined
+    btbnz JSCell::m_flags[t3], MasqueradesAsUndefined, .opIsUndefinedMasqueradesAsUndefined
     move 0, t1
     storei t1, PayloadOffset[cfr, t0, 8]
     dispatch(3)
 .opIsUndefinedMasqueradesAsUndefined:
+    loadp JSCell::m_structureID[t3], t1
     loadp CodeBlock[cfr], t3
     loadp CodeBlock::m_globalObject[t3], t3
     cpeq Structure::m_globalObject[t1], t3, t1
@@ -1318,8 +1318,7 @@ _llint_op_is_string:
     loadConstantOrVariable(t1, t0, t3)
     storei BooleanTag, TagOffset[cfr, t2, 8]
     bineq t0, CellTag, .opIsStringNotCell
-    loadp JSCell::m_structure[t3], t0
-    cbeq Structure::m_typeInfo + TypeInfo::m_type[t0], StringType, t1
+    cbeq JSCell::m_type[t3], StringType, t1
     storei t1, PayloadOffset[cfr, t2, 8]
     dispatch(3)
 .opIsStringNotCell:
@@ -1387,7 +1386,7 @@ macro getById(getPropertyStorage)
         t3,
         t0,
         macro (propertyStorage, scratch)
-            bpneq JSCell::m_structure[t3], t1, .opGetByIdSlow
+            bpneq JSCell::m_structureID[t3], t1, .opGetByIdSlow
             loadi 4[PC], t1
             loadi TagOffset[propertyStorage, t2], scratch
             loadi PayloadOffset[propertyStorage, t2], t2
@@ -1415,7 +1414,7 @@ _llint_op_get_array_length:
     loadi 8[PC], t0
     loadp 16[PC], t1
     loadConstantOrVariablePayload(t0, CellTag, t3, .opGetArrayLengthSlow)
-    loadp JSCell::m_structure[t3], t2
+    loadp JSCell::m_structureID[t3], t2
     arrayProfile(t2, t1, t0)
     btiz t2, IsArray, .opGetArrayLengthSlow
     btiz t2, IndexingShapeMask, .opGetArrayLengthSlow
@@ -1460,7 +1459,7 @@ macro putById(getPropertyStorage)
         t0,
         t3,
         macro (propertyStorage, scratch)
-            bpneq JSCell::m_structure[t0], t1, .opPutByIdSlow
+            bpneq JSCell::m_structureID[t0], t1, .opPutByIdSlow
             loadi 20[PC], t1
             loadConstantOrVariable2Reg(t2, scratch, t2)
             storei scratch, TagOffset[propertyStorage, t1]
@@ -1488,7 +1487,7 @@ macro putByIdTransition(additionalChecks, getPropertyStorage)
     loadi 16[PC], t1
     loadConstantOrVariablePayload(t3, CellTag, t0, .opPutByIdSlow)
     loadi 12[PC], t2
-    bpneq JSCell::m_structure[t0], t1, .opPutByIdSlow
+    bpneq JSCell::m_structureID[t0], t1, .opPutByIdSlow
     additionalChecks(t1, t3, .opPutByIdSlow)
     loadi 20[PC], t1
     getPropertyStorage(
@@ -1500,7 +1499,7 @@ macro putByIdTransition(additionalChecks, getPropertyStorage)
             storei t1, TagOffset[t3]
             loadi 24[PC], t1
             storei t2, PayloadOffset[t3]
-            storep t1, JSCell::m_structure[t0]
+            storep t1, JSCell::m_structureID[t0]
             dispatch(9)
         end)
 
@@ -1522,7 +1521,7 @@ macro structureChainChecks(oldStructure, scratch, slowPath)
     bieq Structure::m_prototype + TagOffset[oldStructure], NullTag, .done
 .loop:
     loadi Structure::m_prototype + PayloadOffset[oldStructure], protoCell
-    loadp JSCell::m_structure[protoCell], oldStructure
+    loadp JSCell::m_structureID[protoCell], oldStructure
     bpneq oldStructure, [scratch], slowPath
     addp 4, scratch
     bineq Structure::m_prototype + TagOffset[oldStructure], NullTag, .loop
@@ -1549,7 +1548,7 @@ _llint_op_get_by_val:
     traceExecution()
     loadi 8[PC], t2
     loadConstantOrVariablePayload(t2, CellTag, t0, .opGetByValSlow)
-    loadp JSCell::m_structure[t0], t2
+    loadp JSCell::m_structureID[t0], t2
     loadp 16[PC], t3
     arrayProfile(t2, t3, t1)
     loadi 12[PC], t3
@@ -1633,7 +1632,7 @@ _llint_op_get_by_pname:
     loadConstantOrVariablePayload(t0, CellTag, t2, .opGetByPnameSlow)
     loadi 20[PC], t0
     loadi PayloadOffset[cfr, t0, 8], t3
-    loadp JSCell::m_structure[t2], t0
+    loadp JSCell::m_structureID[t2], t0
     bpneq t0, JSPropertyNameIterator::m_cachedStructure[t3], .opGetByPnameSlow
     loadi 24[PC], t0
     loadi [cfr, t0, 8], t0
@@ -1675,7 +1674,7 @@ macro putByVal(holeCheck, slowPath)
     writeBarrierOnOperands(1, 3)
     loadi 4[PC], t0
     loadConstantOrVariablePayload(t0, CellTag, t1, .opPutByValSlow)
-    loadp JSCell::m_structure[t1], t2
+    loadp JSCell::m_structureID[t1], t2
     loadp 16[PC], t3
     arrayProfile(t2, t3, t0)
     loadi 8[PC], t0
@@ -1781,8 +1780,8 @@ macro equalNull(cellHandler, immediateHandler)
     loadi TagOffset[cfr, t0, 8], t1
     loadi PayloadOffset[cfr, t0, 8], t0
     bineq t1, CellTag, .immediate
-    loadp JSCell::m_structure[t0], t2
-    cellHandler(t2, Structure::m_typeInfo + TypeInfo::m_flags[t2], .target)
+    loadp JSCell::m_structureID[t0], t2
+    cellHandler(t2, JSCell::m_flags[t0], .target)
     dispatch(3)
 
 .target:
@@ -1912,8 +1911,7 @@ _llint_op_switch_char:
     loadp CodeBlock::RareData::m_switchJumpTables + VectorBufferOffset[t2], t2
     addp t3, t2
     bineq t1, CellTag, .opSwitchCharFallThrough
-    loadp JSCell::m_structure[t0], t1
-    bbneq Structure::m_typeInfo + TypeInfo::m_type[t1], StringType, .opSwitchCharFallThrough
+    bbneq JSCell::m_type[t0], StringType, .opSwitchCharFallThrough
     bineq JSString::m_length[t0], 1, .opSwitchCharFallThrough
     loadp JSString::m_value[t0], t0
     btpz  t0, .opSwitchOnRope
@@ -1961,9 +1959,9 @@ macro arrayProfileForCall()
     negi t3
     bineq ThisArgumentOffset + TagOffset[cfr, t3, 8], CellTag, .done
     loadi ThisArgumentOffset + PayloadOffset[cfr, t3, 8], t0
-    loadp JSCell::m_structure[t0], t0
+    loadp JSCell::m_structureID[t0], t0
     loadpFromInstruction(CallOpCodeSize - 2, t1)
-    storep t0, ArrayProfile::m_lastSeenStructure[t1]
+    storep t0, ArrayProfile::m_lastSeenStructureID[t1]
 .done:
 end
 
@@ -2026,8 +2024,7 @@ _llint_op_ret_object_or_this:
     loadi 4[PC], t2
     loadConstantOrVariable(t2, t1, t0)
     bineq t1, CellTag, .opRetObjectOrThisNotObject
-    loadp JSCell::m_structure[t0], t2
-    bbb Structure::m_typeInfo + TypeInfo::m_type[t2], ObjectType, .opRetObjectOrThisNotObject
+    bbb JSCell::m_type[t0], ObjectType, .opRetObjectOrThisNotObject
     doReturn()
 
 .opRetObjectOrThisNotObject:
@@ -2042,8 +2039,7 @@ _llint_op_to_primitive:
     loadi 4[PC], t3
     loadConstantOrVariable(t2, t1, t0)
     bineq t1, CellTag, .opToPrimitiveIsImm
-    loadp JSCell::m_structure[t0], t2
-    bbneq Structure::m_typeInfo + TypeInfo::m_type[t2], StringType, .opToPrimitiveSlowCase
+    bbneq JSCell::m_type[t0], StringType, .opToPrimitiveSlowCase
 .opToPrimitiveIsImm:
     storei t1, TagOffset[cfr, t3, 8]
     storei t0, PayloadOffset[cfr, t3, 8]
@@ -2071,7 +2067,7 @@ _llint_op_next_pname:
     storei t3, PayloadOffset[cfr, t1, 8]
     loadi 8[PC], t3
     loadi PayloadOffset[cfr, t3, 8], t3
-    loadp JSCell::m_structure[t3], t1
+    loadp JSCell::m_structureID[t3], t1
     bpneq t1, JSPropertyNameIterator::m_cachedStructure[t2], .opNextPnameSlow
     loadp JSPropertyNameIterator::m_cachedPrototypeChain[t2], t0
     loadp StructureChain::m_vector[t0], t0
@@ -2079,7 +2075,7 @@ _llint_op_next_pname:
 .opNextPnameCheckPrototypeLoop:
     bieq Structure::m_prototype + TagOffset[t1], NullTag, .opNextPnameSlow
     loadp Structure::m_prototype + PayloadOffset[t1], t2
-    loadp JSCell::m_structure[t2], t1
+    loadp JSCell::m_structureID[t2], t1
     bpneq t1, [t0], .opNextPnameSlow
     addp 4, t0
     btpnz [t0], .opNextPnameCheckPrototypeLoop
@@ -2324,7 +2320,7 @@ macro loadWithStructureCheck(operand, slowPath)
     loadisFromInstruction(operand, t0)
     loadp [cfr, t0, 8], t0
     loadpFromInstruction(5, t1)
-    bpneq JSCell::m_structure[t0], t1, slowPath
+    bpneq JSCell::m_structureID[t0], t1, slowPath
 end
 
 macro getProperty()
index 1f0225c..34fc791 100644 (file)
@@ -442,6 +442,29 @@ macro valueProfile(value, operand, scratch)
     storeq value, ValueProfile::m_buckets[scratch]
 end
 
+macro loadStructure(cell, structure)
+end
+
+macro loadStructureWithScratch(cell, structure, scratch)
+    loadp CodeBlock[cfr], scratch
+    loadp CodeBlock::m_vm[scratch], scratch
+    loadp VM::heap + Heap::m_structureIDTable + StructureIDTable::m_table[scratch], scratch
+    loadi JSCell::m_structureID[cell], structure
+    loadp [scratch, structure, 8], structure
+end
+
+macro loadStructureAndClobberFirstArg(cell, structure)
+    loadi JSCell::m_structureID[cell], structure
+    loadp CodeBlock[cfr], cell
+    loadp CodeBlock::m_vm[cell], cell
+    loadp VM::heap + Heap::m_structureIDTable + StructureIDTable::m_table[cell], cell
+    loadp [cell, structure, 8], structure
+end
+
+macro storeStructureWithTypeInfo(cell, structure, scratch)
+    loadq Structure::m_blob + StructureIDBlob::u.doubleWord[structure], scratch
+    storeq scratch, JSCell::m_structureID[cell]
+end
 
 # Entrypoints into the interpreter.
 
@@ -595,10 +618,10 @@ _llint_op_to_this:
     loadisFromInstruction(1, t0)
     loadq [cfr, t0, 8], t0
     btqnz t0, tagMask, .opToThisSlow
-    loadp JSCell::m_structure[t0], t0
-    bbneq Structure::m_typeInfo + TypeInfo::m_type[t0], FinalObjectType, .opToThisSlow
+    bbneq JSCell::m_type[t0], FinalObjectType, .opToThisSlow
+    loadStructureWithScratch(t0, t1, t2)
     loadpFromInstruction(2, t2)
-    bpneq t0, t2, .opToThisSlow
+    bpneq t1, t2, .opToThisSlow
     dispatch(3)
 
 .opToThisSlow:
@@ -713,11 +736,11 @@ macro equalNullComparison()
     loadisFromInstruction(2, t0)
     loadq [cfr, t0, 8], t0
     btqnz t0, tagMask, .immediate
-    loadp JSCell::m_structure[t0], t2
-    btbnz Structure::m_typeInfo + TypeInfo::m_flags[t2], MasqueradesAsUndefined, .masqueradesAsUndefined
+    btbnz JSCell::m_flags[t0], MasqueradesAsUndefined, .masqueradesAsUndefined
     move 0, t0
     jmp .done
 .masqueradesAsUndefined:
+    loadStructureWithScratch(t0, t2, t1)
     loadp CodeBlock[cfr], t0
     loadp CodeBlock::m_globalObject[t0], t0
     cpeq Structure::m_globalObject[t2], t0, t0
@@ -1054,8 +1077,7 @@ _llint_op_check_has_instance:
     traceExecution()
     loadisFromInstruction(3, t1)
     loadConstantOrVariableCell(t1, t0, .opCheckHasInstanceSlow)
-    loadp JSCell::m_structure[t0], t0
-    btbz Structure::m_typeInfo + TypeInfo::m_flags[t0], ImplementsDefaultHasInstance, .opCheckHasInstanceSlow
+    btbz JSCell::m_flags[t0], ImplementsDefaultHasInstance, .opCheckHasInstanceSlow
     dispatch(5)
 
 .opCheckHasInstanceSlow:
@@ -1067,24 +1089,23 @@ _llint_op_instanceof:
     traceExecution()
     # Actually do the work.
     loadisFromInstruction(3, t0)
-    loadisFromInstruction(1, t3)
     loadConstantOrVariableCell(t0, t1, .opInstanceofSlow)
-    loadp JSCell::m_structure[t1], t2
-    bbb Structure::m_typeInfo + TypeInfo::m_type[t2], ObjectType, .opInstanceofSlow
+    bbb JSCell::m_type[t1], ObjectType, .opInstanceofSlow
     loadisFromInstruction(2, t0)
     loadConstantOrVariableCell(t0, t2, .opInstanceofSlow)
     
     # Register state: t1 = prototype, t2 = value
     move 1, t0
 .opInstanceofLoop:
-    loadp JSCell::m_structure[t2], t2
-    loadq Structure::m_prototype[t2], t2
+    loadStructureAndClobberFirstArg(t2, t3)
+    loadq Structure::m_prototype[t3], t2
     bqeq t2, t1, .opInstanceofDone
     btqz t2, tagMask, .opInstanceofLoop
 
     move 0, t0
 .opInstanceofDone:
     orq ValueFalse, t0
+    loadisFromInstruction(1, t3)
     storeq t0, [cfr, t3, 8]
     dispatch(4)
 
@@ -1104,17 +1125,17 @@ _llint_op_is_undefined:
     storeq t3, [cfr, t2, 8]
     dispatch(3)
 .opIsUndefinedCell:
-    loadp JSCell::m_structure[t0], t0
-    btbnz Structure::m_typeInfo + TypeInfo::m_flags[t0], MasqueradesAsUndefined, .masqueradesAsUndefined
+    btbnz JSCell::m_flags[t0], MasqueradesAsUndefined, .masqueradesAsUndefined
     move ValueFalse, t1
     storeq t1, [cfr, t2, 8]
     dispatch(3)
 .masqueradesAsUndefined:
+    loadStructureWithScratch(t0, t3, t1)
     loadp CodeBlock[cfr], t1
     loadp CodeBlock::m_globalObject[t1], t1
-    cpeq Structure::m_globalObject[t0], t1, t3
-    orq ValueFalse, t3
-    storeq t3, [cfr, t2, 8]
+    cpeq Structure::m_globalObject[t3], t1, t0
+    orq ValueFalse, t0
+    storeq t0, [cfr, t2, 8]
     dispatch(3)
 
 
@@ -1147,8 +1168,7 @@ _llint_op_is_string:
     loadisFromInstruction(1, t2)
     loadConstantOrVariable(t1, t0)
     btqnz t0, tagMask, .opIsStringNotCell
-    loadp JSCell::m_structure[t0], t0
-    cbeq Structure::m_typeInfo + TypeInfo::m_type[t0], StringType, t1
+    cbeq JSCell::m_type[t0], StringType, t1
     orq ValueFalse, t1
     storeq t1, [cfr, t2, 8]
     dispatch(3)
@@ -1200,14 +1220,15 @@ macro getById(getPropertyStorage)
     # to take fast path on the new cache. At worst we take slow path, which is what
     # we would have been doing anyway.
     loadisFromInstruction(2, t0)
-    loadpFromInstruction(4, t1)
     loadConstantOrVariableCell(t0, t3, .opGetByIdSlow)
-    loadisFromInstruction(5, t2)
+    loadStructureWithScratch(t3, t2, t1)
+    loadpFromInstruction(4, t1)
+    bpneq t2, t1, .opGetByIdSlow
     getPropertyStorage(
         t3,
         t0,
         macro (propertyStorage, scratch)
-            bpneq JSCell::m_structure[t3], t1, .opGetByIdSlow
+            loadisFromInstruction(5, t2)
             loadisFromInstruction(1, t1)
             loadq [propertyStorage, t2], scratch
             storeq scratch, [cfr, t1, 8]
@@ -1233,7 +1254,7 @@ _llint_op_get_array_length:
     loadisFromInstruction(2, t0)
     loadpFromInstruction(4, t1)
     loadConstantOrVariableCell(t0, t3, .opGetArrayLengthSlow)
-    loadp JSCell::m_structure[t3], t2
+    move t3, t2
     arrayProfile(t2, t1, t0)
     btiz t2, IsArray, .opGetArrayLengthSlow
     btiz t2, IndexingShapeMask, .opGetArrayLengthSlow
@@ -1271,15 +1292,16 @@ macr