Table of Contents

Iranian Journal of Geophysics
Volume 6, No. 1 (Serial No. 12, Spring 2012)

  • Publication date: 1391/07/03 (Solar Hijri)
  • Number of articles: 10
  • Seyed Hani Motavalli Anbaran, Vahid Ebrahimzadeh Ardestani*, Herman Zeyen Pages 1-11
    In this study, simultaneous modeling of geoid and topography data, together with basic physical and mathematical concepts and local isostasy, is used to determine crustal thickness and a lithospheric model in geologically active regions, a topic of continuing interest to researchers. In addition to good accuracy and strong consistency with geological phenomena, the present method is faster and less costly than other methods. A further advantage is the possibility of examining a wide geographic area at once and comparing several different geological structures side by side.
    Modeling was carried out over the northern parts of the Iranian plateau, including the northern parts of Central Iran, the Alborz mountain range and the South Caspian Basin. The results clearly show crustal thickening beneath the Alborz, reaching about 55 km. The crust and lithosphere thin beneath Central Iran relative to the neighbouring regions, reaching about 36-38 km for the Moho depth and 140 km for the depth of the lithosphere-asthenosphere boundary. Although subduction of the Caspian plate beneath the Eurasian plate is clearly visible, it is not observed in the Kopet Dagh region.
    Keywords: Moho depth, lithosphere, geoid, topography, Alborz, South Caspian, Central Iran
  • Asghar Rastbood, Behzad Voosoghi Pages 12-33
    Understanding how points on the Earth's surface move and how their coordinates change with time is essential for many geodetic applications. The aim of this study is the time-dependent modeling of the displacement and coordinate changes of surface points caused by tectonic plate motions and earthquakes within the Iranian plateau. The presented model can also be used to predict coordinate changes of surface points, or geodetic observations (distances and angles), from one arbitrary epoch to another. The model accepts input coordinates in various ITRF reference frames or in WGS84 and, after the computations, delivers the output in any desired reference frame. The analytical relations of Okada (1985) are used to model steady plate motion, interseismic motion and coseismic motion. The root-mean-square (rms) error of modeling the secular and interseismic motions, for the model that best fits the GPS observations, was 0.35 mm/yr. The results show that faulting around the Arabian plate contributes more to the GPS velocity field of the Iranian non-permanent geodynamic network than faulting in the Anatolian plateau or even faulting inside Iran. Using precise geometric fault parameters of earthquakes obtained from InSAR observations through inverse problem solutions, and modeling postseismic motions with the Wang (2006) model, are recommended to complete the model and improve the accuracy of its outputs.
    Keywords: GPS velocity field, dislocation theory, tectonics, reference frame, ITRF, WGS84
  • Ahmad Ghorbani, Hossein Ali Ghari, Afshin Namiranian Pages 34-41
    The relation between the mechanical properties of rocks and their physical properties (geophysical methods) has recently attracted the attention of researchers; among these, electrical and seismic methods are the most widely used. In this paper, the passage of electric current during compressive loading is investigated in the laboratory.
    After installing special electrodes on seven nearly saturated core samples, uniaxial compressive strength tests and current injection through the samples were carried out simultaneously, and the changes in electrical resistivity during loading were measured.
    Sandstones showed an increase in resistivity, and fossiliferous limestones a decrease in resistivity, over the entire strain range. Travertines and limestone first showed an increase and then a decrease in resistivity with increasing strain. The resistivity behavior during loading was attributed to the closure of pores (decreasing porosity) at low strains and to the creation of induced fractures (increasing porosity) at higher strains.
    Keywords: electrical conductivity, uniaxial compressive strength test, elastic modulus, electrical resistivity, petrophysics
  • Victoria Ezzatian, Ebrahim Asadi Oskouei Pages 42-60
    Tropospheric ozone causes respiratory problems and affects vegetation. In this study, statistical models based on meteorological variables and atmospheric pollutants are presented for predicting variations of tropospheric ozone concentration in Isfahan at hourly and daily time scales; they cover a wide range of multivariate linear regression models. The results showed significant correlations between ozone variations and the meteorological variables and pollutants, but none of the models could explain a large share of the variance of the measured tropospheric ozone in Isfahan. A nonlinear bivariate model, although able to capture the general pattern of the intrinsic ozone oscillations, could not serve as a suitable model for predicting tropospheric ozone concentration because of irregular fluctuations in the hourly data. Most models showed that increases in temperature and humidity make the largest contribution to tropospheric ozone formation, and that sea-level pressure is of little use in point analyses. Increasing concentrations of nitrogen oxide compounds also increase tropospheric ozone production. On the daily scale, carbon monoxide and temperature provided the best explanation of tropospheric ozone concentration.
    Keywords: linear regression, tropospheric ozone, multivariate model, statistical method
  • Amin Roshandel Kahoo, Ali Nejati Kalateh* Pages 61-68

    Because seismic data become non-stationary as they pass through the earth, time-frequency transforms are widely used in processing and interpreting these data. Various methods have been introduced for the time-frequency representation of signals, each with its own strengths and weaknesses. Tools that preserve the strengths of these methods while removing their weaknesses are therefore very useful.
    In this paper, a two-dimensional deconvolution operator based on the Wigner-Ville distribution is used to deconvolve the short-time Fourier transform spectrogram, yielding a time-frequency representation with high resolution and no cross terms. The performance of this spectral decomposition method was examined on synthetic data and compared with the results of other common methods. The method was also used, in comparison with the short-time Fourier transform, to detect low-frequency shadows associated with gas reservoirs in one of the gas fields of southwestern Iran. The results show that the method has high resolution.

    Keywords: spectral decomposition, deconvolution, Wigner-Ville distribution, short-time Fourier transform, low-frequency shadow
  • Kamal Alamdar, Abolghasem Kamkar-Rouhani, Abdolhamid Ansari Pages 69-83
    In this paper, an automatic method based on derivatives of the analytic signal is presented for interpreting magnetic anomalies. The main advantage of the method is that it yields a linear relation for determining the location parameters (depth and edges) of the bodies producing the magnetic anomaly without requiring knowledge of the body geometry. Once the vertical and horizontal location parameters of the subsurface body are known, its geometry can also be estimated. The method was applied to magnetic field data from synthetic two-dimensional models, both noise-free and contaminated with noise, and was also applied successfully to the three-dimensional Bishop model. Application of the method to real magnetic data from the Jalal-Abad iron ore deposit in Zarand confirms the available drilling results.
    Keywords: analytic signal, location parameters, Bishop, Jalal-Abad Zarand
  • Reza Mohebian, Mostafa Yari, Mohammad Ali Riahi* Pages 84-94

    One of the earliest time-frequency methods for spectral decomposition is the short-time Fourier transform (STFT), in which a suitable window of fixed length is used. Seismic waves are non-stationary, with a frequency content that varies with time, so methods in which the window length varies with frequency should be used. One such method is the S-transform, in which the time and frequency resolution changes across the time-frequency plane to achieve a multi-resolution analysis. Another, more recent method is matching pursuit decomposition (MPD), whose algorithm uses the Wigner-Ville distribution. In this paper, single-frequency seismic sections obtained from time-frequency analyses of seismic sections are used to detect channels.

    Keywords: non-stationary signal models, time-frequency spectrum, continuous wavelet transform, direct hydrocarbon-reservoir indicator, matching pursuit, seismic signal
  • Ali Hamidi Habib, Mohammad Ali Riahi*, Gholamhossein Norouzi Pages 95-106

    The main goal of this paper was to estimate porosity in one of Iran's offshore fields using new methods of reservoir parameter estimation, and it was shown how modern neural-network methods, by combining seismic information with well-log data, can lead to better maps of the porosity model in reservoirs. By integrating seismic and logging data and determining an effective correlation between well data and seismic information, a mathematical relation was obtained for extracting reservoir parameters from the seismic information at the well locations; this relation was then extended to the whole seismic survey area within the reservoir zone of interest to produce initial maps of porosity over the reservoir. In the next step, using geostatistics with the initial maps as the secondary variable in collocated cokriging, better maps of the porosity distribution were obtained.

    Keywords: porosity, petrophysical parameter, seismic attribute, neural network, multivariate regression, geostatistics
  • Masoumeh Ahmadi-Hojat, Farhang Ahmadi-Givi Pages 107-127
    In this study, by identifying the 25 strongest Siberian High events in a 60-year period (1948 to 2008), the structure of this atmospheric system and some of the factors contributing to its intensification are examined from dynamical and thermodynamical points of view, using NCEP/NCAR data for the study period. The results show that the downstream part of the Siberian High has a thermal structure, while its upstream part has a dynamical structure. It is also observed that the Siberian High is not merely a surface-confined system and can be linked to meteorological fields in the upper atmosphere, the tropopause fluctuation during its intensification being one example. Moreover, the structure of the wind circulation is such that, although the Siberian High is generally known as a surface anticyclone, at upper levels a cyclone forms in its downstream part and an anticyclone in its upstream part. Another result is that, during the intensification stage of the Siberian High, a quasi-stationary external Rossby wave train is present in the upper atmosphere, such that one of its ridges becomes blocked in the upstream part of the high-pressure center. It appears that the coupling and interaction between the anticyclonic anomalies caused by cold surface air and the circulations caused by the potential-vorticity anomalies of the upper-level Rossby wave train play an effective role in intensifying the Siberian High and the blocked ridge located upstream of it.
    Keywords: Siberian High, potential vorticity, tropopause folding, quasi-stationary external Rossby wave, blocking
  • Amin Abbassi, Mohammad Tatar, Mohammad Reza Abbassi, Farzam Yaminifard Pages 128-146

    The Mosha fault is one of the important faults threatening the megacity of Tehran (the capital of Iran). Before this study, instrumental seismological knowledge of the region was limited to sparse data with location errors, especially in depth, and a small number of focal mechanisms related to the Mosha trend. In the present study, micro-earthquakes in the east of the southern flank of the central Alborz, particularly the eastern part of the Mosha fault, were recorded and processed using a dense temporary local network. After reading the crustal phases (Pg, Sg), earthquakes with suitable azimuthal coverage and negligible timing-residual and location errors were selected and used to determine the wave velocity ratio (Vp/Vs) and to compute a one-dimensional velocity model of the upper crust. Then, by precisely locating reliable micro-earthquakes and determining their focal mechanisms, the seismicity, the style of faulting, the fault geometry and the seismotectonic setting of the region were analyzed.

    Keywords: Mosha fault, microseismicity, crustal velocity structure, focal mechanism
  • Seyed Hani Motavalli Anbaran, Vahid Ebrahimzadeh Ardestani*, Herman Zeyen Pages 1-11
    Geological events, and the curiosity of the human mind to comprehend these phenomena, compel researchers to investigate their structures and tectonic evolution. Two key parameters for better understanding these subjects are the Moho depth (the boundary between crust and mantle) and the lithosphere-asthenosphere boundary (LAB). Several methods, such as seismic, magnetotelluric and volcanologic studies, can give us some knowledge of these key parameters, and each has advantages and disadvantages. In the seismological method, a period of about six months is needed to be sure that a reasonable quantity and quality of events has been detected and that enough data have been recorded during the research. The high cost of instruments and the lack of access roads in high topography limit the ability of this method to capture the data researchers seek. These problems also exist in seismic-based methods; the seismic method is generally expensive. Moreover, this method is nearly blind at lithospheric depths such as the LAB. We therefore introduce another method that uses potential-field data. Our data were topography and geoid undulation, mainly observed by satellites. The method used in this study relies on some basic concepts, such as local isostasy, as well as basic physical and mathematical rules, relations and equations (a minimal sketch of the isostatic idea follows this entry). Our topography data came from the newly released global topography database ETOPO1, with a spatial resolution of 1 arc-minute. The geoid undulation was calculated by a spherical harmonic expansion up to order 2159 and degree 2190 from the Earth Gravitational Model 2008 (EGM2008). To avoid the effects of anomalies deeper than the LAB, wavelengths greater than 4000 km were removed. This method has several advantages, such as a higher calculation speed, so that a large region can be examined at a fraction of the cost of other methods. Modeling was done over a very substantial area in the northern part of the Iranian plateau that included the northern part of Central Iran, the Alborz Mountains and the South Caspian Basin. The results show evidence of thickening of the crust up to ~55 km underneath the Alborz Mountains, although many previous researchers concluded there were no roots there. The other outcome of this method was the thinning of the crust and lithosphere underneath the northern part of Central Iran (36-38 km for the crust and 140 km for the LAB) relative to the surrounding area. Our results correlate well with some previous work and with geological evidence. Subduction of the South Caspian Basin (probably oceanic crust) underneath the Eurasian plate is clearly visible; however, this activity was not recognized in the Kopet Dagh.
    Keywords: Moho depth, Lithosphere, Geoid, topography, Alborz, South Caspian, Central Iran
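    The abstract above quotes the resulting Moho depths but not the modeling equations; the local-isostasy idea it builds on can be illustrated with the classical Airy-root approximation, in which a topographic load is compensated by a crustal root. The sketch below is illustrative only: the densities, reference thickness and sample elevations are assumptions, not values taken from the paper.

        # Airy-type local isostasy: a load of elevation h (km) is compensated by a
        # crustal root r = h * rho_c / (rho_m - rho_c), so the total crustal
        # thickness is T = T_ref + h + r.  All numerical values are illustrative.

        RHO_CRUST = 2800.0   # crustal density, kg/m^3 (assumed)
        RHO_MANTLE = 3300.0  # mantle density, kg/m^3 (assumed)
        T_REF = 35.0         # reference crustal thickness at sea level, km (assumed)

        def airy_crustal_thickness(elevation_km: float) -> float:
            """Crustal thickness (km) under local Airy isostasy for a given elevation."""
            root = elevation_km * RHO_CRUST / (RHO_MANTLE - RHO_CRUST)
            return T_REF + elevation_km + root

        if __name__ == "__main__":
            for name, h in [("Central Iran, ~1.0 km elevation", 1.0),
                            ("Alborz, ~3.0 km elevation", 3.0)]:
                print(f"{name}: ~{airy_crustal_thickness(h):.1f} km crust")

    With these assumed densities, a 3-km mountain range implies a crust of roughly 55 km, the same order as the value the authors report beneath the Alborz.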
  • Asghar Rastbood, Behzad Voosoghi Pages 12-33
    Being familiar with the modes of motion and coordinate changes of points on the Earth's surface as a function of time is very important and essential for different types of geodetic applications. The purpose of this research is the time-dependent modeling of the displacement and coordinate changes of points on the Earth's crustal surface due to plate tectonic motions and earthquakes in the region of the Iranian plateau. The provided model can be used to predict the coordinate changes of surface crustal points, or to predict geodetic observations (distance and angle), from one arbitrary epoch to another. This model receives the coordinates in various ITRF or WGS84 reference frames and, after the computations are made, the results can be provided in any reference frame. The Bursa-Wolf seven-parameter conformal model was used to transform three-dimensional Cartesian coordinates between WGS84 and ITRF2000. In the absence of crustal motion, the equations for transforming positional coordinates from one ITRF to another are rather familiar to the surveying community, i.e. a seven-parameter transformation. In the presence of crustal motion, the transformation equations can be generalized to allow one frame to move relative to the other, so that each of the seven defining parameters becomes a function of time. Therefore, in the modeling, fourteen transformation parameters were used to transform the ITRF2000 reference frame to the earlier and later reference frames (a minimal sketch of such a time-dependent transformation follows this entry). The Okada (1985) analytical model was used to model sudden coseismic motions due to earthquakes as well as interseismic motions. In previous works (Pearson, 2010; Meade, 2005), a block model was used for secular and interseismic deformation modeling, but in this research we used the Okada (1985) analytical model for this purpose because (1) modeling the present-day velocity field determined with GPS networks incorporates geological constraints on the geometry of the main structures and on the long-term deformations; and (2) regions between the major faults are not rigid, so the modeling allows for internal deformations. The result is a tectonic model for the Arabia-Eurasia oblique collision zone in Iran that is more realistic than the rigid block model. This model shows that about 30% of the GPS velocity field components are produced by faults inside Iran, 60% by the Arabian plate and 10% by the Anatolian plate. Continuous use of GPS data and local network observations is recommended to obtain a more precise model of the secular and interseismic motions. Using more precise geometric faulting parameters of earthquakes, obtained by inverse problem solutions based on GPS or InSAR observations, is also recommended to get more precise outputs. Postseismic motions were not modeled in this research, since this effect is a function of time, its amplitude is considerable only for large earthquakes, and its magnitude decreases with time. Nevertheless, modeling the postseismic deformation due to intense earthquakes with large focal depths using the Wang (2006) model is recommended. In this research, only the effects of secular, interseismic and coseismic motions were included in the model. To complete the model, it is recommended to consider the effects of crustal motions associated with land subsidence, volcanic activity, postglacial rebound, etc.
    Keywords: GPS velocity field, Dislocation theory, Tectonics, reference frame, ITRF, WGS84
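    The fourteen-parameter (time-dependent Bursa-Wolf) frame transformation mentioned above can be sketched in a few lines. This is a generic sketch: the parameter values in the example are made up for illustration and are not the published ITRF2000 parameters used in the paper.

        import numpy as np

        def helmert_14(xyz, t, t0, T, Tdot, D, Ddot, R, Rdot):
            """Time-dependent (14-parameter) Helmert transformation.

            xyz    : (3,) Cartesian coordinates in the source frame, metres
            t, t0  : observation epoch and reference epoch, decimal years
            T, Tdot: translations (m) and their rates (m/yr)
            D, Ddot: scale factor (unitless) and its rate (1/yr)
            R, Rdot: small rotation angles (rad) and their rates (rad/yr)
            """
            dt = t - t0
            Tt = np.asarray(T, float) + dt * np.asarray(Tdot, float)
            Dt = D + dt * Ddot
            Rt = np.asarray(R, float) + dt * np.asarray(Rdot, float)
            # standard Bursa-Wolf small-angle rotation matrix
            rot = np.array([[0.0,   -Rt[2],  Rt[1]],
                            [Rt[2],  0.0,   -Rt[0]],
                            [-Rt[1], Rt[0],  0.0]])
            xyz = np.asarray(xyz, float)
            return xyz + Tt + Dt * xyz + rot @ xyz

        if __name__ == "__main__":
            # Illustrative (made-up) parameters, NOT the published ITRF values.
            p = np.array([4200e3, 3370e3, 3700e3])   # a point roughly in Iran, metres
            print(helmert_14(p, t=2012.0, t0=2000.0,
                             T=[0.01, 0.02, -0.03], Tdot=[0.0, 0.0, 0.001],
                             D=1.5e-9, Ddot=0.0,
                             R=[0.0, 0.0, 2e-8], Rdot=[0.0, 0.0, 1e-9]))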
  • Ahmad Ghorbani, Hossein Ali Ghari, Afshin Namiranian Pages 34-41
    The study of the physical properties of rocks (through geophysical methods) in relation to their mechanical properties has recently received much attention. Recent studies show that geophysical methods, especially seismic and geoelectric methods, are able to estimate mechanical parameters and recognize their spatial variations, including anisotropy. Among these, electrical and seismic methods are the most widely used. Electrical measurement is a non-invasive geophysical method commonly used by engineers working in fields such as mining, geotechnical, civil and underground engineering, as well as oil, gas and mineral exploration. This method can be applied both in the laboratory and in the field. Numerous scientists have focused on the relation between resistivity and porosity; however, there are very few studies on the relation between electrical resistivity and rock properties other than porosity. In this paper, changes in the electrical conductivity of rocks during a uniaxial compression test were investigated in the laboratory. The uniaxial compressive strength, elastic modulus and density values of the samples were determined in the laboratory. We installed special electrodes on seven nearly saturated core samples in order to measure the resistivity. The core samples had a 52-mm diameter and a 110-mm length (the conversion from measured resistance to resistivity for this geometry is sketched after this entry). Two-electrode and four-electrode arrays were both used in the resistivity monitoring in the laboratory. Using a four-electrode array minimized undesirable electrode polarization effects. In the four-electrode array, we used two non-polarizing Ag/AgCl electrodes mounted on the core sample. Our laboratory observations showed that there was no electrode polarization effect. When we used a two-electrode array, the resistivity changes were less than 5 percent compared with a four-electrode array. In our laboratory investigation, we used different sedimentary core samples, including sandstone, fossiliferous limestone and travertine. The maximum resistivity observed for the travertine core sample was less than 12 kohm. During the uniaxial compression test, deformation measurements were made and the stress-strain curves were plotted. Tangent Young's modulus values were obtained from the stress-strain curves at a stress level equal to 50% of the ultimate uniaxial compressive strength. The sandstone core samples showed a resistivity increase over the whole strain range. On the contrary, the fossiliferous limestone samples (a thin section showed that the sample was composed of tiny calcareous fossils in a fine aggregate of micrite cement) showed a resistivity decrease over the whole strain range. Travertine and limestone showed an intermediate behavior (resistivity increased at lower strains and decreased at higher strains); in these samples, the onset of new crack formation occurs well inside the quasi-linear part of the stress-strain curve. The quasi-linear portion of the stress-strain curve was the result of a competition between the closure of one population of cracks and the growth and propagation of new and existing cracks. The resistivity behavior during uniaxial compressive loading is closely related to the closure of pores at lower strains and then to the new induced fractures at higher strains. Our results showed that the electrical resistivity may be a representative measure of rock properties; additionally, the effect of certain minerals on the rock's resistivity must be taken into account. The results indicated that the rock structure had an important effect on the resistivity behavior during mechanical loading.
    Keywords: Electrical Conductivity, uniaxial compression test, elasticity modulus, electrical resistivity, petrophysics
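    The resistivity values discussed above follow from the measured resistance and the cylindrical core geometry (52 mm diameter, 110 mm length) through rho = R·A/L. A small sketch; the example resistance is an assumed value, not a measurement from the paper.

        import math

        def core_resistivity(resistance_ohm: float,
                             diameter_m: float = 0.052,
                             length_m: float = 0.110) -> float:
            """Resistivity (ohm.m) of a cylindrical core sample: rho = R * A / L."""
            area = math.pi * (diameter_m / 2.0) ** 2
            return resistance_ohm * area / length_m

        if __name__ == "__main__":
            # Example: an assumed measured resistance of 10 kohm
            print(f"{core_resistivity(10_000.0):.1f} ohm.m")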
  • Victoria Ezzatian, Ebrahim Asadi Oskouei Pages 42-60
    Tropospheric ozone is one of the main causes of respiratory problems and it damages vegetation. In this research, statistical models based on a wide variety of regression models are presented in order to evaluate surface ozone concentrations on hourly and daily scales in Isfahan, using meteorological variables and pollutant gases as predictors (a minimal regression sketch follows this entry). Although none of the meteorological variables and pollutant gas levels could explain the measured ozone variations in Isfahan, the results show that there are significant correlations between them and the ozone variations. A nonlinear bivariate model can reproduce the general ozone fluctuations but, because of irregular fluctuations in the hourly data, it cannot be a proper predictor. Most of the models assigned the largest influence on surface ozone production to air temperature and humidity, and indicated that mean surface pressure does not play an important role in the point analysis. Increasing the oxide compounds of nitrogen also increases ozone production. On the daily scale, carbon monoxide and temperature provided the best interpretation of the ozone concentration. The aim of this research was to present a consistent evaluation of surface ozone using statistical methods. At the beginning, the population of ozone samples, pollutant gases and corresponding meteorological data was assessed, and the correlation between the ozone level and each of them, or groups of them, was tested step by step. Most of the data did not follow a normal distribution, so at different stages some operations were necessary to bring the data closer to normality. In this paper, the data from meteorological and pollution observations were used as predictors. The station was located at 32.62°N, 51.66°E at an elevation of 1550 m. The data consisted of: (a) data from the pollution stations: surface ozone, CO, SO2, NO, NOx, NO2; (b) meteorological data: air temperature, relative humidity, wind speed, solar radiation and air pressure. Reviewing the time series of the ozone data (24 hours) showed that there was a daily sinusoidal cycle in the ozone concentration, and a sinusoidal model can easily express the ozone amount as a function of the hour of the day. Although a sinusoidal curve fitted the daily curve of the ozone concentration well, random fluctuations in the daily average were seen. These irregularities caused difficulties in presenting a single proper model of the daily cycle of the ozone concentrations. In the next stage, an equation was obtained by combining the daily and hourly equations to describe the ensemble daily and hourly cycle of the ozone concentrations. Analysis of the results of the regression models shows that, among the three equations, the best one was obtained with the stepwise method. Then, using a backward method, 13 equations were obtained. None of these equations could fully explain the surface ozone variations on the daily scale, which may be because of the influence of other, unknown variables, or because of the nonlinear nature of the relations between ozone levels and the predictors, even though the data were preprocessed to get closer to a normal distribution. For this purpose, both logarithmic and squared forms of the data were also used, even though they could not make a considerable change in transforming the data to normal distributions; all of them were used beside the raw data to form more regression models.
    It should be noted that the nature of these kinds of data, which require complicated processes to be produced, makes the correlation coefficients less strong. The resulting equations in this paper showed that the applied operations could not normalize the distributions of the data. The existence of a nonlinear relation between the ozone levels and the studied variables can be a reason for the weakness of these models. In previous studies, the highest determination coefficient was 0.36 (Alexandrof, 2005); in this paper, the best equation showed nearly the same value (r = 0.304). With the backward method, a higher coefficient was obtained (r = 0.592), but because of the length and size of the equation it is not usable. Although the regression models and the principal component analysis showed a strong ability to interpret the surface ozone fluctuations and predict its concentration, the number of their independent variables prevented them from being useful enough from an application viewpoint.
    Keywords: Linear Regression, tropospheric ozone, multivariate model
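    A minimal sketch of the kind of multivariate linear regression used in this study, fitting ozone concentration against meteorological and pollutant predictors by ordinary least squares and reporting the fraction of variance explained. The synthetic data and coefficients below are placeholders, not the Isfahan measurements.

        import numpy as np

        def fit_ols(X, y):
            """Ordinary least squares with intercept; returns coefficients and R^2."""
            A = np.column_stack([np.ones(len(y)), X])      # add intercept column
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
            r2 = 1.0 - resid.var() / y.var()
            return coef, r2

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            n = 500
            temp = rng.normal(25, 8, n)     # air temperature (placeholder data)
            rh = rng.normal(40, 15, n)      # relative humidity (placeholder data)
            nox = rng.normal(60, 20, n)     # NOx concentration (placeholder data)
            ozone = 0.8 * temp + 0.1 * rh + 0.3 * nox + rng.normal(0, 10, n)
            coef, r2 = fit_ols(np.column_stack([temp, rh, nox]), ozone)
            print("coefficients:", np.round(coef, 3), " R^2 =", round(r2, 3))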
  • Amin Roshandel Kahoo, Ali Nejati Kalateh Pages 61-68

    Because of the non-stationary nature of seismic signals, time-frequency transforms have been widely used in seismic data processing and interpretation. Spectral decomposition can reveal characteristics that are not easily observed in the time representation or the frequency representation alone. Conventional spectral decompositions such as the short-time Fourier transform (STFT) and the Wigner-Ville distribution (WVD) have some restrictions, such as the Heisenberg uncertainty principle and cross terms. In this paper, we used the deconvolution of a short-time Fourier transform spectrogram to overcome these restrictions (a minimal sketch follows this entry). The resolution of a time-frequency representation using the STFT is strongly dependent on the length of the window function: a short window gives good time resolution but poor frequency resolution, while a long window gives poor time resolution but good frequency resolution. No window function is used to calculate the WVD of a signal, so the WVD has high resolution in time and frequency simultaneously. However, the existence of cross terms has limited the application of this distribution (Boashash, 2003). Auger et al. (1996) introduced the smoothed pseudo WVD (SPWVD) to eliminate these cross terms; smoothing the WVD leads to a trade-off between time-frequency resolution and cross-term elimination. When the smoothing function is the WVD of the window function used in the STFT, the SPWVD becomes the STFT spectrogram (Qiang and Wen-kai, 2010). There are no cross terms in the STFT spectrogram, yet it has low resolution both in time and frequency. Therefore, a 2D deconvolution operator can be used to generate a high-resolution time-frequency representation of the signal with no cross terms. To perform the 2D deconvolution, we used the iterative Lucy-Richardson algorithm. The resulting spectrogram after 2D deconvolution is termed the deconvolutive STFT, or DSTFT. The efficiency of this method was evaluated by applying it to both synthetic and real seismic data. The synthetic example shows that the deconvolutive short-time Fourier transform spectrogram (DSTFT) has a resolution as fine as that of the Wigner-Ville distribution (WVD) but with no cross terms. Castagna et al. (2003) used spectral decomposition to detect low-frequency shadows associated with hydrocarbons. In fact, these shadows are often related to additional energy occurring at low frequencies, rather than to preferential attenuation of higher frequencies; one possible explanation is that they are locally converted shear waves that have traveled mostly as P-waves and thus arrive slightly after the true primary event. We used the deconvolutive short-time Fourier transform spectrogram to illuminate the low-frequency shadow corresponding to a gas reservoir in one of the gas fields in southwest Iran. The real data example shows that the DSTFT analysis has a much better resolution than conventional spectral decomposition and can potentially be used to detect hydrocarbons in a gas reservoir directly using low-frequency shadows.

    Keywords: spectral decomposition, Deconvolution, Wigner–Ville distribution, short-time Fourier transform, low-frequency shadow
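    A minimal sketch of the deconvolutive-STFT idea described above: compute an STFT spectrogram with a Gaussian window and then sharpen it by Richardson-Lucy (Lucy-Richardson) deconvolution. The smoothing kernel is approximated here by a separable 2-D Gaussian (the Wigner-Ville distribution of a Gaussian window is itself a 2-D Gaussian); the test signal, window length and kernel width are illustrative choices, not the parameters used in the paper.

        import numpy as np
        from scipy.signal import stft
        from skimage.restoration import richardson_lucy

        fs = 500.0
        t = np.arange(0, 2.0, 1.0 / fs)
        # synthetic non-stationary signal: a chirp-like component plus a constant tone
        x = np.sin(2 * np.pi * (50 + 30 * t) * t) + np.sin(2 * np.pi * 120 * t)

        # STFT spectrogram with a Gaussian analysis window
        f, tt, Z = stft(x, fs=fs, window=("gaussian", 16), nperseg=128, noverlap=120)
        spec = np.abs(Z) ** 2
        spec /= spec.max()

        def gaussian_psf(size=15, sigma=2.5):
            """Separable 2-D Gaussian standing in for the WVD of the analysis window."""
            ax = np.arange(size) - size // 2
            g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
            psf = np.outer(g, g)
            return psf / psf.sum()

        # 2-D Richardson-Lucy deconvolution of the spectrogram
        dstft = richardson_lucy(spec, gaussian_psf(), 30, clip=False)
        print(spec.shape, dstft.shape)   # same time-frequency grid, sharper image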
  • Kamal Alamdar, Abolghasem Kamkar-Rouhani, Abdolhamid Ansari Pages 69-83
    Aeromagnetic surveys play an important role in the exploration of natural resources of economic interest, as well as in regional geologic mapping. Magnetic anomalies caused by lateral variations of magnetization in the earth's crust are often characterized by smooth regional gradients with isolated features. The main goal of magnetic prospecting is to infer both the geometry and the magnetization of the geologic structure that causes the observed magnetic anomalies. However, as with other potential-field methods, the interpretation of magnetic field anomalies is non-unique, because more than one distribution of magnetization (i.e., magnetic dipole moment per unit volume) and source geometry can explain the same observed magnetic anomaly. One important goal in the interpretation of magnetic data is to determine the geometry and the location of the magnetic source. This has recently become particularly important because of the large volumes of magnetic data being collected for environmental and geological applications. To this end, a variety of semi-automatic methods based on derivatives of the magnetic field have been developed to determine magnetic source parameters such as the locations of boundaries and depths. As faster computers and commercial software have become widely available, these techniques are being used more extensively. Utilizing first-order derivatives of the magnetic field, Euler deconvolution was first applied to profile data and subsequently to gridded data, and has come into wide use as an aid for interpreting magnetic data. The main advantage of the Euler method is that it can provide automatic estimates of the source location of the causative magnetic anomalies. However, it requires an assumption about the geometry of the body that is the actual source. In practice, this assumption is made by specifying a structural index to define the source geometry in generalized situations, setting a good strategy for discriminating, and selecting meaningful solutions. Recent extensions to the Euler method allow the structural index to be estimated from the data through the calculation of Hilbert transforms of the derivatives. The SPI method, which requires second-order derivatives of the field, uses a term known as the local wavenumber to provide a rapid estimate of the depth of buried magnetic bodies; the local wavenumber is defined as the spatial derivative of the local phase. The SPI method works on gridded data but assumes a contact model (structural index of zero). Later extensions to the method enabled the structural index to be calculated, but these required third-order derivatives. The calculation of third-order derivatives from gridded data is problematic, so the use of profile data was advocated by Smith et al. (2005). In a more recent paper, a linearized least-squares method was applied to obtain information about the depth and nature of the buried sources from first- and second-order derivatives of the field (the analytic signal and its horizontal gradient). However, that approach requires knowledge of the horizontal position of the source, inferred from the peak of the analytic signal. Inappropriate sampling of the data and/or noise can make the selection of the horizontal position inaccurate, and these inaccuracies lead to errors in the estimation of both the depth and the nature of the sources.
    To overcome the limitations of the previous studies and to improve the estimation of source parameters using the analytic signal approach, an automatic method is presented to estimate the horizontal location, depth and nature of 2D magnetic sources using derivatives of the analytic signal (a minimal sketch of the analytic-signal amplitude computation follows this entry). Derivatives of the field up to only the second order are used. First, a generalized equation is derived and solved using the least-squares method to provide the source location parameters without any a priori information about the nature of the source. Then, using the estimated source location parameters, the nature of the source is obtained. To implement the method, the anomalies are first identified using the analytic signal peak. The method is then applied to a data window around the peak, where the signal-to-noise ratios of both the analytic signal derivative and the horizontal gradient of the analytic signal are relatively high. The number of data selected is determined by the quality of the data and the interference from nearby sources: the optimum number of selected data is small enough to see only a single anomaly and large enough to contain sufficient variation of the anomaly within the window. In this study, data for which the analytic signal values are greater than 10% of the peak value were used within each window. The presented method was applied successfully to synthetic magnetic data from 2D models with random noise as well as to the 3D synthetic Bishop model. In the synthetic examples, we tested the feasibility of the proposed method using theoretical anomalies of 2D magnetic models buried at different depths; these models were a horizontal cylinder with an infinite horizontal extent and a thin dike with an infinite depth extent. The total-field anomaly values were calculated along a 100 km profile striking south-north at intervals of 1 km. Good results were obtained on a real magnetic data set from an ore field in Jalal-Abad, Iran, which correlate broadly with the drilling results. The results obtained by the proposed method were used as a starting point in 2D modeling, which shows a good fit with the measured profile.
    Keywords: Analytic signal, location parameters, Bishop, Jalal-Abad
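    The analytic-signal amplitude on which the method is built can be computed from a total-field profile with a Hilbert transform: for 2D (profile) data the vertical derivative is, up to sign, the Hilbert transform of the horizontal derivative, and the amplitude peaks above the source. The sketch below uses a toy anomaly, not one of the paper's models.

        import numpy as np
        from scipy.signal import hilbert

        # Toy total-field anomaly along a profile (illustrative values only)
        x = np.linspace(0.0, 100.0, 1001)                        # km
        depth = 5.0                                              # assumed source depth, km
        T = 100.0 * depth ** 2 / ((x - 50.0) ** 2 + depth ** 2)  # nT

        dT_dx = np.gradient(T, x)            # horizontal derivative
        dT_dz = np.imag(hilbert(dT_dx))      # vertical derivative via Hilbert transform (up to sign)
        analytic_amplitude = np.hypot(dT_dx, dT_dz)

        peak_x = x[np.argmax(analytic_amplitude)]
        print(f"analytic-signal peak at x = {peak_x:.1f} km (source placed at x = 50 km)")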
  • Reza Mohebian, Mostafa Yari, Mohammad Ali Riahi Pages 84-94

    Spectral decomposition has provided a means of observing features in seismic data that are not always clear in the time domain. There are several approaches that can be used to produce a spectral decomposition of a seismic trace. A case history of using spectral decomposition and coherency to interpret incised valleys was shown by Peyton et al. (1998). Partyka et al. (1999) used a windowed spectral analysis to produce discrete-frequency energy cubes for applications in reservoir characterization. The continuous wavelet transform (CWT) was introduced by Morlet et al. (1982); in the CWT, time-frequency atoms are chosen in such a way that their time support changes with frequency according to Heisenberg's uncertainty principle (Mallat, 1999; Daubechies, 1992). In the early years, transforming seismic traces into the time-frequency domain was done with a windowed Fourier transform, called the short-time Fourier transform (STFT). In the STFT, the time-frequency resolution is fixed over the entire time-frequency space by preselecting a window length, so the resolution of the seismic data analysis depends on a user-specified window length. This problem can be resolved by the S-transform, in which the window is adapted at any given moment to the frequency being analysed; the time and frequency resolution varies across the time-frequency plane to obtain a multi-resolution analysis (a minimal S-transform sketch follows this entry). The CWT commonly used in data compression decomposes the signal from the time domain to a time-scale domain using orthogonal wavelets that vary in length and frequency by a factor of two. In contrast, the S-transform decomposes the signal from the time domain to a time-frequency domain using a non-orthogonal, variable-size Morlet wavelet (Mallat, 1999; Stockwell, 1996; Sinha et al., 2005). While computationally more intensive than the orthogonal wavelet transform, the non-orthogonal S-transform provides added time and frequency resolution valuable for interpretation. Stockwell et al. (1996) interpreted the S-transform as a combination of the CWT and the STFT; the difference between the S-transform and the STFT is that the Gaussian window is a function of time and frequency for the S-transform, while it is only a function of time for the STFT. Another method of spectral decomposition is matching pursuit decomposition (MPD). MPD involves a cross-correlation of a wavelet dictionary against the seismic trace; the projection of the best-correlating wavelet on the seismic trace is then subtracted from that trace. The wavelet dictionary is then cross-correlated against the residual, and again the best-correlated wavelet projection is subtracted. The process is repeated iteratively until the energy left in the residual falls below some acceptable threshold (Castagna, 2006). MPD has recently been used in seismic signal analysis (Castagna et al., 2003; Liu and Marfurt, 2005). Wang et al. (2007) applied an inverse-Q filter to seismic signals to show the existence of a low-frequency shadow. One application of spectral decomposition is the detection of thin geological structures such as buried channels. Filled with porous rocks and located in a non-porous environment, channels can be good hosts for hydrocarbon reserves; for this reason, detection of the boundaries of these channels and of the lithologic features inside them has long been essential in exploring for such reserves.
    In this paper, we investigated the application and efficiency of the S-transform and MPD methods in the time-frequency analysis of seismic sections in order to delineate and detect a buried channel in the Sarvak Formation in one of the oil fields located in southwest Iran. The results from the MPD and the S-transform were compared with those of the STFT applied to single-frequency seismic sections at 15 Hz and 25 Hz.

    Keywords: non-stationary signal models, time-frequency spectrum, continuous wavelet transform, direct hydrocarbon-reservoir attribute, matching pursuit, seismic signal
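    A minimal frequency-domain sketch of the S-transform discussed above, using the shifted-spectrum formulation with a frequency-dependent Gaussian window; the wrap-around of the Gaussian is handled approximately, and the test signal is an illustrative placeholder, not one of the paper's seismic traces.

        import numpy as np

        def s_transform(x):
            """Discrete Stockwell (S-) transform of a 1-D signal.

            Returns an array of shape (N//2 + 1, N): rows are frequency bins from
            zero to Nyquist, columns are time samples.  Implementation:
            S[n] = IFFT_m( X[(m + n) mod N] * exp(-2*pi^2*m^2 / n^2) ).
            """
            x = np.asarray(x, dtype=float)
            N = len(x)
            X = np.fft.fft(x)
            Xc = np.concatenate([X, X])        # makes the spectral shift a simple slice
            m = np.arange(N)
            k = np.minimum(m, N - m)           # approximate circular distance in frequency
            S = np.zeros((N // 2 + 1, N), dtype=complex)
            S[0] = np.mean(x)                  # zero-frequency row: the signal mean
            for n in range(1, N // 2 + 1):
                gauss = np.exp(-2.0 * np.pi ** 2 * k ** 2 / n ** 2)
                S[n] = np.fft.ifft(Xc[n:n + N] * gauss)
            return S

        if __name__ == "__main__":
            fs = 200.0
            t = np.arange(0, 2.0, 1.0 / fs)
            # two tones occupying different halves of the record (placeholder signal)
            sig = np.where(t < 1.0, np.sin(2 * np.pi * 20 * t), np.sin(2 * np.pi * 60 * t))
            S = s_transform(sig)
            f_hz = np.arange(S.shape[0]) * fs / len(sig)
            # dominant frequency near 0.25 s should lie close to the 20 Hz tone
            print("time-frequency panel:", S.shape)
            print("dominant frequency at 0.25 s:", f_hz[np.argmax(np.abs(S[:, 50]))], "Hz")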
  • Ali Hamidi Habib, Mohammad Ali Riahi, Gholamhossein Norozi Pages 95-106

    During the last decade, there has been an increasing interest in the use of attributes derived from 3-D seismic data to define reservoir properties, such as the presence and amount of porosity and fluid content. It is therefore worthwhile to continue advancing the study and application of expert systems in the petroleum industry so that these attributes can be used more effectively in reservoir characterization. The establishment of an intelligent formulation between two sets of data (inputs/outputs) has been the main topic of such studies. One such topic of great interest is the characterization of 3D seismic data in relation to lithology, rock type, fluid content, porosity, shear wave velocity and other reservoir properties. Petrophysical parameters, such as water saturation and porosity, are very important for hydrocarbon reservoir characterization. Several researchers have endeavored to predict them from seismic data using statistical methods and intelligent systems (Russell et al., 2002; Russell et al., 2003; Chopra and Marfurt, 2006). Correct recognition of the porosity model and estimation of the petrophysical parameters of a reservoir is a key issue in any oil project; a correct estimate of porosity can inform decisions that carry high financial risk, such as drilling. By determining reservoir characteristics and assessing petrophysical parameters with adequate accuracy during the first steps of a study, researchers can plan optimum exploitation with a minimum number of wells. This paper focuses on the link between seismic attributes and reservoir properties such as lithology, porosity and pore-fluid saturation. Typically, seismic attributes have been the only information obtainable from seismic data. Using statistical rock physics, seismic attributes that are direct (analytically defined) functions of the elastic properties can be probabilistically transformed, sample by sample and independently of each other, into reservoir properties. In this paper, we combine the methods of geostatistics and multiattribute prediction for the integration of seismic and well-log data, and illustrate this procedure with a case study. A number of new ideas are developed for the statistical determination of reservoir parameters using seismic attributes, combining the classical techniques of multivariate statistics and the more recent methods of neural network analysis. We first extract average porosity values in the zone of interest and then compare these values to average seismic attributes over the same zone. The technique of cross-validation is subsequently used to show which attributes are significant (a minimal sketch of this step follows this entry). We then apply the results of the training and cross-validation to data slices derived from both the seismic data cube and the inverted cube to produce an initial porosity map. Finally, we improve the fit between the well-log values and the porosity map using co-kriging. The main purpose of this paper is to present a quantitative assessment of porosity as a petrophysical parameter in an offshore oil field in Iran using the newly proposed method of reservoir parameter estimation. The paper shows that by using both seismic and well-logging data it is possible to obtain a more accurate model of porosity in a given reservoir.
    Specifically, the study determines the relationship between a set of seismic attributes and a reservoir parameter such as porosity at the well locations, and then uses this relationship to compute reservoir parameters from the seismic attributes throughout the seismic volume. A preliminary porosity map is thus available for the study area. In the next step, using geostatistics and the initial map as a secondary variable in collocated cokriging, we can arrive at a more accurate map of the porosity distribution. In effect, the proposed method combines geostatistics with multiattribute transforms; it uses multivariate statistics and neural networks to improve the secondary data set used in the collocated cokriging technique.

    Keywords: porosity, petrophysical parameters, multiattribute transforms, neural network, multivariate statistics, geostatistics
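    A minimal sketch of the multiattribute step described above: regress well-log porosity on seismic attributes extracted at the well locations and rank attribute subsets by leave-one-out cross-validation error. All data below are synthetic placeholders, and the attribute names are assumptions.

        import numpy as np

        def loocv_rms(X, y):
            """Leave-one-out cross-validation RMS error of a linear porosity predictor."""
            n = len(y)
            errors = []
            for i in range(n):
                mask = np.arange(n) != i
                A = np.column_stack([np.ones(n - 1), X[mask]])
                coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
                pred = np.concatenate([[1.0], X[i]]) @ coef
                errors.append(pred - y[i])
            return float(np.sqrt(np.mean(np.square(errors))))

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            n_wells = 40
            impedance = rng.normal(size=n_wells)   # placeholder attribute values
            amplitude = rng.normal(size=n_wells)
            inst_freq = rng.normal(size=n_wells)
            porosity = 0.20 - 0.03 * impedance + 0.01 * amplitude + rng.normal(0, 0.01, n_wells)

            X_all = np.column_stack([impedance, amplitude, inst_freq])
            for k in range(1, X_all.shape[1] + 1):
                err = loocv_rms(X_all[:, :k], porosity)
                print(f"first {k} attribute(s): LOOCV RMS error = {err:.4f}")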
  • Masomeh Ahmadi-Hojat, Farhang Ahmadi-Givi Pages 107-127
    The purpose of this paper was to study the dynamical and thermodynamical structures of the Siberian high pressure system (SH) and some of the parameters effective in its development. The data used were the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis wintertime data for a 60-year period (1948-2008). Using the Siberian High Index (SHI), the 25 strongest cases were selected from the study period (a minimal sketch of this selection and compositing step follows this entry). The fields investigated included mean sea-level pressure, lower- and upper-tropospheric geopotential heights, wind, temperature and potential vorticity (PV), as well as the pressure field on the tropopause surface (PV = 2 PVU) and the wave-activity vector. The results showed that, in the sea-level pressure field, the Siberian high strengthens around its climatological position during the developing stage until the peak time. After that, the high starts to extend and its central cell divides into two distinct cells, one moving southeastward into the Far East with a consequent cold surge there, while the ridge of the other cell extends westward toward Europe and the northeast of Iran. The composite maps of the anomalies suggest that the vertical structures of the SH are different in the downstream and upstream portions of the surface high. A noticeable feature is that the downstream portion of the SH exhibits a thermal structure, while its upstream portion shows a dynamical structure. In addition, although the SH is generally recognized as an anticyclonic circulation in the lower troposphere, the vertical structure of the wind anomalies indicates that there are cyclonic and anticyclonic circulations in the upper troposphere in the downstream and upstream parts, respectively, of the central area (40-65°N, 80-120°E) of the SH. At the amplification stage of the SH, the appearance of negative pressure anomalies over the Mediterranean Sea implies that this stage can enhance favorable conditions for cyclogenesis over the Mediterranean, indicating that the SH can affect meteorological fields outside its source area. Another finding was that the SH may have profound effects on meteorological fields in the middle and upper troposphere; examples include the occurrence of a tropopause folding on the downstream side of the SH and the formation of a blocking ridge, as part of a quasi-stationary external Rossby wave train, on the upstream side when the surface high is amplified. Calculation of the horizontal component of the wave-activity flux for the stationary Rossby wave revealed that the Rossby wave originated from the Euro-Atlantic sector and that the blocking ridge was a component of this approaching wave. Also, during the development of the blocking ridge, the wave-activity flux diverges from the negative height anomalies located upstream of the ridge and converges into the amplifying blocking ridge. Evaluating each term of the horizontal temperature advection from the composite fields at the 850 hPa level showed that the advection of the basic-state temperature by the wind anomalies plays an important role in developing the surface cold high throughout the amplification stage of the Siberian high.
    Finally, a qualitative analysis showed that the coupling between the negative PV anomalies at the surface, due to the low-level cold anomalies, and the upper-level positive PV anomalies, due to the tropopause folding, leads to the amplification of the SH as well as of the blocking ridge. The main conclusion was that the SH is not simply a local thermal system with effects restricted to the lower troposphere.
    Keywords: Siberian high, potential vorticity, tropopause folding, quasi-stationary external Rossby wave, blocking
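    A minimal sketch of how the 25 strongest Siberian High winters could be selected with a Siberian High Index (area-mean sea-level pressure over the system's central area, taken here as the 40-65°N, 80-120°E box quoted above) and then composited. The random fields below are placeholders for the NCEP/NCAR reanalysis data.

        import numpy as np

        # Placeholder wintertime-mean sea-level pressure fields, one per year,
        # on a 2.5-degree Northern-Hemisphere grid: shape (years, lat, lon).
        rng = np.random.default_rng(2)
        years = np.arange(1948, 2009)
        lat = np.arange(90.0, -1.0, -2.5)
        lon = np.arange(0.0, 360.0, 2.5)
        slp = 101300.0 + rng.normal(0.0, 800.0, size=(len(years), len(lat), len(lon)))

        # Siberian High Index: area-mean SLP over the central SH box
        lat_mask = (lat >= 40.0) & (lat <= 65.0)
        lon_mask = (lon >= 80.0) & (lon <= 120.0)
        shi = slp[:, lat_mask][:, :, lon_mask].mean(axis=(1, 2))

        # Select the 25 strongest events and form a composite anomaly map
        strongest = np.argsort(shi)[-25:]
        composite_anomaly = slp[strongest].mean(axis=0) - slp.mean(axis=0)
        print("strongest winters (5 of 25):", years[strongest][:5])
        print("composite anomaly field shape:", composite_anomaly.shape)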
  • Amin Abbassi, Mohammad Tatar, Mohammad Reza Abbassi, Farzam Yaminifard Pages 128-146

    The Mosha fault is one of the most important and threatening faults for the Tehran megacity (the capital of Iran). Instrumental seismology in the region was previously limited to insufficient data with location errors, especially in depth, as well as a small number of available focal mechanisms associated with the Mosha trend (Hedayati et al., 1976; Ashtari et al., 2005). In the present study, by installing 48 local temporary seismological stations from June to November 2006, micro-earthquakes around the eastern part of the southern flank of the central Alborz, particularly the Mosha fault zone, were recorded and processed. The local temporary network consisted of 24 one-component (vertical) 2-Hz TAD sensors, 11 three-component MiniTitan 5-s sensors and 13 three-component Guralp 6TD 0.02-10 s sensors. Sampling rates were 100 samples/s for the Guralp sensors and 125 samples/s for the others, in continuous and trigger-threshold modes. A set of 115 well-recorded micro-earthquakes with appropriate azimuthal coverage (gap ≤ 180°) and negligible timing residuals and location errors (RMS ≤ 0.3 s, Erh ≤ 2 km and Erz ≤ 3 km) was selected and used to calculate the wave velocity ratio (Vp/Vs) following the Wadati (1933) and Chatelain (1978) approaches (1549 P-wave and 1495 S-wave arrival times); a minimal Wadati-type sketch follows this entry. A 1-D model of the upper crustal velocity structure was determined as well. The SEISAN software (Havskov and Ottemöller, 2005) was used for phase readings, Hypo71 (Lee and Lahr, 1975) and Hypocenter (Barry, 1994) for event locations, VELEST (Kissling, 1988) for the layered crustal velocity model, and the FOCMEC program (Snoke, 2003) for focal mechanism solutions. Four crustal layers at depths of 3, 7, 16 and 24 km were determined, with P-wave velocities of 5.4, 5.8, 6.1 and 6.25 km/s, respectively. Accurate locations of 553 micro-earthquakes, together with 15 class-A (excellent) and 31 class-B (good) focal mechanism solutions of reliable micro-earthquakes with high-quality P-wave first-arrival polarities (more than eight Pg onset signs), were obtained for the analysis of the seismicity, the fault geometry and movements, and the seismotectonic interpretation. We found that the eastern part of the Mosha fault, located between longitudes 51.7° and 52.5°, has a steep northward dip and complex focal mechanisms. The fault mechanisms vary from thrust, through strike-slip with a small reverse component, to reverse with a small normal component from west to east. From a grouping analysis of the focal-mechanism P (and T) axes, approximate strikes of N40° (and N130°) were derived for the compression (and tension) directions. The focal mechanisms, together with the geodynamic analyses from GPS measurements in the study area, reveal slip partitioning on the local and regional scale, compatible with some conclusions of previous studies (Ritz et al., 2006; Tatar et al., 2007). Although it remains an important and open question whether the stress-axis orientations obtained from micro-earthquakes and from large earthquakes are consistent (Mercier et al., 1991; Hatzfeld et al., 1999), the seismotectonic strain analyses of the micro-earthquakes in the study area give results similar to the stress analyses of large earthquakes obtained with the stress-inversion method of Gillard and Wyss (1995). In addition, this study has revealed a seismically active trend, referred to as Jajrood-Pardis-Absard, in the south of the Mosha fault.
    The concentrated seismic activity around the Mosha fault during the recording period shows that it is a potential hazard for the study area.

    Keywords: Mosha fault, microearthquake, crustal velocity structure, Focal mechanism
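    The Vp/Vs estimate mentioned above can be reproduced, in its simplest Wadati (1933) form, by fitting a straight line of S-P times against P arrival times for one event; the slope is Vp/Vs - 1 and is independent of the unknown origin time. The synthetic picks, noise levels and the 1.73 ratio below are illustrative assumptions, not values from the paper.

        import numpy as np

        def wadati_vp_vs(tp, ts):
            """Estimate Vp/Vs from the P and S arrival times of a single event.

            Fits (ts - tp) = (Vp/Vs - 1) * (tp - t0); the slope of S-P time
            against P arrival time gives Vp/Vs - 1.
            """
            sp_times = np.asarray(ts) - np.asarray(tp)
            slope, _ = np.polyfit(np.asarray(tp), sp_times, 1)
            return 1.0 + slope

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            t0 = 12.0                                   # unknown origin time, s
            travel_p = rng.uniform(2.0, 15.0, size=20)  # P travel times to 20 stations, s
            vpvs_true = 1.73                            # assumed true ratio
            tp = t0 + travel_p + rng.normal(0.0, 0.02, 20)
            ts = t0 + vpvs_true * travel_p + rng.normal(0.0, 0.05, 20)
            print(f"estimated Vp/Vs = {wadati_vp_vs(tp, ts):.3f}")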