Table of Contents

Journal of the Earth and Space Physics, Volume 40, Issue 4 (Winter 1393 / 2014-2015)


  • Publication date: 1393/10/27 (Solar Hijri)
  • Number of titles: 12
  • Kourosh Emami, Esmaeil Bayramnejad, Mohammadreza Gheitanchi Pages 1-14
    One of the important applications of seismology is locating earthquake hypocenters. Greater precision in this process yields better results in other studies, such as investigations of crustal structure and regional seismicity and determination of the causative fault plane. Classical location methods are based on Geiger's (1910) equations. In this approach, the relation between arrival time and hypocentral coordinates is linearized using the first term of the Taylor expansion together with a known velocity model for the region. In the linearized methods, the optimal solution is obtained by minimizing the difference between the calculated and observed data through iterative least squares. Often, because of poor azimuthal coverage and reliance on travel-time data alone, considerable error arises in locating earthquake hypocenters. To reduce this location error, researchers use the azimuth and the ray parameter to optimize the results.
    This study seeks to optimize the locations of seismic events within 44°-50°E and 36°-40°N in northwestern Iran. A set of 11,200 earthquakes of magnitude 1.4 or greater on the Nuttli scale, occurring between 2006 and 2013, was located first using P-wave travel times alone and then using travel time, ray parameter, and P-wave azimuth. The results show that the ray parameter and azimuth, used together with travel time, reduce the horizontal location error of earthquakes by a considerable margin.
    Keywords: Seismicity of Azerbaijan, azimuth, ray parameter, confidence ellipse, travel time
  • Seyed Hasan Mousavi Bafrouei, Noorbakhsh Mirzaei*, Elham Shabani, Morteza Eskandari-Ghadi Pages 15-38

    To evaluate strong-motion levels on bedrock for peak ground acceleration and spectral values across Iran, seismic hazard has been estimated using a modified probabilistic method. A uniform catalog of historical and instrumental earthquakes through the end of 2012 was compiled for the region bounded by 22°-42°N and 42°-66°E. A total of 238 potential seismic sources were delineated as area zones and the seismicity parameters of each were evaluated. A spatial distribution function was used to estimate earthquake occurrence rates for different magnitude intervals in each potential source. Using the OpenQuake software, seismic hazard was estimated for Iran on a grid of more than 40,000 points, with three attenuation relations and two seismicity models combined in a logic tree. Hazard maps were produced for PGA and for SA at periods of 0.2 and 2 s on bedrock with 5% damping, for 10% and 63% probabilities of exceedance in 50 years. For the 475-year return period, PGA across the country varies from 0.63g in the northeast of Lorestan Province to 0.1g in central Iran. Among the provincial capitals, Bandar Abbas and Tabriz have the highest bedrock PGA (0.35g) and Yazd and Esfahan the lowest (0.11g) for the 475-year return period. The results are compared with the seismic hazard zoning map of Iran (Standard 2800, 3rd edition) and with the work of other researchers.

    Keywords: Seismic hazard assessment, strong ground motion, seismicity, potential seismic sources, spectral acceleration
  • Yousef Mohammadi, Hamidreza Siahkoohi* Pages 39-52

    Spectral decomposition is a powerful tool for analyzing the time-varying nature of non-stationary signals. Given the non-stationary character of seismic data, it is widely used in their analysis, processing, and interpretation. Achieving high resolution at low computational cost is a fundamental goal in this field. The short-time Fourier transform and its spectrogram is the first and fastest method offered, but its resolution is limited. The nonlinear Wigner-Ville distribution is another approach, with superior resolution in the time-frequency plane, but because of its nonlinear nature its performance suffers from cross-terms. Several methods have been introduced to improve the resolution of time-frequency representations of non-stationary signals, among them the deconvolutive spectrogram and the reassigned spectrogram; to increase resolution, both attempt to bring the resolution of the spectrogram close to that of the Wigner-Ville distribution. In this paper we first introduce a modified form of the deconvolutive spectrogram and then compare its performance with the reassigned spectrogram. Results on synthetic and real data show the superiority of the proposed method: despite its increased resolution, it is free of unwanted and unrealistic artifacts on seismic data, although its computational cost is somewhat higher than that of the reassignment method.

    Keywords: Spectral decomposition, improved deconvolutive spectrogram, Lucy-Richardson algorithm, reassigned spectrogram
  • Milad Takalu, Behrooz Oskooi, Banafsheh Habibian Dehkordi Pages 53-65
    The overall system of faults around Sabalan peak can be regarded as linear and arcuate. Faults with linear structure, trending mainly NW-SE and N-S, are observed south of the peak. The arcuate faults around the Sabalan volcano roughly coincide with the caldera and have allowed intrusion of the volcanic body. Magnetotelluric data were modeled along profile MM, which crosses both fault systems of the region. Examination and interpretation of the MM resistivity section obtained from inversion of the magnetotelluric data clearly show that the faults express themselves in the section as offsets and abrupt changes in the resistivity anomalies and as conductive, low-resistivity zones along the profile.
    Keywords: Sabalan, fault, magnetotellurics, resistivity section, inversion
  • Abdolreza Safari, Mohammad Ali Sharifi, Saeed Farzaneh* Pages 67-82

    Global Positioning System receivers aboard gravimetry satellites measure valuable three-dimensional satellite position information almost continuously. The satellite velocity, however, which has very important applications in satellite geodesy, cannot be measured directly. For instance, recovering the Earth's gravity field by satellite methods requires the satellite velocity or acceleration, so that the field can be determined through the energy integral method or methods based on the equation of motion; the velocity must therefore be determined by other means. The common approach to estimating the velocity vector is numerical differentiation. In the present study, the extended Kalman filter is introduced for computing the velocity vector from the time series of satellite position vectors, and it is compared with Newton-Raphson numerical differentiation, which previous research has identified as the best numerical method. The numerical results of this paper show that the proposed method yields more acceptable accuracy than other numerical differentiation methods.

    Keywords: Low Earth orbit satellites, Earth's gravity field, K-band ranging system, extended Kalman filter, numerical differentiation
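The idea of replacing numerical differentiation with a Kalman filter can be illustrated on a 1-D toy orbit: a constant-velocity state model driven only by noisy position measurements. The noise levels, time step, and along-track speed below are assumptions of this sketch; the paper applies an extended filter to real 3-D GRACE positions.

```python
import numpy as np

# 1-D constant-velocity Kalman filter recovering velocity from noisy
# positions; a toy stand-in for the paper's extended filter on GRACE orbits.
rng = np.random.default_rng(0)
dt, n = 1.0, 200
true_v = 7.6                                  # km/s, LEO-like speed (assumed)
t = np.arange(n) * dt
z = true_v * t + rng.normal(0.0, 0.05, n)     # noisy position series

F = np.array([[1.0, dt], [0.0, 1.0]])         # state transition (pos, vel)
H = np.array([[1.0, 0.0]])                    # we observe position only
Q = 1e-6 * np.eye(2)                          # process noise (assumed)
R = np.array([[0.05**2]])                     # measurement noise

x = np.array([[0.0], [0.0]])                  # initial state guess
P = np.eye(2) * 10.0
for zk in z:
    x, P = F @ x, F @ P @ F.T + Q             # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (np.array([[zk]]) - H @ x)    # update with the new position
    P = (np.eye(2) - K @ H) @ P

print(round(float(x[1, 0]), 2))               # estimated velocity ≈ 7.6
```

Unlike a finite-difference derivative, the filter carries a full error covariance, which is what makes it attractive when the positions are unevenly accurate.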
  • Abdolreza Safari, Sabah Ramouz*, Abbasali Jomegi Pages 83-98

    One common method for modeling the Earth's gravity field is least-squares collocation. An advantage of least-squares collocation is that heterogeneous geodetic observations can be used simultaneously in the modeling. Gravity field modeling with least-squares collocation employs the remove-restore technique: the effects of a global geopotential model and of topography are removed from the observations and, after the problem is solved, restored. In the topographic reduction step, the crustal density is conventionally taken as 2.67 g/cm³. This paper investigates improving local modeling of the Earth's gravity field with least-squares collocation by adopting the actual regional density together with a global model, and by simultaneously using gravity anomaly and GPS/Leveling observations. To assess the proposed approach, the gravity field was modeled in the coastal Pars region using the actual density of that region (2.30 g/cm³) along with gravity anomaly and GPS/Leveling data. The results show that the proposed method improves the gravity field modeling, achieving a relative accuracy of 0.05 m in geoid determination at the control points of the study area.

    Keywords: Gravity field modeling, least-squares collocation (LSC), remove-restore technique, covariance function modeling, degree variance model, density, global geopotential model, geoid determination
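Least-squares collocation predicts the residual field at new points from noisy observations via s_pred = C_pm (C_mm + D)^-1 l. The sketch below does this in 1-D with an assumed Gaussian covariance model and a synthetic sine signal; the paper instead fits an empirical covariance to real gravity data.

```python
import numpy as np

# Minimal least-squares collocation: predict a residual signal at new
# points from noisy observations. The covariance model is an assumption.
rng = np.random.default_rng(1)

def cov(x1, x2, c0=1.0, L=10.0):
    """Assumed Gaussian covariance between two sets of 1-D points."""
    d = np.abs(x1[:, None] - x2[None, :])
    return c0 * np.exp(-(d / L) ** 2)

x_obs = np.linspace(0, 100, 40)
signal = np.sin(2 * np.pi * x_obs / 60.0)       # synthetic residual signal
noise_var = 0.01
l = signal + rng.normal(0, np.sqrt(noise_var), x_obs.size)  # observations

x_new = np.linspace(0, 100, 11)
C_mm = cov(x_obs, x_obs) + noise_var * np.eye(x_obs.size)   # signal + noise
C_pm = cov(x_new, x_obs)
s_pred = C_pm @ np.linalg.solve(C_mm, l)        # LSC prediction

err = np.max(np.abs(s_pred - np.sin(2 * np.pi * x_new / 60.0)))
print(err < 0.2)                                # close to the true signal
```

In the remove-restore setting, `l` would be the observations after subtracting the global model and topographic effect, and the restored quantities are added back to `s_pred`.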
  • Shima Soleimani, Omid Asghari, Mohammad Kazem Hafizi Pages 99-110
    The intrusion of saline water into fresh groundwater in coastal areas has become a concern for many countries. Given the importance of securing freshwater resources in such regions, and the risk that drilled water wells may encounter saline zones, an accurate three-dimensional model of the distribution of saline and fresh water is essential. Obtaining such a model requires reliable measurements and complete, thorough interpretation. The aim of this study is to use geostatistical methods to build a three-dimensional representation of resistivity survey results and to delineate the saline and freshwater zones of the aquifer. The apparent resistivity data required for this study were acquired with a Schlumberger array in the Borazjan plain of Bushehr. After interpretation of the raw data, the true resistivity values of the aquifer were introduced as inputs to the sequential Gaussian simulation algorithm and the data were simulated 100 times. Validation of the results shows that the 100 realizations of resistivity are valid. E-type and probability maps of the aquifer resistivity were then drawn. These maps present a three-dimensional model of the true resistivity distribution of the aquifer and of the probability of saline versus fresh water. Comparison of the simulated model with an existing well in the area indicates close agreement between the model and reality.
    Keywords: Geoelectrics, geostatistics, saline and fresh water, simulation, variogram
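Sequential Gaussian simulation, the algorithm named in the abstract, visits grid nodes along a random path, kriges a mean and variance from all data seen so far, draws a value, and conditions on it. A minimal 1-D version with simple kriging, an assumed exponential covariance, and two made-up "well" values is sketched below; the study works in 3-D with variograms fitted to real resistivity data.

```python
import numpy as np

# Sketch of sequential Gaussian simulation (SGS) on a 1-D grid.
# Covariance model and conditioning values are assumptions of this toy.
rng = np.random.default_rng(2)
grid = np.arange(0.0, 50.0, 1.0)

def cov(h, c0=1.0, a=8.0):
    """Assumed exponential covariance of the normal-score field."""
    return c0 * np.exp(-np.abs(h) / a)

def sgs_once(cond_x, cond_v):
    x_known, v_known = list(cond_x), list(cond_v)
    sim = np.full(grid.size, np.nan)
    for i in rng.permutation(grid.size):        # random path over the grid
        xk, vk = np.array(x_known), np.array(v_known)
        C = cov(xk[:, None] - xk[None, :])
        c = cov(grid[i] - xk)
        w = np.linalg.solve(C + 1e-9 * np.eye(xk.size), c)
        mu = w @ vk                             # simple kriging mean (mean 0)
        var = max(cov(0.0) - w @ c, 1e-9)       # simple kriging variance
        sim[i] = rng.normal(mu, np.sqrt(var))   # draw, then condition on it
        x_known.append(grid[i]); v_known.append(sim[i])
    return sim

cond_x, cond_v = [10.0, 40.0], [1.5, -1.0]      # "well" data in normal scores
reals = np.array([sgs_once(cond_x, cond_v) for _ in range(20)])
etype = reals.mean(axis=0)                      # E-type map over realizations
print(reals.shape, round(float(etype[10]), 2), round(float(etype[40]), 2))
```

Averaging many realizations gives the E-type map, and the fraction of realizations above a resistivity cutoff gives the kind of probability map the abstract describes.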
  • Abolfazl Asadian, Ali Moradzadeh*, Alireza Arab-Amiri, Ali Nejati Kalateh, Davood Rajabi Pages 111-126
    Owing to their speed and high resolution relative to other airborne methods (a consequence of the helicopter's low flight altitude and high maneuverability), helicopter-borne frequency-domain electromagnetic surveys are among the most widely used methods for airborne resistivity imaging. Interpretation and modeling of airborne electromagnetic data is generally carried out in two broad ways: direct transform methods and iterative methods. In direct transforms, the electromagnetic data are transformed directly for a generic model, such as a half-space, to obtain an approximate model, whereas iterative methods perform a rigorous inversion of the data starting from a suitable initial model. Singular value decomposition is one of the iterative methods which, despite its considerable capabilities, has not yet been used in our country for modeling airborne electromagnetic data. For this reason, this study uses programs written in the MATLAB programming environment to examine the effect of different initial models on the accuracy of the modeling results of this method within an iterative Levenberg-Marquardt scheme. Modeling of synthetic and field data shows that, in the absence of prior information about the model under study, the singular value decomposition method may produce unsatisfactory results. This effect can stem from an initial model that is not close to the true model, leading to a poor fit between the observed data and the model response and hence non-convergence of the inversion. The results also show that using resistivity models obtained from the Siemon and Sengpiel centroid-depth methods as the initial model can yield better results in singular-value-decomposition inversions of helicopter-borne electromagnetic data than other direct transform methods.
    Keywords: Helicopter-borne electromagnetics, direct transform method, iterative methods, singular value decomposition, centroid depth method
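The iterative scheme the abstract describes, a Levenberg-Marquardt update computed through the singular value decomposition of the Jacobian, can be shown on a toy two-parameter exponential forward model. The forward function, damping value, and starting model here are all assumptions standing in for the actual HEM forward problem.

```python
import numpy as np

# Levenberg-Marquardt step via SVD of the Jacobian: the damped
# generalized inverse uses filter factors s / (s^2 + lambda).
def forward(m, x):
    return m[0] * np.exp(-m[1] * x)             # toy stand-in forward model

def jacobian(m, x):
    return np.column_stack([np.exp(-m[1] * x),
                            -m[0] * x * np.exp(-m[1] * x)])

x = np.linspace(0, 4, 30)
m_true = np.array([2.0, 0.7])
d_obs = forward(m_true, x)                      # noise-free synthetic data

def lm_invert(m0, lam=0.05, n_iter=200):
    m = np.array(m0, dtype=float)
    for _ in range(n_iter):
        r = d_obs - forward(m, x)               # data residual
        U, s, Vt = np.linalg.svd(jacobian(m, x), full_matrices=False)
        dm = Vt.T @ ((s / (s**2 + lam)) * (U.T @ r))   # damped SVD update
        m = m + dm
    return m

m_est = lm_invert([1.0, 1.0])
print(np.round(m_est, 3))                       # recovers m_true = [2, 0.7]
```

As the abstract notes, a starting model far from the truth can stall or derail such an iteration; the damping only tames, not removes, that dependence on the initial model.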
  • Kiara Abbaszadeh Aghdam, Alireza Mohebalhojeh, Farhang Ahmadi-Givi Pages 127-138
    Finding the relation between the state and evolution of the stratosphere and surface weather conditions not only increases our understanding of how the various layers of the atmosphere and their dynamics operate, but also carries implications for improving weather and climate prediction in any given region. The main goal of the present study is to identify these implications with a focus on southwest Asia. Daily NCEP/NCAR reanalysis data, including pressure, temperature, minimum temperature, relative humidity, precipitation rate, and geopotential height at various levels or at the surface, are used for 1958 to 2011. The study region spans 25°N to 45°N and 35°E to 65°E, covering Iran and the areas to its west as far as the eastern Mediterranean. It is shown that following a weak polar vortex event, the daily mean surface temperature over most of the region decreases relative to the strong-vortex state, while the number of cold events counted over the whole region also declines. Moreover, after a weak vortex event, all points of the region show lower surface pressure, and most show higher relative humidity, than after a strong vortex. The precipitation rate in the two opposite vortex states differs from point to point across the region, increasing in some places and decreasing in others, with the maximum difference between the two states reaching 12.5 mm per month. Overall, it can be concluded that although the influence of the strength of the stratospheric polar vortex on Iran's weather is smaller than in some parts of the globe, such as northern North America, these effects can still be important for long-range weather prediction in the region.
    Keywords: Stratospheric polar vortex, winter severity index, cold event, NAM index, climate variability
  • Zahra Shareipour, Abbasali Aliakbari-Bidokhti Pages 139-154
    This study examines the total-ozone perturbations produced by winter upper-tropospheric waves over Iran for the years 2005 to 2013. The presence of winter upper-tropospheric troughs and ridges over the region can raise and lower, respectively, the daily total ozone amount. These total-ozone distribution patterns follow the geopotential height maps at the 300 hPa level.
    The day-to-day increase of ozone due to a trough may reach 140 DU, and the day-to-day decrease due to a ridge may reach 75 DU. These ozone increases and decreases usually affect the whole of Iran, but their maxima occur mostly within 30°-35°N and 50°-60°E. The maxima of day-to-day ozone increase form either within the area of the daily ozone maximum or near strong gradients of the daily ozone contours. The maximum daily ozone increase and decrease are positioned near the axes of the troughs and ridges, respectively, and in regions of low subtropical jet stream speed. In areas of strong horizontal ozone gradient near the trough axis and in the meridional direction, the ozone increase per unit decrease of geopotential height ranges from 0.2 to 0.8 DU/gpm for the various patterns, with an average of 0.5 DU/gpm. In most of the patterns obtained, the maximum downward air velocity exceeds 0.2 Pa/s and, typically, the maximum meridional wind speed exceeds 30 m/s, indicating considerable wave activity. This wave activity can bring ozone-rich upper-tropospheric air, originating from the breaking region of planetary Rossby waves in the high-latitude stratosphere, to lower latitudes in this region.
    Keywords: Total ozone, synoptic waves, trough and ridge, horizontal and vertical advection, Iran
  • Ashraf Asadi, Seyed Abolfazl Masoudian Pages 155-168
    The aim of this study is to regionalize Iran on the basis of extreme heat. To this end, daily temperature data from 663 climatological and synoptic stations of the country over the period 1340/1/1 to 1383/10/11 (Solar Hijri) were used. Using kriging, the data were interpolated onto 15 km × 15 km cells, yielding an array of dimensions 15992 × 7187. Extreme warm days were identified with the normalized temperature deviation (NTD) index. The data were then sorted by the value of this index and by the spatial extent of warmth (NTD > 0), and the first 264 days satisfying the condition NTD > 2 were selected as the sample, forming an array of dimensions 264 × 7187. Cluster analysis of this array by Ward's method showed that Iran can be divided into five regions with respect to extreme heat. The western region experienced the most severe heat extremes over the study period; it is affected more, and earlier, than the other regions by the systems that produce extreme heat. The southern and southeastern region experienced the fewest heat extremes. Most extreme warm events of the western region occurred in the months of Dey and Bahman (mid-winter), and those of the southern and southeastern region in Ordibehesht and Mordad (mid-spring and mid-summer).
    Keywords: Extreme heat, normalized temperature deviation, cluster analysis, regionalization
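The normalized temperature deviation index used for selecting extreme warm days is essentially a per-cell standardization of the daily temperature field. A sketch on random placeholder data follows (the real input is the 663-station kriged grid); the ranking rule combining NTD > 2 with the spatial extent of NTD > 0 is a plausible reading of the abstract, not the authors' exact procedure.

```python
import numpy as np

# Normalized temperature deviation (NTD): each day's temperature at each
# cell, expressed in standard deviations from that cell's long-term mean.
rng = np.random.default_rng(3)
n_days, n_cells = 1000, 50                      # placeholder array sizes
T = rng.normal(25.0, 5.0, (n_days, n_cells))    # stand-in daily temperatures

ntd = (T - T.mean(axis=0)) / T.std(axis=0)      # per-cell standardization
warm_extent = (ntd > 0).mean(axis=1)            # fraction of cells with NTD > 0
candidates = np.where(ntd.max(axis=1) > 2)[0]   # days reaching NTD > 2 somewhere
# rank candidate days by how spatially widespread the warmth is
ranked = candidates[np.argsort(-warm_extent[candidates])]
print(ntd.shape, len(candidates))
```

The selected day-by-cell submatrix is then what a Ward-linkage clustering would partition into regions.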
  • Omid Ghaffarpasand, Fahimeh Hosseini-Balam, Esmaeil Hassanzadeh Pages 169-180
    This study takes a new look at the behavior of suspended aerosols in a turbulent fluid. To this end, the turbulent motion of a fluid in a rectangular duct with no-slip boundary conditions on its walls was simulated by direct numerical simulation. The trajectories of two sets of aerosols, with Stokes numbers of 5 and 25, were then tracked with a Lagrangian approach. Given the size and mass of the aerosols studied, the only effective force on their motion is the fluid drag force, and the Brownian force is neglected.
    Examination of the intensities of the various aerosol fluxes showed that the turbophoretic flux and the turbulent diffusion flux are the main processes of aerosol dispersion in the turbulent motion of a fluid, and that the shape of the concentration profile results from the interplay of these two fluxes. It was also observed that the turbophoretic flux of the smaller aerosols is stronger than that of the larger aerosols, while the reverse inequality holds for the turbulent diffusion flux. Comparison of the velocity profiles of the aerosols and the carrier fluid shows that aerosols in the near-wall region move faster than their carrier fluid, whereas in the core region of the duct the aerosol velocity is lower than that of the fluid. This velocity difference between the aerosols and the carrier fluid arises from the lateral migration of the aerosols, that is, the turbophoretic flux.
    Keywords: Aerosols, particle dispersion, turbulence, direct numerical simulation, turbophoretic flux, free-flight flux, diffusion
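With drag as the only force, each tracked particle obeys dv/dt = (u - v)/tau_p, and the Stokes number sets how sluggishly it follows the flow. A 1-D toy integration (the sinusoidal carrier velocity and time scales are assumptions; the paper uses full DNS fields) illustrates why the two particle classes respond differently:

```python
import numpy as np

# Lagrangian tracking under Stokes drag alone (Brownian force neglected,
# as in the abstract): dv/dt = (u_fluid - v) / tau_p, with Stokes number
# St = tau_p / tau_flow controlling the particle response.
def track(St, tau_flow=1.0, dt=1e-3, n=20000):
    tau_p = St * tau_flow
    t, v, x = 0.0, 0.0, 0.0
    for _ in range(n):
        u = np.sin(2 * np.pi * t / tau_flow)   # toy 1-D carrier-fluid velocity
        v += dt * (u - v) / tau_p              # drag is the only force
        x += dt * v                            # advance the particle position
        t += dt
    return v

# heavier particles (larger St) respond less to fluid fluctuations
amp_small, amp_large = abs(track(5)), abs(track(25))
print(amp_small > amp_large)
```

This damped response of high-Stokes-number particles is what decouples their velocity from the local fluid velocity near the wall, the effect the abstract attributes to turbophoresis.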
  • K. Emami, E. Bayramnejad, M. R. Gheitanchi Pages 1-14
An important aim of seismology is to determine the focuses of earthquakes. The more precisely this process is carried out, the better the results achieved in various studies, for example, investigation of the crustal structure and seismogenic zones or the determination of the plane of the faults causing earthquakes. The classic methods of locating are based on Geiger's equations (1910). The relation between the arrival time and the coordinates of the earthquake is linearized in this method using the first term of the Taylor expansion and a velocity model available for the study area. The optimum responses in the linearized methods are obtained by minimizing the differences between the observed and calculated data using the iterative least squares (L.S.) method. A considerable error is usually observed in the determination of the earthquake focus because of the absence of proper azimuthal coverage, as well as the mere use of travel-time data. With the purpose of reducing these errors, researchers have used azimuth and ray path parameters for optimization of their performance. The study area is located within 44°-50°E and 36°-40°N in northwestern Iran, forming part of the central Alborz-Azerbaijan tectonic region. In this research, the earthquakes of magnitude equal to or greater than 1.4 on the body-wave scale that occurred in the area within the period from 2006 to 2013, comprising 11,200 events, were first located using only P-wave travel times, and then the process was repeated by adding ray path and P-wave azimuthal parameter data. To this effect, the aforesaid earthquakes were classified into two general classes, consisting of earthquakes of magnitudes 1.4-3.4 and earthquakes of magnitudes exceeding 3.4. Concerning the first class of earthquakes, processing was made only for the data obtained from the stations in the local seismic network of the Institute of Geophysics, University of Tehran.
The velocity model used in this part of the work was the local velocity model (Bayramnejad, 2008) that is specific to the study area. Because of the large number of earthquakes, each of the aforesaid classes was subdivided into two subclasses consisting of the earthquakes with azimuthal gaps less than 180 degrees and those with azimuthal gaps greater than 180 degrees. The results obtained by the earthquake locating are represented by histograms, which indicate that utilization of the ray parameter and the azimuthal parameter considerably reduces the horizontal error in the relocation of earthquakes, especially for the earthquakes with azimuthal gaps exceeding 180 degrees, compared to utilization of mere travel-time data. For the second class of earthquakes, we have used all the data from the stations located within the whole seismic network of Iran. It is evident that locating of earthquakes is carried out with greater precision where more data are available. The velocity model used in this part was similar to the one used for the whole region. Optimization of the relocation of earthquakes is indicated by histograms for the magnitude 3.4-4.6 earthquakes. Results indicated a reduction in the horizontal error when the azimuthal and ray path parameters were used. For the magnitude 4.6-6.2 earthquakes, it is shown, by drawing a confidence ellipse, that the use of azimuthal and ray parameters may optimize the locating of earthquakes and lower the dimensions of the confidence ellipse.
    Keywords: Seismicity of Azerbaijan, Azimuth, Confidence ellipse, Ray parameter, Travel time
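The linearized Geiger scheme described above can be sketched in a few lines. The station layout, the homogeneous 5 km/s medium, and the starting guess below are all toy assumptions; the paper itself uses regional velocity models and real P arrivals.

```python
import numpy as np

# Toy Geiger location: homogeneous 5 km/s medium, five hypothetical
# stations, known origin time of zero. Everything here is illustrative.
stations = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0],
                     [30.0, 30.0], [15.0, -10.0]])     # station coords, km
v = 5.0                                                # km/s
true_src = np.array([10.0, 20.0])
t_obs = np.linalg.norm(stations - true_src, axis=1) / v

def geiger_locate(stations, t_obs, v, m0=(5.0, 5.0, 0.0), n_iter=15):
    """Linearized least-squares location of (x, y, origin time)."""
    m = np.array(m0, dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(stations - m[:2], axis=1)
        r = t_obs - (m[2] + d / v)                     # travel-time residuals
        # Jacobian = first term of the Taylor expansion of travel time
        G = np.column_stack([(m[0] - stations[:, 0]) / (v * d),
                             (m[1] - stations[:, 1]) / (v * d),
                             np.ones(d.size)])
        dm, *_ = np.linalg.lstsq(G, r, rcond=None)     # least-squares step
        m += dm
    return m

m = geiger_locate(stations, t_obs, v)
print(np.round(m, 3))                                  # ≈ [10, 20, 0]
```

Each iteration solves the first-order system G dm = r for a correction to (x, y, origin time); with reasonable azimuthal coverage the loop converges in a handful of steps, which is exactly where a large azimuthal gap degrades the solution.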
  • H. Mousavi, N. Mirzaei, E. Shabani, M. Eskandari Ghadi Pages 15-38

Growing environmental and social concerns, both on the part of decision makers and public opinion, have brought a new perspective to the perception of hazard assessment as a valid alternative in the long term, and an effective complement in the short and medium terms, to traditional design procedures for a resistant and safe environment. Results of the gradual development of research on probabilistic seismic hazard assessment (PSHA) in the past 40 years make a framework that can be used for estimation of the probability of occurrence of earthquakes at certain return periods at each site. The primary advantage of PSHA over alternative representations of the earthquake threat is that PSHA integrates over all possible earthquake occurrences and ground motions to calculate a combined probability of exceedance that incorporates the relative frequencies of occurrence of different earthquakes and ground-motion characteristics. Features of PSHA allow the ground-motion hazard to be expressed at multiple sites consistently in terms of earthquake sizes, frequencies of occurrence, attenuation, and associated ground motion. Potential seismic sources, seismicity models, ground motion prediction equations (GMPEs) and site effects are the most important factors in seismic hazard studies. In this research, a modified probabilistic seismic hazard assessment, developed by Chinese researchers, is used to estimate the level of the potential seismic ground motion in Iran. A unified catalog of de-clustered earthquakes containing both historical and recent seismicity until late 2012 in the area encompassed by 22°-42°N and 42°-66°E is used. An area source model which contains 238 potential seismic sources within 5 major seismotectonic provinces in the study region has been delineated. Considering magnitude uncertainty and incompleteness of the earthquake data, the seismicity parameters of the seismotectonic provinces are determined.
A spatial distribution function is used to determine the occurrence rates of potential seismic sources for different magnitude intervals. Also, the background seismicity has been determined for each province. Seismic hazard assessment of Iran for a grid of over 40,000 points with 10 km intervals is carried out using the OpenQuake software with three different GMPEs and two models of seismicity for potential seismic sources in a logic tree. The peak ground horizontal acceleration (PGA) and spectral accelerations (SA) for a 5% damping ratio at 0.2 and 2 seconds corresponding to 10% and 63% probabilities of exceedance within 50 years (475- and 50-year mean return periods, respectively) are calculated. The resultant seismic hazard maps display a probabilistic estimate of PGA and of 0.2 and 2 s SA for the mean return periods of 50 and 475 years. The resultant peak ground horizontal acceleration for the 475-year return period varies from 0.63g in the northeast of Lorestan to 0.1g in central Iran. The resultant PGAs for the 475-year return period in provincial capitals indicate the maximum value (0.35g) in Bandar Abbas and Tabriz, and the minimum one (0.11g) in Esfahan and Yazd. Comparison of the results of this study with the last map of seismic hazard in the Iranian code of practice for seismic resistant design of buildings (the seismic macrozonation hazard map of Iran, Standard 2800) shows significant differences. Seismic hazard levels estimated in this study in southern Iran (Sistan-Baluchestan, Hormozgan and Fars provinces) show significantly higher values.

    Keywords: Seismic hazard assessment, Strong ground motions, Seismicity, Potential seismic sources, Spectral acceleration
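The core PSHA computation the abstract relies on, integrating source recurrence rates against a ground-motion model to obtain an annual exceedance rate and converting it to a 50-year probability under a Poisson assumption, can be sketched as follows. The single areal source, the Gutenberg-Richter parameters, and the attenuation relation below are invented for illustration; they are not the paper's 238-source model or its GMPEs.

```python
import numpy as np
from math import erf, sqrt

# Toy hazard integration for one areal source at a fixed distance.
mags = np.arange(5.0, 8.0, 0.1)                         # magnitude bins
a_gr, b_gr = 4.0, 1.0                                   # hypothetical G-R law
rate = 10**(a_gr - b_gr*mags) - 10**(a_gr - b_gr*(mags + 0.1))  # rate per bin
R_KM = 30.0                                             # source-site distance

def median_pga(m):
    # hypothetical attenuation: ln PGA(g) = -3.5 + 0.8 M - 1.1 ln R
    return np.exp(-3.5 + 0.8*m - 1.1*np.log(R_KM))

def annual_exceedance(a, sigma_ln=0.6):
    """lambda(PGA > a): sum over magnitude bins of rate * P(exceed | M)."""
    lam = 0.0
    for m_i, nu_i in zip(mags, rate):
        z = (np.log(a) - np.log(median_pga(m_i))) / sigma_ln
        lam += nu_i * 0.5 * (1.0 - erf(z / sqrt(2.0)))  # lognormal GM scatter
    return lam

lam = annual_exceedance(0.2)            # annual rate of exceeding 0.2 g
p50 = 1.0 - np.exp(-lam * 50.0)         # Poisson probability in 50 years
print(round(float(lam), 4), round(float(p50), 3))
```

A full PSHA repeats this integral over every source, distance distribution, and logic-tree branch, then reads the hazard map value off the resulting curve at the target probability (e.g. 10% in 50 years for the 475-year map).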
  • Y. Mohammadi, H. Siahkoohi Pages 39-52

Spectral decomposition or time-frequency representation (TFR) is a powerful tool for analyzing the time-varying nature of seismic data. Mapping a one-dimensional seismic time trace to a two-dimensional function of time and frequency reveals characteristics of seismic signals that are not available in time-only or frequency-only representations. This approach has been widely used in seismic exploration applications such as denoising, direct hydrocarbon detection, seismic sequence analysis, reservoir characterization, and resolution enhancement. Obtaining a method with higher resolution in TFR and less computational cost is of fundamental significance in this field. Among the different methods for spectral decomposition, the short-time Fourier transform (STFT), or its squared modulus (the spectrogram), is the fastest one. However, it has limited resolution because of the uncertainty principle. The Wigner-Ville distribution (WVD) is another method that has superior TFR resolution, but its practical application is limited by undesirable cross-terms arising from its bilinear nature. The spectrogram obtained by the STFT method is related to the WVD via a 2-D deconvolution. The deconvolutive short-time Fourier transform spectrogram (DSTFTS) is another method that has been introduced recently to increase the TFR resolution of the spectrogram by applying the Lucy-Richardson algorithm for the 2-D deconvolution operation. However, contrary to theoretical expectation, the resolution of the DSTFTS is not very close to the resolution of the WVD. In this paper, we first explain why the resolution of the deconvolutive spectrogram introduced by Lu and Zhang (2009) is not as close to that of the WVD as expected, and then we introduce its modified version, which effectively improves the TFR resolution. In the 2-D deconvolution process, the sampling interval of the input data must agree in both time and frequency.
We have shown that the sampling interval of the WVD of the window function in the frequency direction is not equal to that of the spectrogram of the signal. A simple technique is proposed here to overcome this problem. The proposed modified deconvolutive spectrogram provides better results compared to those of Lu and Zhang (2009). The TFR resolution is very close to that of the WVD because of the correct implementation of the deconvolution process. As we performed deconvolution with the Lucy-Richardson algorithm, which is not an ideal and perfect process, our TFR is not identical to that of the WVD. We have evaluated the performance of the modified deconvolutive spectrogram in the spectral decomposition of a chirp signal, a synthetic signal consisting of four Morlet wavelets, and a real seismic trace. In comparison with the popular reassigned spectrogram method (Auger and Flandrin, 1995), the modified deconvolutive spectrogram effectively improves the TFR resolution and has no artifacts or undesirable effects on the TFR of any type of input signal. The reassigned spectrogram, however, eliminates some components and disturbs the shape of the TFR of complex input signals. These issues can make a method completely unusable. We have shown that both the reassigned and the modified deconvolutive spectrograms have the same nature. The aim of the reassigned spectrogram is to improve the degree of localization of the signal components by reallocating them over the time-frequency plane and applying a weighted integration. The aim of the modified deconvolutive spectrogram, on the other hand, is to directly remove and compensate the damping effect of the window function by applying the 2-D deconvolution operation. From the point of view of the mathematical tools employed, the 2-D deconvolution algorithms are more advanced and more reliable than the weighted integration (especially in the case of signals with a complicated TFR, such as seismic data).
Finally, we applied the method to detect a low-frequency shadow associated with a possible thin gas reservoir on a seismic section. The low-frequency shadow has been used as a seismic indicator of hydrocarbon accumulation. Several explanations have been proposed for this shadow, such as abnormally high attenuation in a gas-filled reservoir and some mechanisms introduced in data processing. By spectral decomposition of all traces, a cube of data was obtained from the seismic section. We used single-frequency seismic sections extracted from the cube for interpretation. According to the results, the high-amplitude energy present on the 20 Hz single-frequency seismic section disappears on the 50 Hz single-frequency section. This high-amplitude energy, which exists beneath the reservoir, is a hydrocarbon indicator. The superior resolution of the modified deconvolutive spectrogram resulted in remarkably better localization of the reservoir. Therefore, the modified deconvolutive spectrogram is a fast and effective method for the spectral decomposition of seismic data, especially when used for seismic attribute extraction.

    Keywords: Spectral decomposition, Modified deconvolutive spectrogram, Lucy-Richardson algorithm, Reassigned spectrogram
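The 2-D deconvolution at the heart of the deconvolutive spectrogram is the Lucy-Richardson iteration. The sketch below runs a minimal Richardson-Lucy loop on a toy time-frequency image; the Gaussian kernel standing in for the window's smoothing effect, and the 64×64 grid, are assumptions of this illustration, not the paper's actual window WVD.

```python
import numpy as np
from scipy.signal import fftconvolve

# Minimal Richardson-Lucy 2-D deconvolution, the engine used by the
# deconvolutive spectrogram to undo the window's smearing of the TFR.
def richardson_lucy(blurred, psf, n_iter=30):
    est = np.full_like(blurred, blurred.mean())   # flat positive start
    psf_m = psf[::-1, ::-1]                       # mirrored PSF
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same") + 1e-12
        est = est * fftconvolve(blurred / conv, psf_m, mode="same")
    return est

# Sharp "ideal TFR": two point components in a time-frequency plane
ideal = np.zeros((64, 64))
ideal[20, 15] = 1.0
ideal[45, 50] = 1.0
x = np.arange(-6, 7)
g = np.exp(-(x[:, None]**2 + x[None, :]**2) / 4.0)
g /= g.sum()                                      # stand-in smoothing kernel
spec = fftconvolve(ideal, g, mode="same")         # smeared "spectrogram"
deconv = richardson_lucy(spec, g)

# deconvolution re-concentrates energy toward the two components
print(spec.max() < deconv.max())
```

The sampling-interval issue the abstract raises enters exactly here: the kernel and the blurred image must be sampled on the same time-frequency grid, or the division inside the loop compares misaligned quantities.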
  • M. Takalu, B. Oskooi, B. Habibian Pages 53-65
    The magnetotelluric method is a frequency domain electromagnetic (EM) tool which utilizes natural variations in the Earth’s magnetic and electrical field as a source. Variations in the Earth’s natural fields supply information، providing the ability to study the electric substructure of the Earth in great depths. The large frequency range of the EM signals eradicates the problematic presence of conductive overburden or sampling frequencies، thereby allowing a deep penetration. Natural magnetotelluric (MT) signals arise from a variety of natural currents، including thunderstorms and solar winds. Total frequency range of the MT data can be from 40 kHz to less than 1-4 Hz. Data is acquired in a passive mode using a combination of electric sensors and induction coil magnetometers، and can detect changes of resistivity in great depths (Simpson and Bahr، 2005). Cagniard (1953) and Tikhonov (1950) developed a theory underlying the magnetotelluric method independent of each other in the 1950’s. They both observed that the electric and magnetic fields associated with telluric currents that flow in the Earth as a result of variations in the Earth’s natural electromagnetic field، should relate to each other in a certain way depending on the electrical characteristics of the Earth. The ratio of the horizontal electric field to the orthogonal horizontal magnetic field gives the electromagnetic impedance. The major advantage of the MT method is that it simultaneously measures the electric and magnetic fields in two perpendicular directions. The electric sensors are used to determine the electric field، which is derived from measurements of the voltage difference between electrode pairs of Ex and Ey. Induction coils are used to measure the magnetic field components in 3 orthogonal directions. The ratio of the recorded electric and magnetic fields gives an estimate of the apparent resistivity of the Earth at any given depth. 
The elements of the 2x2 impedance (Z) tensor are derived from complex ratios of the orthogonal components of the horizontal electric and magnetic fields in the frequency domain. As all the measurement stations are located over a line in our case، the data only permit the application of a two-dimensional interpretation process that requires the identification of the TE and the TM modes corresponding to electric and magnetic fields parallel to the geologic strike، respectively. As the geological strikes are not known in advance، the components of electromagnetic fields are measured in geomagnetic (or arbitrary) directions and the impedance tensor is rotated to principal axes. The strike direction often changes with depth in the field، accordingly، the rotational angle varies at each frequency. For two-dimensional structures، there are many conditions and consequently many possible schemes to determine the rotational angle (Simpson and Bahr، 2005). Here، we minimized the diagonal elements of the impedance tensor. There are two possible strike directions for a certain frequency and the interpreter identifies the TE and the TM modes using geological and geophysical information. Volcanic activities of Sabalan started in Eocene and resumed in Pliocence by the eruption of trachy-andesitic lava flow through the main caldera. Four major lithostratigraphic units were defined in the studied area in the following order: Dizu formation (Quaternary alluvium and terrace)، Kasra formation (post-caldera، latepliostocene)، Taos formation (syn-caldera، early Plistocene) and Volhazir formation (pre caldera، Pliocence). The Sabalan fault complex، structurally consists of linear faulting trending NE-SW، N-S and WNW-ESE and arcuate structures forming inner and outer fractures to the caldera (Sahabi، 1378). The MT data were obtained using three sets of MTU-5A equipment، two roving sites within Mt. Sabalan and one remote reference site. 
The raw time series data were processed using Phoenix Geophysics Ltd.'s SSMT 2000 software, and the resulting EDI files were edited, analyzed and modeled using Geosystem's WinGlink software. Overall, the obtained MT data were of good quality down to 10⁻² Hz.
    Keywords: Fault, Inversion, Magnetotelluric, Resistivity Section, Sabalan
  • A. Safari, M. A. Sharifi, S. Farzaneh Pages 67-82

    Global Positioning System (GPS) receivers on gravimetric satellites continuously measure valuable information about the 3-D satellite position. However, the velocity of the satellites, which has important applications in satellite geodesy such as gravity field recovery, cannot be measured directly. The position data are used in the energy integral method, or in other methods based on the equation of motion in the Earth's gravity field, to determine the velocity or acceleration of the satellite. In this study, the velocity vector is computed using numerical differentiation and Kalman filtering for the Gravity Recovery And Climate Experiment (GRACE) twin satellites. The numerical results show that Kalman filtering yields more accurate results than numerical differentiation when they are compared with the intersatellite range-rate measurements. In the wake of the new gravity satellite era opened by the launch of the CHAllenging Minisatellite Payload (CHAMP), GRACE and GOCE, processing methods for enormously large orbit data sets have become a focus of geodetic interest. The input data are different from earlier times, as they contain some millions of continuous position data per satellite per year. This huge number of data arises from continuous observation of the GPS system by these satellites, which is possible because of the much higher altitude of the GPS satellites (20,000 km) compared to that of the gravity satellites (between 250 and 500 km). The latter are often referred to as Low Earth Orbiters (LEOs). The GPS-LEO constellation described above is, in technical terms, called High-Low Satellite-to-Satellite Tracking (High-Low SST). Thus, some million position data of the LEOs are the basis of global gravity field determination techniques. The concept behind these solutions is that satellites are in free fall in the gravity field of the Earth. After modeling and removing all further force sources (e.g.
gravitation of the Sun, the Moon and the other planets, direct and indirect tides, and surface forces (atmospheric drag, solar radiation pressure)), the remaining orbit is a trajectory in space governed purely by the gravity field of the Earth. Therefore, the task is only to determine the force behind the motion. Conservation laws can be applied to satellites successfully. Newton's equation of motion expresses the balance of forces in a closed system; applying it to a satellite requires information about the acceleration along the orbit. In this article, the velocity vector is derived as part of the unknown state vector in a Kalman filter algorithm. The Kalman filter is a well-known mathematical tool which answers the most frequent engineering question: how can we get the best estimate of the system state from noisy measurements? It is a data processing algorithm that estimates the state of a system from noisy measurements using least squares, and it gives the optimal state estimate together with a measure of how precise that estimate is compared to the true state. The Kalman filter provides the optimal solution for a linear process with uncorrelated, white, zero-mean Gaussian process and measurement disturbances. It is a "one-step-back recursive filter", meaning that there is no need to store past measurements in order to compute the present state. We assume that the discrete random kinematic process to be estimated can be modeled with the two main Kalman filter equations, the process equation x_k = A x_(k-1) + w_(k-1) and the measurement equation z_k = H x_k + v_k, where x is the system state vector, A is the state transition matrix, H is the measurement matrix, and w and v are the process and measurement noises. From this discrete-time linear formulation of the Kalman filter, the discrete-time nonlinear formulation of the Extended Kalman Filter is derived.
In the state-space model of the Extended Kalman Filter (EKF), the linear equations above are replaced by nonlinear functions of the state. In this study, the velocity vector is computed using numerical differentiation and Kalman filtering for the GRACE twin satellites. The numerical analysis shows that the Extended Kalman filtering yields the optimal solution. The comparison is performed on the basis of the intersatellite range-rate measurements.
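A minimal, hypothetical sketch of the idea of recovering velocity as part of the Kalman filter state, here for a one-dimensional constant-velocity model with position-only measurements (the GRACE processing itself uses a far richer dynamic model and an extended filter):

```python
import numpy as np

def kalman_velocity(positions, dt, q=1e-4, r=1.0):
    """Estimate velocity from noisy 1-D positions with a constant-velocity
    Kalman filter. State x = [position, velocity]; only position is measured."""
    A = np.array([[1.0, dt], [0.0, 1.0]])  # state transition matrix
    H = np.array([[1.0, 0.0]])             # measurement matrix
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance
    x = np.array([positions[0], 0.0])
    P = np.eye(2)
    vel = []
    for z in positions:
        # predict step: propagate state and covariance
        x = A @ x
        P = A @ P @ A.T + Q
        # update step: blend the prediction with the measurement
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        vel.append(x[1])
    return np.array(vel)

# Synthetic check: positions along x = 3 t with noise; the filtered
# velocity should settle near 3 even though velocity is never measured.
rng = np.random.default_rng(0)
t = np.arange(0.0, 50.0, 0.1)
z = 3.0 * t + rng.normal(0.0, 0.5, t.size)
v = kalman_velocity(z, dt=0.1, q=1e-4, r=0.25)
```

The point of the sketch is the one made in the abstract: the filter recovers an unobserved state component (velocity) from noisy position data, without explicit numerical differentiation.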

    Keywords: Low earth orbit, Gravity field, K-band ranging, Extended Kalman filter, Numerical differentiation
  • A. Safari, S. Ramouz, A. A. Jomegi Pages 83-98

    In this paper, we use the Least Squares Collocation (LSC) method for geoid determination and Earth gravity field modeling in the Coastal Pars region in southern Iran. The LSC is one of the Earth gravity field modeling methods which, unlike solutions of the Geodetic Boundary Value Problem (GBVP) such as Stokes's, does not need regularization. Also, unlike purely statistical methods, the LSC has the ability to account for systematic effects in the data (trend), to predict quantities between the data points (interpolation), and to estimate the quantity at the data points (filtering). The main advantage of LSC methods is their capability of incorporating heterogeneous data, so that both gravimetric and geometric data can be used as inputs of the target function. In the first section, we briefly introduce the LSC. In Section 2, we describe the fundamentals of the LSC in a geometric space, and the way it connects the statistical concept of the covariance function with the least-squares error constraint in order to reproduce the kernel function in the Hilbert space, which leads to the Least Squares Collocation. Furthermore, the Wiener-Kolmogorov formula (Equation 7) is introduced as a solution for the LSC. Further in Section 2, we explain our approach of using the LSC with random errors to adapt its theory to noisy data (Equation 28). In Section 3, the concept of the "true covariance function" (Equation 12), and the procedure of estimating its "empirical covariance function" (Equation 34) based on two essential assumptions, stationarity and ergodicity, are described. We divide the covariance function into global and local subclasses and explain their structures individually. Also, we describe covariance function modeling in the LSC by fitting an analytical covariance model (derived from a true covariance function) to an empirical covariance function (obtained from local gravity data) (Equation 63).
We demonstrate that an analytical covariance model can be generated by determining the covariance model parameters using least-squares inversion (Equation 65). In Section 4, we use gravity anomaly data to determine the geoid by applying the LSC. Tscherning's algorithm (Figure 5) is used to implement the LSC theory. As the function used in the Hilbert space in collocation theory must be harmonic, we assume in our observational space (a sphere that represents the Earth) that there is no mass above the geoid surface. In order to guarantee this, the remove-compute-restore method is used. Based on the field operations conducted by the Department of Geophysics (Exploration Directorate of the National Iranian Oil Company, 2004) in our case study, the topographic density was estimated at about 2.3 g/cm³. Finally, in Section 5, we evaluate the results with 15 GPS/Leveling control points in the region, and a root mean square (RMS) value of 0.052544 m is achieved. In another experiment, we use the LSC for determination of the geoid using the same data but with a topographic density of 2.67 g/cm³; the RMS achieved in this experiment is 0.06695 m. Comparing these two experiments indicates that, in the Coastal Pars region, the topographic density value (2.3 g/cm³) determined by the Department of Geophysics (Exploration Directorate of the National Iranian Oil Company, 2004) provides a better estimation than the global value (2.67 g/cm³). The section is wrapped up by a further comparison between the geoid results of the LSC and the geoids derived from the EGM96 and EGM2008 geopotential models in the region. Our analysis demonstrates that the geoids obtained from the EGM models are shifted by about 20 centimeters compared to those obtained by the LSC.
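The Wiener-Kolmogorov prediction step at the core of the LSC can be sketched as follows; this toy example uses a Gaussian covariance model and synthetic data (the paper's analytical covariance model and Tscherning's algorithm are more elaborate, and all names here are illustrative):

```python
import numpy as np

def lsc_predict(x_obs, y_obs, x_new, sigma2=1.0, L=1.0, noise=1e-2):
    """Least-squares collocation prediction with a Gaussian covariance
    model C(d) = sigma2 * exp(-(d/L)^2) and uncorrelated observation noise."""
    d_oo = x_obs[:, None] - x_obs[None, :]
    d_no = x_new[:, None] - x_obs[None, :]
    C_oo = sigma2 * np.exp(-(d_oo / L) ** 2)   # covariance among observations
    C_no = sigma2 * np.exp(-(d_no / L) ** 2)   # covariance new-vs-observations
    # Wiener-Kolmogorov solution: s_hat = C_no (C_oo + D)^-1 y
    return C_no @ np.linalg.solve(C_oo + noise * np.eye(x_obs.size), y_obs)

# Synthetic check: predicting at the observation points with small noise
# nearly reproduces the observations (the filtering property of the LSC).
x_obs = np.linspace(0.0, 5.0, 20)
y_obs = np.sin(x_obs)
y_hat = lsc_predict(x_obs, y_obs, x_obs, sigma2=1.0, L=1.0, noise=1e-4)
```

Evaluating the same formula at points between the observations gives the interpolation property mentioned in the abstract; the noise term controls how strongly the data are filtered.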

  • S. Soleimani, O. Asghari, M. K. Hafizi Pages 99-110
    Saltwater intrusion into freshwater in coastal areas has been a serious concern for many countries. Providing fresh water in some regions is crucial, and areas that are prone to encountering saltwater zones should be checked meticulously. The preferred tool for such an investigation is a precise 3-D model of the distribution of fresh and salt water. In order to reach such a model, reliable measurements and comprehensive resistivity interpretations are needed. The purpose of this study is to use geostatistical simulation to build a 3-D aquifer model from the results of resistivity studies, that is, to delineate the boundary between saltwater and freshwater in the aquifer. Geostatistical simulation provides a robust tool for presenting the results obtained from the interpretation of resistivity data. By assessing the risk and uncertainties of the measurements at hand, geostatistical simulation supports a precise economic study and therefore a more detailed financing and planning scheme. Most prediction/estimation methods involve, in some way, an averaging step that smooths the results and reduces the amplitude of their fluctuations. Geostatistical simulation methods, however, are able to reproduce minor and local differences more precisely than other methods; in other words, the simulation does not reduce the variance of the data, so the minimum and maximum values are reproduced. The data required for this study were acquired in the Borazjan plain in Bushehr province, southern Iran. 82 Vertical Electrical Sounding (VES) measurements with a Schlumberger array were conducted along 6 profiles in the study area. The distance between two subsequent measurements is 200 m, and the lateral distance between two neighboring profiles is 1000 m. The distance between the current electrodes (AB) was increased from 1.5 m to 1000 m, with each logarithmic decade containing 6 different measurements.
The survey direction in each profile was oriented north-west to south-east. After the data gathering, apparent resistivity sections were produced with resistivity-processing software. In the next step, the data were inverted using the software and the standard curves, and the best multi-layered model of the Earth was obtained. After the interpretation of the initial data, the real resistivity values of the aquifer were introduced to the sequential Gaussian simulation algorithm as input data. Given the concept of 1-D resistivity inversion, the maps and sections that matter are those that manifest a coherent amplitude of resistivity variations; in this study, therefore, only simulations capable of reproducing a coherent amplitude of resistivity values were considered and used. For this reason we chose the sequential Gaussian simulation method, which includes such a characteristic by nature, for simulating the aquifer, and the data were first normalized into a Gaussian distribution. In order to investigate the anisotropy in the region, directional variography was done; then the best variogram model was chosen using a cross-validation test. Anisotropy shows up as range/sill variations of the variogram in different orientations; thus the variogram is a useful tool for identifying the heterogeneities in the investigated area. Using 50 m × 50 m × 10 m blocks and the sequential Gaussian simulation algorithm, the simulation was run 100 times and 100 realizations were obtained. The simulation results (realizations) are only acceptable when they reproduce the reference histogram and variogram, which in this case are the histogram and variogram of the raw data of the aquifer. After the simulation results were validated, the E-type map was derived. This map shows the average simulated value for each block, obtained by averaging the values of the 100 realizations, and constitutes a 3-D model of the real resistivity distribution within the aquifer.
An increase of the resistivity values can be observed in this map. Among the most important results obtained from the realizations are the probability maps. These maps show the probability of exceeding a defined value, and are derived by counting the number of times the resistivity value of a block has passed a certain resistivity value over all realizations. In fact, the probability map can be regarded as a good guide for choosing drilling positions for freshwater exploitation: using the probability maps, freshwater positions can be identified with a probability of 1 or very close to 1. In order to compare the data of a drilled well, located at the farthest distance between two profiles, with the estimated model, a network was designed by which it was possible to estimate the aquifer resistivity values at the position of the well. The agreement between the resistivity values obtained with the geostatistical simulation within the designed network and the aquifer resistivity values at the position of the well confirms that the model accurately reflects the reality of the aquifer.
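The E-type and probability maps described above reduce to simple per-block statistics over the stack of realizations; a minimal sketch with toy numbers (not the study's actual realizations):

```python
import numpy as np

def etype_and_probability(realizations, threshold):
    """E-type map (per-block mean over realizations) and probability map
    (fraction of realizations whose resistivity exceeds the threshold).
    `realizations` has shape (n_realizations, n_blocks)."""
    etype = realizations.mean(axis=0)
    prob = (realizations > threshold).mean(axis=0)
    return etype, prob

# Toy check with 100 synthetic realizations of a 4-block model whose
# block means are 10, 20, 50 and 80 ohm-m (std 1 ohm-m each).
rng = np.random.default_rng(1)
reals = rng.normal(loc=[10.0, 20.0, 50.0, 80.0], scale=1.0, size=(100, 4))
etype, prob = etype_and_probability(reals, threshold=40.0)
```

Blocks well below the threshold get probability 0 and blocks well above it get probability 1, which is exactly the "probability of 1 or very close to 1" criterion used to pick drilling positions.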
    Keywords: Geostatistics, Sequential Gaussian simulation, Freshwater, Saltwater, Resistivity, Variogram
  • A. Asadian, A. Moradzadeh*, A. R. Arabamiri, A. Nejati, D. Rajabi Pages 111-126
    Helicopter-borne electromagnetics (HEM) is a fast, high-resolution airborne electromagnetic (AEM) method that is frequently used for imaging subsurface resistivity structures. It is a versatile and cost-effective method that has frequently been used in mineral and groundwater exploration and in various environmental problems. Modern frequency-domain HEM systems utilize small electromagnetic, magnetic, Global Positioning System (GPS) and laser altimeter sensors which are encapsulated in a "bird", a cigar-shaped, 9 m long tube, which is kept at about 30-40 m above ground level. The separation between the rigidly mounted transmitter and receiver coils typically lies between 4 and 8 m. Modern HEM systems use multi-frequency devices operating at 4-6 frequencies, ranging from 200 Hz to 200 kHz. In this method, a sinusoidal current flowing through the transmitter coil generates a primary magnetic field, which at some distance from the transmitter coil is very close to a dipole field. The primary oscillating magnetic field induces eddy currents in the subsurface of the Earth. These currents, in turn, generate a secondary magnetic field, which is related to the Earth's resistivity distribution. The receiver coils measure the induced secondary magnetic field relative to the primary magnetic field in parts per million (ppm). Due to the inductive nature of the electromagnetic (EM) field, there is a small phase shift between the primary and secondary fields. In practice, the transmitter coil is horizontal (VMD: vertical magnetic dipole) or vertical (HMD: horizontal magnetic dipole) and the receiver coil is oriented in a maximally coupled position, resulting in horizontal coplanar (HCP), vertical coplanar (VCP), or vertical coaxial (VCA) coil systems.
The final results of frequency-domain HEM surveys are normally presented, for interpretation, in the form of resistivity maps at various frequencies or depth levels, or as resistivity depth sections along the survey lines. The vertical resistivity sections are constructed by concatenating the resistivity models of every measuring point along a survey line. Several methods have been developed to prepare these resistivity maps or depth sections, and many techniques have been developed to model measured HEM data over the past 35 years. They fall into two general groups: (1) direct transformation of the data into a generalized model such as a half-space, and (2) inversion of the data to a specific model such as a layered Earth, for which a starting model is used, followed by iterative fitting of the data in the least-squares sense. In the direct transformation approaches (e.g. the centroid depth method of Sengpiel (1988) and Siemon (2001)), upon calculation of the centroid depth and apparent resistivity values for each frequency, a vertical pseudo-section of resistivity is created by concatenating the resistivity-depth curve (smooth model) of each point along a survey line. In these schemes an approximate resistivity model is quickly acquired without the need of a starting or initial resistivity model. In the iterative inversion methods, however, the EM data are inverted starting from an initial model to get a precise resistivity model. The final outputs of these inversion techniques are highly dependent on the correct selection of the starting model. One of the most effective and accurate methods is the layered-earth inversion using the Levenberg-Marquardt (LM) technique based on the singular value decomposition (SVD). Despite the high capability of this inversion technique, it had not been used for modeling HEM data in Iran.
For this reason, this paper aims to verify the accuracy of the final inversion results of the HEM data for various choices of starting model. The required inversion computer codes were developed in the Matlab software. This inversion routine was tested on noise-free and noise-contaminated synthetic data of layered and 3-D models. The obtained results indicate that the final resolved model is in great accordance with the true model in each case. In addition, a set of real HEM data from the southern parts of Damghan city in Semnan Province has been inverted with this program, and its results have been compared with those obtained with the direct transformation methods. The results show that the SVD inversion may go down the wrong path when a good starting model is not available. They also indicate that if the Sengpiel or Siemon centroid-depth resistivity models are used as the starting model of the SVD inversion, the final resistivity models are superior to those obtained from the starting models yielded by the other direct transformation methods.
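The damped SVD update at the heart of a Levenberg-Marquardt inversion can be sketched generically; the following toy example fits a two-parameter exponential model rather than a layered-earth HEM response, and all names and values are illustrative:

```python
import numpy as np

def lm_svd_step(J, r, lam):
    """One Levenberg-Marquardt update via the SVD of the Jacobian:
    delta_m = V diag(s / (s^2 + lam^2)) U^T r (damped singular values)."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt.T @ ((s / (s ** 2 + lam ** 2)) * (U.T @ r))

def fit_exp(x, y, m0, lam=1e-2, n_iter=100):
    """Fit the toy model y = m[0] * exp(m[1] * x) by iterated LM-SVD steps."""
    m = np.array(m0, dtype=float)
    for _ in range(n_iter):
        pred = m[0] * np.exp(m[1] * x)
        r = y - pred  # residual vector
        J = np.column_stack([np.exp(m[1] * x), m[0] * x * np.exp(m[1] * x)])
        m = m + lm_svd_step(J, r, lam)
    return m

# Synthetic check: recover a = 2, b = -0.5 from noise-free data,
# starting from a nearby initial model.
x = np.linspace(0.0, 4.0, 30)
y = 2.0 * np.exp(-0.5 * x)
m = fit_exp(x, y, m0=[1.5, -0.4], lam=1e-2, n_iter=100)
```

The damping factor lam illustrates the starting-model sensitivity discussed above: with heavy damping the iteration barely moves from the initial model, while with light damping it behaves like Gauss-Newton and can wander if the start is poor.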
    Keywords: HEM, Direct transform modeling, Iterative computation methods, Centroid depth method, SVD inversion
  • K. Abbaszadeh, A. R. Mohebalhojeh, F. Ahmadi-Givi Pages 127-138
    The relationship between stratospheric conditions and evolution and the surface weather has not only revolutionized our understanding of the functions of different atmospheric layers and their dynamics, but has also brought about potential implications for weather and climate prediction in almost any region of the planet. Obtaining a better understanding of this field, particularly in Southwest Asia, is the motivation for this study. The NCEP/NCAR reanalysis data, including minimum and average daily temperature, geopotential height, precipitation rate, pressure and relative humidity at the surface or at various atmospheric levels, are used. Because between 1948 and 1957 observations of the upper atmosphere were less frequent and were made at synoptic hours different from today's main synoptic times, the reanalysis data for that period are less reliable (Kistler et al., 2001); hence this period has been omitted in the present study and only the data from 1958 to 2011 were used. The study region is the area between 25-45°N and 35-65°E, which includes Iran and extends westward to the Mediterranean Sea. The grid points are 2.5 degrees apart in both longitude and latitude. In a procedure similar to that of Thompson et al. (2002), the mean daily surface temperature and the frequency of cold events are compared in a 60-day interval following weak and strong vortex conditions. A cold event is defined as a day in which the minimum temperature falls more than one standard deviation below the January-March (JFM) climatological mean. The stratospheric polar vortex is called weak or strong when the absolute value of the daily geopotential height anomaly at 10 hPa, averaged over 60-90°N, is more than twice the JFM climatological standard deviation. Other variables such as relative humidity at 850 hPa, surface pressure, precipitation rate and temperature anomaly are also compared during these intervals.
For each day, in evaluating the temperature anomaly and its standard deviation, use is made of the climatological mean value of the month containing that day. Values of a winter severity index (Thompson et al., 2002) are calculated and compared for every winter day using 54 years of temperature data in the two cases of weak and strong polar vortex. The index is proportional to the standardized squared minimum daily temperature and is nonzero if and only if the latter temperature is below a specified threshold value. The winter severity index is averaged over all grid points within our domain of study. A randomization test is used to estimate the significance of the differences observed in the number of cold events after the weak and strong vortex conditions. It is shown that in weak vortex events, daily mean surface temperatures decrease, compared to strong vortex conditions, in most parts of the study region, although cold events become less frequent. Exceptions are two distinct locations in the east and northwest of the country, which appear to have higher daily mean temperatures following weak vortex conditions. Also, the entire region shows a decrease in surface pressure relative to the strong events, which can reach up to 30 hPa in some parts, including a region located in the southwest of Iran. Weak vortex events are followed by higher relative humidity in most parts of the region (the enhancement reaches six percent in certain areas), which may be linked to the aforementioned drop in surface pressure. The difference in precipitation rate following weak and strong vortices varies from one part of the region to another, showing both increases and decreases, with a maximum absolute value of 12.5 millimeters per month. A positive temperature-anomaly difference is identified in almost the entire region, in agreement with expectations regarding the frequency and magnitude of positive and negative temperature anomalies.
The randomization test performed reveals a confidence level of 92 to 99 percent for the observed differences between the frequency of cold events after weak and strong vortex conditions, depending on the severity of events. The winter severity index also confirms the previous findings regarding the frequency of cold events following weak and strong vortices, demonstrating a higher value on almost every winter day for the case of a strong vortex compared to its climatological mean value and to weak vortex conditions. Although the strength of the stratospheric polar vortex has a less dramatic impact on the study region than on regions like the northern parts of Asia and North America, the current study reveals patterns of weather and climate variability related to the polar vortex conditions in the region which can have important implications for long-term forecasting.
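The cold-event definition and the winter severity index used above can be sketched as follows, with hypothetical climatological values (the constant of proportionality in Thompson et al.'s index is not reproduced here):

```python
import numpy as np

def cold_events(tmin, clim_mean, clim_std):
    """A day is a cold event when the minimum temperature falls more than
    one standard deviation below the JFM climatological mean."""
    return tmin < clim_mean - clim_std

def winter_severity_index(tmin, clim_mean, clim_std):
    """Index proportional to the standardized squared minimum daily
    temperature, nonzero only below the cold-event threshold."""
    z = (tmin - clim_mean) / clim_std
    return np.where(z < -1.0, z ** 2, 0.0)

# Toy check: climatological mean 0 degC and std 5 degC, so the
# cold-event threshold is -5 degC.
tmin = np.array([2.0, -3.0, -6.0, -12.0])
events = cold_events(tmin, 0.0, 5.0)
wsi = winter_severity_index(tmin, 0.0, 5.0)
```

Averaging such an index over all grid points of a day, as described above, yields a single severity value per winter day that can be compared between weak- and strong-vortex composites.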
    Keywords: Stratospheric polar vortex, Winter severity index, Cold event, NAM index, Climate variability
  • Z. Sharie, A. A. Bidokhti Pages 139-154
    Ozone is an important chemical constituent of the atmosphere. In the stratosphere it protects the Earth's surface from harmful UV radiation, such that it is known as "good ozone", while in the lower troposphere it is a highly oxidizing pollutant known as "bad ozone". It is also a greenhouse gas that can affect the climate. Hence, the study of spatial and temporal variations of ozone in the atmosphere is important. Large-scale atmospheric flows, especially large-scale wave activities, can contribute to stratosphere-troposphere interaction, leading to the vertical exchange of ozone, in particular the transfer of good ozone to the troposphere, where it reacts chemically fast, thereby regionally changing the total atmospheric ozone. These large-scale synoptic systems often occur in winter, when the conditions for baroclinic conversion of potential energy to kinetic energy in the middle latitudes are most favorable. The wave activities at this time are also greatest, leading to meridional and vertical transfer of air. In this work, the variations of total atmospheric ozone over Iran due to upper-tropospheric wave activities in winter have been investigated for the period 2005-2013. The data are acquired from the Ozone Monitoring Instrument (OMI) satellite, while the ground ozone data are from the Geophysics Station (51°23′E, 35°44′N, 1419 m above sea level) of the University of Tehran. The OMI data have a resolution of 1 degree and have been acquired from the TOMS site; the synoptic data have been obtained from NOAA. Results show that large-scale synoptic troughs and ridges are associated with daily increases (up to 140 DU) and decreases (down to 75 DU) of total ozone, respectively. The patterns of total ozone distribution over this area are well correlated with the 300 hPa geopotential maps. The variations of total ozone cover most of the Iranian Plateau, particularly 30-35°N and 50-60°E.
Regions with the maximum variations of total ozone are also found in areas with the largest gradients of ozone concentration. The areas with the largest decrease or increase of total ozone are found at the axes of troughs and ridges, respectively, where large vertical motions are expected and the westerly component of the subtropical jet stream has the least intensity. It is also found that at the axes of troughs, the gradient of ozone with respect to geopotential height is between 0.2 and 0.8 DU/gpm, with an average of 0.5 DU/gpm. In the dominant synoptic patterns associated with variations of total ozone, vertical motions of 0.2 Pa/s and typical meridional velocities of 30 m/s are found, indicating large wave activity in the region. Also, regions of maximum ozone appear as bands with their axes often in the northeast-southwest direction, corresponding to the final stage of development of large-scale mid-latitude baroclinic disturbances. These regions of activity usually cover the whole area of Iran, from the Persian Gulf to the Caspian Sea.
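The ozone-height gradient quoted above (DU per geopotential meter) is simply the slope of total ozone against 300 hPa geopotential height; a toy least-squares estimate (synthetic values, for illustration only):

```python
import numpy as np

def ozone_height_gradient(total_ozone, geopotential_height):
    """Least-squares slope of total ozone (DU) against 300 hPa
    geopotential height (gpm), i.e. the gradient in DU/gpm."""
    slope, _ = np.polyfit(geopotential_height, total_ozone, 1)
    return slope

# Toy check: synthetic data where ozone drops 0.5 DU for every gpm
# of height increase (ridges carry less total ozone than troughs).
h = np.array([9000.0, 9050.0, 9100.0, 9150.0])
o3 = 300.0 - 0.5 * (h - 9000.0)
g = ozone_height_gradient(o3, h)
```

The magnitude of such a fitted slope is what the abstract reports as falling between 0.2 and 0.8 DU/gpm at trough axes.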
  • A. Asadi, A. Masoodian Pages 155-168
    Temperature is one of the essential elements forming a climate and plays a crucial role in the lives of flora and fauna and in human activities. Extreme temperature is one of the thermal indices in meteorological and climatological studies, and is divided into two types: extreme warm and extreme cold. Extreme warm covers temperatures much above the normal value and extreme cold covers temperatures much below the normal value. Studying extreme warm events is of prominent importance because of their social and economic effects and their impact on human health. In order to regionalize the extreme warm events of Iran, we used the Sphezari dataset. The Sphezari database has been derived from daily average temperature data of 663 synoptic and climatological stations from 1 January 1961 to 31 December 2004. The pixels of this dataset have been calculated on a 15 × 15 km² grid by the kriging method. The matrix of day-to-day temperature of Iran in the Sphezari dataset therefore has dimensions 15992 × 7187, in which the rows (15992 days) represent time and the columns (7187 pixels) represent place. We have used the normalized temperature departure (NTD) index, introduced by Fujibe et al. (2007), to identify extreme warm events in this survey. To obtain this index, the long-term average temperature of each calendar day must first be calculated; the temperatures of the 44 years are averaged to calculate the long-term mean temperature of the given day. To remove the noise present in the daily mean temperature, the nine-day running average was applied three times in order to filter out day-to-day irregularities. After carrying out these steps, the temperature departure (ΔT) of each of the 15992 days from the long-term mean of the same calendar day is computed. The absolute temperature departure then has to be standardized by a measure of the day-to-day variability.
In this way, temperature departures at different times for one geographical point, and at different locations at a particular time, can be compared with each other. As an index of day-to-day variability, the variance of ΔT in the 31 days centered on each calendar day was calculated; a nine-day running mean was then applied three times to diminish the noise. The normalized temperature departure (NTD), denoted by x*, was then calculated for each of the 7187 pixels and each of the 15992 days. Next, the spatial pattern of x* over Iran was examined and the percentage of the area of Iran with x* ≥ 2 was determined. In this way, a 15992 × 2 index was obtained, indicating the warmest days of Iran for the period 1 January 1961 to 31 December 2004. This matrix was sorted according to the mean NTD and the areal extent, and the first 264 days were selected as the sample: on these days the temperature over a large part of Iran was at least 2 standard deviations above its long-term mean (x* ≥ 2). The NTD of the 7187 pixels on the selected 264 days was classified using hierarchical agglomerative cluster analysis. The results of this research showed that, according to the extreme warm events, Iran can be classified into five distinctive regions. The most important characteristics of the extreme warm events in Iran are as follows. Most of the extreme warm events of Iran have occurred on winter and autumn days. The maximum of warm events has occurred in the west and southwest of Iran, especially in recent years, where the NTD is one degree above the other areas. The position of this region with the maximum rate of the NTD index shows that the systems creating the extreme warm events enter from the west and southwest of Iran; thus these regions are influenced more, and earlier, than the other regions. The highest spatial standard deviation also belongs to these regions.
This means that these regions have little spatial similarity from the viewpoint of the NTD index: the systems creating the extreme warm events do not affect this region uniformly, and some areas are influenced more and some less than others. The maximum temporal standard deviation belongs to the northern and western regions, which means that extreme warm events happen in these regions only in some months; the systems creating them are therefore active only in part of the year. The least temporal standard deviation belongs to the northeastern region, and the least spatial standard deviation belongs to the south and southeast regions.
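The NTD index and the areal-coverage criterion described in this abstract can be sketched as follows, with toy numbers (the Sphezari grid itself has 7187 pixels; the names and values here are illustrative):

```python
import numpy as np

def normalized_temperature_departure(temp, clim_mean, day_to_day_std):
    """Normalized temperature departure x* = (T - long-term mean) / std,
    where the std measures day-to-day variability for that calendar day."""
    return (temp - clim_mean) / day_to_day_std

def extreme_warm_fraction(x_star_field, threshold=2.0):
    """Fraction of pixels whose NTD meets or exceeds the threshold
    (the study flags days with x* >= 2 over a large area)."""
    return float(np.mean(x_star_field >= threshold))

# Toy check: a 4-pixel "day" in which half of the area exceeds x* = 2.
temps = np.array([35.0, 34.0, 28.0, 27.0])
clim = np.array([25.0, 25.0, 25.0, 25.0])
std = np.array([4.0, 4.0, 4.0, 4.0])
xs = normalized_temperature_departure(temps, clim, std)
frac = extreme_warm_fraction(xs, 2.0)
```

Ranking days by the mean of x* and by this areal fraction is what produces the sample of the warmest days, as described above.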
    Keywords: Extreme warm events, Normalized temperature departure, Cluster analysis, Regionalization
  • O. Ghaffarpasand, F. Hosseini, E. Hassanzadeh Pages 169-180
    In this study, the dispersion mechanisms of aerosols suspended in a turbulent plane channel flow are investigated using a novel numerical approach. A turbulent channel flow is simulated with a Direct Numerical Simulation (DNS) method, for which no-slip boundary conditions are assumed at the top and bottom walls, while periodicity conditions are applied on the other sides. DNS, in particular, allows a detailed analysis of the near-wall region, where most of the particle transfer mechanisms take place; hence, it is found to be the best method for a detailed analysis of the dispersion mechanisms compared to the other available methods. The simulation of the turbulent flow is continued long enough, 14000 time units, for a fully developed turbulent condition to be achieved. Aerosols with two Stokes numbers, 15 and 25, are then introduced into the simulated turbulent channel flow and tracked with a Lagrangian approach. Because of the aerosol size, the drag force dominates over the effect of Brownian motion. The initial concentration of suspended aerosols is also assumed to be considerably low, so that the simulations are conducted under the one-way coupling condition. Besides, the collisions of aerosols with the walls are assumed to be elastic. The particle tracking was continued throughout the fluid simulation time to obtain all the reliable statistics of interest. A comparison of the particle flux intensities indicates that the turbophoretic and turbulent diffusion fluxes are the dominant dispersion mechanisms; in other words, the free-flight flux can be neglected in comparison with the other fluxes in the wall region. The steady-state concentration distribution is not uniform across the channel, primarily due to the opposing actions of the turbophoretic and turbulent diffusion fluxes.
The turbulent diffusion flux removes the aerosols from the core and gathers them in the near-wall region, while the turbophoretic flux migrates the particles from the near-wall region to the wall region. It was also observed that the turbophoretic flux for smaller aerosols is more efficient than that for larger ones, while the opposite was observed for the turbulent diffusion flux: the smaller particles accumulated less in the near-wall region due to a stronger turbulent diffusion flux, and migrated more to the wall region due to a stronger turbophoretic flux. We also investigated the cross-channel fluid and particle velocity profiles. It was shown that the aerosol velocity components lag the fluid velocities in the near-wall region, but lead them in the core region; this is due to the transverse migration of aerosols across the channel.
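The drag-dominated Lagrangian tracking described above reduces, in its simplest form, to the particle equation of motion dv/dt = (u - v)/tau_p; a one-dimensional toy integration (explicit Euler, illustrative parameters, not the study's DNS coupling):

```python
import numpy as np

def track_particle(u_fluid, tau_p, dt, v0=0.0):
    """Lagrangian tracking with Stokes drag only: dv/dt = (u - v)/tau_p,
    integrated with explicit Euler (Brownian motion neglected, one-way
    coupling, as in the assumptions stated above)."""
    v = v0
    out = []
    for u in u_fluid:
        v = v + dt * (u - v) / tau_p
        out.append(v)
    return np.array(out)

# Toy check: in a uniform fluid velocity u = 1, the particle velocity
# relaxes toward 1 on the particle response time scale tau_p.
u = np.ones(2000)
v = track_particle(u, tau_p=1.0, dt=0.01, v0=0.0)
```

The response time tau_p is what the Stokes number nondimensionalizes; particles with larger tau_p adjust more slowly to the local fluid velocity, which is the origin of the lag/lead behavior across the channel noted above.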
    Keywords: Aerosols, Particles dispersion, Turbulence, DNS, Turbophoretic flux, Free-flight flux, Diffusion