Table of Contents

Journal of the Earth and Space Physics
Volume 40, Issue 2 (Summer 2014 / 1393)

  • Publication date: 1393/04/31
  • Number of titles: 12
  • Hamidreza Javan Emrooz, Morteza Eskandari-Ghadi, Noorbakhsh Mirzaei Pages 1-16
    Since there is no exact method for predicting the ground motions of future earthquakes in a region, it is reasonable to use the spectra of the displacement, velocity and acceleration functions to estimate the maximum design forces. In this study, acceleration response and design spectra are derived for rock sites in Iran. To this end, accelerograms recorded at stations located on rock sites were collected, baseline-corrected and filtered, and the adequacy of noise removal was verified by inspecting the Fourier spectrum of each record. In this way, 103 acceptable vertical and 109 acceptable horizontal records were obtained. To account for the effects of distance and magnitude, the data were grouped by the epicentral distance of the stations into near (0-35 km), intermediate (35-65 km) and far (65-100 km) classes, and by earthquake magnitude into small (4.5-5.5), medium (5.5-6.5) and large (6.5-7.5) bins.
    By computing the spectral acceleration of the data and scaling each record to its own peak ground acceleration, the 5%-damped response spectrum of each record is obtained. For the distance and magnitude bins mentioned above, the response spectra are averaged and, after smoothing, the smoothed mean and mean-plus-one-standard-deviation horizontal and vertical acceleration spectra for 5% damping are plotted. For comparison, the design spectrum for the same conditions is constructed using the attenuation relations of Ambraseys et al. (2005). Comparison of the spectra obtained in this study with the regional results of Ambraseys shows relatively good agreement between the spectra at periods longer than about 0.19 s and weak agreement at shorter periods.
    Keywords: Peak ground acceleration, Damping, Spectral acceleration, Response spectrum, Design spectrum
  • Ramin Movaghari, Gholam Javan Doloei, Mojgan Norouzi, Ahmad Sadidkhouy Pages 17-30
    The aim of this study is to investigate ambient (noise) vibrations and to process them in order to compute cross-correlation functions between broadband seismic stations in southeastern Iran. Using the discrete Fourier transform, spectral analysis of surface waves was applied to the cross-correlation functions of ambient vibrations to compute Rayleigh-wave group-velocity dispersion curves. The crustal velocity structure of southeastern Iran was then investigated by modeling these Rayleigh surface-wave dispersion curves. The dispersion curves obtained for southeastern Iran agree well with those derived from the vertical components of seismograms of the earthquake of 11 May 2013 (21 Ordibehesht 1392) north of Jask (ML = 6.3), which occurred about 160 km east of the Bandar Abbas broadband seismic station (BNDS). Moreover, the velocity structure computed from the Green's functions of ambient vibrations is in acceptable agreement with that obtained from Rayleigh-wave analysis of the same earthquake. These results represent the first successful comparative study in Iran of dispersion curves from empirical Green's functions of ambient vibrations against Rayleigh-wave dispersion curves of a real earthquake, and they confirm the importance and usefulness of continuously recording and archiving ambient vibrations at broadband seismic stations for determining the velocity structure of the Earth's crust.
    Keywords: Ambient vibrations, Cross-correlation function, Spectral analysis, Rayleigh surface-wave dispersion curve
  • Majid Mahood, Nafiseh Akbarzadeh, Hossein Hamzehloo Pages 31-43
    Strong ground motion simulation is one of the ways of studying earthquakes from accelerograms, and it plays an important role in estimating strong-motion parameters, especially for regions where no data are available. In this study, the parameters of the causative fault of the first Ahar-Varzaghan earthquake of 11 August 2012 (21 Mordad 1391), with moment magnitude 6.4, recorded by the accelerograph network of the Building and Housing Research Center, were determined through simulation with the stochastic finite-fault method. Thirty-nine accelerograms recorded at epicentral distances of up to 175 km were used. The spectral decay parameter was estimated as 0.047 for the horizontal component and 0.034 for the vertical component. The fault model obtained for this earthquake indicates a rupture with dimensions of 15 × 10 km. The results show that the rupture initiated at element (4, 3), indicating westward rupture propagation. The strike and dip of the fault plane, estimated by weighted averaging of the parameters obtained at each accelerograph station simulated with quality levels A and B, are 85° and 83°, respectively. The focal depth was estimated at 12 km and the stress drop at 60 bars. The values obtained agree well with the results reported by various institutes.
    Keywords: Strong ground motion simulation, Stochastic finite-fault method, Ahar-Varzaghan earthquake, Northwest Iran
  • Adel Majidi, Hamid Reza Siahkoohi, Ramin Nikrouz Pages 45-57
    The purpose of velocity analysis is to obtain the normal-moveout velocity as a function of zero-offset traveltime at common-midpoint locations along a seismic line. Since the accuracy of similarity-based velocity analysis depends on the method used to measure similarity, introducing a method that provides higher resolution is essential. Although the semblance coefficient, as the most common similarity measure, provides a velocity spectrum of good accuracy, the broadening of the velocity-spectrum peaks with increasing depth raises the uncertainty in determining the exact velocity. Moreover, this method does not perform well in distinguishing interfering events within a short time window, in the presence of polarity reversals, or in earth models with thin layering.
    To obtain a more accurate velocity spectrum, this paper introduces two new high-resolution similarity measures, deterministic bootstrapped differential semblance and randomized bootstrapped differential semblance, which are based on differential semblance and are weighted by the semblance coefficient. These methods use bootstrapping to rearrange the traces of a common-midpoint record so that deviations of arrival times from a flat alignment are revealed more clearly, increasing the differential-semblance coefficient. At a modest additional computational cost, these measures provide higher resolution than the conventional semblance method. The gain in resolution of the proposed methods is demonstrated by applying them to synthetic and real seismic data and comparing the results with the conventional semblance coefficient.
    Keywords: Bootstrap method, Velocity analysis, Differential semblance, Normal moveout correction
  • Borhan Tavakoli, Ali Gholami, Hamid Reza Siahkoohi Pages 59-68
    In exploration seismology, data acquisition is often irregular and non-uniform owing to natural or man-made obstacles, or simply to cost savings. We therefore need mathematical methods to interpolate and reconstruct the missing traces. Unfortunately, many current methods are unable to fill the gaps left by missing traces correctly and accurately. In recent years, compressive sensing theory has proved very effective in solving the seismic data interpolation and reconstruction problem. According to this theory, common-shot seismic records can be reconstructed and interpolated in a suitable sparse domain (for example, the curvelet domain) through an optimization problem.
    In this paper we use a family of potential functions to solve the problem within the compressive sensing framework. In addition, we introduce a method for determining the regularization parameter in such problems. We then compare the results obtained with different conventional potential functions and, finally, identify the optimal potential function, the one leading to the most accurate solutions.
    Keywords: Seismic reconstruction and interpolation, Compressive sensing, Sparsity, Curvelet transform
  • Navid Amini, Abdolrahim Javaherian Pages 69-82
    Waveform tomography of seismic data, as an efficient imaging tool, can provide high-resolution images of geological structures and has attracted the attention of researchers in recent years. Because this type of tomography uses the amplitude and waveform of the seismograms in addition to traveltime data, it offers higher resolution than traveltime tomography; however, its computational complexity has limited its practical use. This paper presents the results of applying waveform tomography to seismic data from a cross-hole experiment. Cross-hole seismic surveying is a common method for investigating the structures and anomalies of the ground between two boreholes; owing to its transmission acquisition geometry, it provides good flexibility in covering the studied ground uniformly. The application of the method is first evaluated on synthetic data. Finally, the data of an engineering cross-hole experiment are inverted with this method, and according to the results a more detailed image of the structure under investigation is obtained.
    Keywords: Waveform tomography, Cross-hole seismic, Wave equation, Frequency domain
  • Behrooz Oskooi, Safiyeh Omidian Pages 83-96
    According to geological information, the Ira and Nava structural faults, with an ESE trend and reverse mechanisms east of Damavand volcano, belong to the central Alborz tectonic system. Toward the west, these faults continue as numerous active splays that change direction markedly and disappear beneath the Damavand lavas in the Ab-Ask area. Since, according to recent studies, young tectonic regimes have been established in this region since 5 ± 2 million years ago, the dextral compressional motion has changed to sinistral, giving rise to large-scale structural activity and to the dominance of transtensional systems with a NNW trend in the Alborz. In this study, to find evidence of recent tectonic activity and to re-examine earlier views against new findings, the eastern faults of the challenging Damavand region were investigated by the magnetometric method. With more than 280 magnetic data points collected along two north-south profiles, processed and interpreted, the maps produced by the reduction-to-the-pole, first vertical derivative, upward continuation and analytic signal filters were prepared and examined. Since the first vertical derivative output is among the most reliable tools for studying tectonized and faulted areas, it was examined with particular emphasis, in combination with recent structural studies, the geological map and satellite images. The noteworthy results of interpreting and integrating these data show that, in addition to other faults parallel to the general (ESE) trend that are not shown on the maps and are buried beneath alluvium and debris as a result of recent slip movements, trends consistent with the new extensional regime governing the Alborz are also present. These faults trend NNW with normal mechanisms and have recorded the extensional displacements of the rock units. Evidence of their transtensional activity is clearly visible in Eocene and younger units, and they mostly act as links between the older large east-west trending faults.
    Keywords: Damavand volcano, Alborz, Ira and Nava faults, Vertical derivative, Magnetometry
  • Mehdi Goli Pages 97-111
    For many years, the Bouguer gravity anomaly has been defined and computed in various ways in geodetic and geophysical studies. This study reviews the classical methods and their ambiguities, as well as recent research toward establishing a standard definition of the Bouguer gravity anomaly. In the present work, an attempt is made to compute all relevant effects and components, for all geodetic and geophysical applications, to an accuracy of a few microgals. In this regard, the ambiguities in the definition of the (free-air) gravity anomaly, the geophysical indirect effect, the topographic effect and its approximating models, the atmospheric effect, and the computation of the direct and indirect topographic effects are addressed. Numerical results for these effects are computed for a mountainous test area. The computations reveal very large differences (exceeding 100 mGal) between the classical and new views of the Bouguer gravity anomaly. Moreover, the new Bouguer anomalies are less height-dependent, so the gravitational effect of topography is modeled better and removed from the gravity data more completely.
    Keywords: Geodesy, Geophysics, Gravity, Gravity anomaly, Bouguer
  • Mehdi Goli, Mehdi Najafi Alamdari Pages 113-124
    The gravitational effect of topography, as a key component, plays an important role in modeling the Earth's gravity field. The topographic effect is computed using digital terrain models given in the Gaussian geodetic coordinate system. Usually the spherical approximation of Newton's integral is used for this computation. This study is devoted to the ellipsoidal approximation of the gravitational potential of topography and its height gradient. Complete formulas are presented for computing the potential and its height gradient (the topographic effect on gravitational acceleration) using Newton's integral in the Gaussian ellipsoidal coordinate system. Numerical results for the spherical and ellipsoidal cases are compared. The results for the Iran test area show that, apart from the effect of the Bouguer terms, the spherical approximation of Newton's integral and its height gradient can be used with good accuracy in modeling the test area. The numerical differences between the ellipsoidal and spherical models are less than 50 m2s-2 for the potential and 500 microgal for the gravitational acceleration due to topography. Nevertheless, these values can be significant in centimeter-level geoid determination.
    Keywords: Gravity field, Topographic effects, Ellipsoidal approximation, Gaussian geodetic coordinate system
  • Hossein Izadi, Gholamreza Norouzi, Bijan Roshanravan, Sima Shakiba Pages 125-138
    Geophysical methods play an important role in the exploration of underground resources, particularly metallic and non-metallic minerals, hydrocarbon reservoirs and groundwater, and in geological and engineering investigations. Magnetometry is one of the geophysical methods widely used to locate magnetic anomalies produced by metallic minerals, especially iron ore deposits. In the analysis of magnetic data, a major goal is to separate magnetic anomalies at different scales, in particular to separate local anomalies from the regional anomaly. To this end, the regional anomaly can be treated as a surface, and local anomalies can be isolated by comparing the values measured on the ground with those simulated from the surface. Determining the equation of the surface that best fits the regional anomaly is therefore crucial for identifying local anomalies. Inverse methods can be used to fit this surface better, since they require less time than traditional methods and yield more accurate final solutions. Optimization-based inverse methods are particularly applicable to modeling and studying nonlinear functions, and, with advances in computer science, modern optimization methods inspired by natural processes have been developed that enjoy wider application. In this paper, inverse analysis based on the genetic algorithm is used to minimize the objective function. The objective function is the second-degree surface equation K = |A·x^2 + B·y^2 + C·x·y + D·x + E·y + F - z|, and optimizing the coefficients A, B, C, D, E and F is the subject of the present paper. The results indicate a correct separation of the regional anomaly from the total anomalies, and very promising results have been obtained in identifying local anomalies and proposing suitable points for drilling operations.
    Keywords: Inverse analysis, Local magnetic anomaly, Genetic algorithm, Minimization
  • Mahtab Rezaeian, Alireza Mohebalhojeh, Farhang Ahmadi-Givi, Mohammad Ali Nasr-Esfahani Pages 139-152
    To study and track the propagation of a wave packet, one can use the wave activity, a quantity whose budget terms are uniquely determined. The wave activity flux, as a three-dimensional extension of the Eliassen-Palm (EP) flux, is a suitable diagnostic for studying the dynamics of the North Atlantic Oscillation (NAO) and the way waves propagate.
    The aim of this study is to examine the differences in wave activity and in the radiation of waves into the Mediterranean region and southwest Asia between the positive and negative phases of the NAO in winter. The data used cover December to February of the years 1950-2011. The ensemble means of the eddy activity and its flux, the divergence of the horizontal flux, and the vertical average between the 400 and 200 hPa levels over the Mediterranean region were computed for the critical positive and negative NAO months. In addition, a rectangular-box domain extending from the 400 to the 600 hPa level was selected and the integrated flux values across its boundaries were obtained. This domain was divided into western, central and eastern subdomains, and all computations were carried out for them as well.
    The results show that in the positive NAO phase the Mediterranean region is influenced more by cyclogenesis over the northeast Atlantic and northern Europe than in the negative phase, whereas in the negative phase cyclogenesis over the Mediterranean is affected mostly by wave packets formed over the western Atlantic. In addition, in the positive phase the central Mediterranean is a stronger wave source for the regions downstream of the Mediterranean than in the negative phase. The results also indicate the dominance of anticyclonic (cyclonic) wave breaking and northward (southward) momentum transport in the positive (negative) phase of the North Atlantic Oscillation.
    Keywords: Wave activity, EP flux convergence, North Atlantic Oscillation, Critical months, Mediterranean storm track
  • Mozhgan Yaghoubi, Alireza Massah Bavani Pages 153-172
    A large part of our country lies in arid and semi-arid zones. In these regions precipitation is usually scant and irregular, with strong spatial and temporal variability that greatly affects the hydrological cycle and water resources. Understanding the hydrology of arid regions is a prerequisite for understanding these environments and recognizing their vulnerability to change. Effective water resources management is essential, and it requires a decision support system that includes modeling tools. Model selection requires recognizing the capabilities and limitations of hydrological models for the basin. In this paper, the performance of three continuous conceptual models, HBV, HEC-HMS and IHACRES, in simulating rainfall-runoff in the semi-arid Azam-e-Harat basin was evaluated. Model performance was assessed using the Nash-Sutcliffe efficiency (E), the coefficient of determination (R2), and the Bias and RMSE error measures (a computation sketch of these measures follows this entry). The results showed that HBV, with a Nash-Sutcliffe coefficient of 0.76, a coefficient of determination of 0.77, a Bias of -0.004 and an RMSE of 0.72, had the highest performance in the calibration period, and HEC-HMS, with 0.62, 0.64, 0.007 and 1.3, the lowest. In the validation period these values were 0.66, 0.67, -0.15 and 0.8 for HBV and 0.57, 0.55, -0.03 and 1.02 for HEC-HMS. It was ultimately found that HBV performs best in simulating the continuous runoff of the basin. In the parameter sensitivity analysis, UZL, MAXBAS and BETA were identified as the most sensitive parameters of the HBV model. Soil storage, Max infiltration and Tension storage were identified as the sensitive parameters of the HEC-HMS model, with a strong influence on the model output, whereas the parameters of the IHACRES model showed equal sensitivity.
    Keywords: Continuous conceptual rainfall-runoff model, Azam-e-Harat River basin, HBV, HEC-HMS, IHACRES
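    The four performance measures used in the hydrological abstract above are standard. The following is a minimal sketch of their computation, assuming equal-length arrays of observed and simulated discharge; the function name is hypothetical and this is not the authors' code.

```python
import numpy as np

def performance(obs, sim):
    """Nash-Sutcliffe E, R^2, Bias and RMSE for simulated vs. observed flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    e = sim - obs
    nse = 1.0 - np.sum(e**2) / np.sum((obs - obs.mean())**2)  # Nash-Sutcliffe
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2                     # determination
    bias = e.mean()                                           # mean error
    rmse = np.sqrt(np.mean(e**2))                             # RMS error
    return nse, r2, bias, rmse
```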
  • Pages 1-16
    Ground vibrations during an earthquake can severely damage structures and the equipment housed in them. Many factors, including earthquake magnitude, distance from the fault or epicenter, duration of strong shaking, soil condition of the site, and the frequency content of the motion, define the properties of ground motion and its amplification. A deep understanding of the effects of these factors on the response of structures and equipment is essential for a safe and economical design. Some of these effects, such as the amplitude of the motion, frequency content, and local soil conditions, are best represented through a response spectrum, which describes the maximum response of a damped single-degree-of-freedom (SDOF) oscillator with various frequencies or periods to ground motion. Earthquake ground motion is usually measured by strong-motion instruments, which record the acceleration of the ground. The recorded accelerograms, after corrections for instrument errors and baseline, are integrated to obtain the velocity and displacement time-histories. The maximum response of an SDOF system excited at its base by a time acceleration function is expressed in terms of only three parameters: (1) the natural frequency of the system, (2) the amount of damping, and (3) the acceleration time-history of the ground motion. Response spectrum analysis is the dominant contemporary method for dynamic analysis of building structures under seismic loading. The main reasons for the widespread use of this method are its relative simplicity, its inherent conservatism, and its applicability to elastic analysis of complex systems. Since the detailed characteristics of future earthquakes are not known, the majority of earthquake design spectra are obtained by weighted averaging of a set of response spectra from records with similar characteristics such as soil condition, epicentral distance, magnitude and source mechanism. The design spectrum specifies the design seismic acceleration, velocity or displacement at a given frequency or period, depending on whether it is derived from ground acceleration, velocity or displacement time histories. For practical applications, design spectra are presented as smooth curves or straight lines. Smoothing is carried out to eliminate the peaks and valleys in the response spectra that are not desirable for design, because of the difficulties encountered in determining the exact frequencies and mode shapes of structures during severe earthquakes, when the structural behavior is most likely nonlinear. Since the peak ground acceleration, velocity, and displacement of different earthquake records differ, the computed responses cannot be averaged on an absolute basis. Thus, normalization is needed to provide a standard basis for averaging. Various procedures are used to normalize the response spectra before averaging is carried out; the most commonly used is normalization with respect to peak ground motion, so that all ground motion time histories share the same peak value. Building codes commonly present design spectra in terms of acceleration amplification as a function of period on an arithmetic scale. In this study, data from accelerograph network stations deployed on rock sites of Iran, with shear-wave velocity greater than 750 m/s, equivalent to site type I in the Iranian seismic building code, are used. The SeismoSignal software is used to perform both baseline correction and filtering for all the dominant horizontal and vertical components to reduce the inherent error of the motion. Of all the ground motions, only 103 vertical and 109 dominant horizontal time histories are accepted after baseline correction and filtering. The data are classified considering different combinations of magnitude and distance ranges. The epicentral distance is classified as near field (0-35 km), medium distance (35-65 km) and far field (65-100 km), while the earthquake magnitude is classified as small (4.5-5.5), medium (5.5-6.5) and large (6.5-7.5). A sketch of the underlying response-spectrum computation follows this entry.
    Keywords: Peak ground acceleration, Time history, Damping, Response spectra, Design spectra
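    As a companion to the abstract above, here is a minimal sketch of how the 5%-damped response spectrum of one accelerogram can be computed with the Newmark average-acceleration method, a standard way of obtaining the maximum SDOF response; names and defaults are hypothetical, and this is not the authors' code.

```python
import numpy as np

def response_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration spectrum Sa(T) of accelerogram ag (m/s^2)."""
    Sa = np.zeros(len(periods))
    g, b = 0.5, 0.25                      # Newmark average-acceleration
    for k, T in enumerate(periods):
        wn = 2.0 * np.pi / T              # natural circular frequency
        c, ks = 2.0 * zeta * wn, wn**2    # damping and stiffness (m = 1)
        keff = ks + g / (b * dt) * c + 1.0 / (b * dt**2)
        u = v = umax = 0.0
        a = -ag[0]                        # initial acceleration (u = v = 0)
        for i in range(len(ag) - 1):
            dp = (-(ag[i + 1] - ag[i])
                  + (1.0 / (b * dt) + g * c / b) * v
                  + (1.0 / (2 * b) + dt * c * (g / (2 * b) - 1.0)) * a)
            du = dp / keff                # incremental displacement
            dv = g / (b * dt) * du - g / b * v + dt * (1 - g / (2 * b)) * a
            da = du / (b * dt**2) - v / (b * dt) - a / (2 * b)
            u, v, a = u + du, v + dv, a + da
            umax = max(umax, abs(u))
        Sa[k] = wn**2 * umax              # pseudo-acceleration
    return Sa
```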
  • Pages 17-30
    The mixture of natural and artificial seismic sources with random distributions causes a diffuse wave field with random amplitudes and phases, called noise. When noise is analyzed over a long time, it is found to contain surface waves spreading in all directions; thus, ambient noise contains data relevant to the surface waves. In recent years, as broadband seismic networks have spread widely around the world, diffuse wave fields have been utilized to obtain surface waves. The data of these fields are recorded in the form of seismic ambient noise and waveforms. Seismic waveforms of this kind are created as a result of multiple scattering of seismic waves in heterogeneous areas, while seismic ambient noise is caused by many types of sources such as ocean microseisms, atmospheric turbulence (Tanimoto, 1999), storms, volcanic eruptions and so on. Recent studies suggest that surface waves extracted from diffuse wave fields and seismic waveforms conform to the Green's function (Wapenaar, 2004). Although the horizontal-to-vertical spectral ratio technique of microtremor measurement has been widely applied in microzonation and site-response studies during the past two decades, the goal of such geotechnical studies is different from that of seismological noise investigations. For the first time, Campillo and Paul (2003) calculated the group velocities of Rayleigh and Love surface waves from the waveforms of 101 teleseismic earthquakes recorded in the national Mexican seismic network. Since then, the investigation of ambient noise for Green's function analysis has been continued by Shapiro and Campillo (2004, 2005); Schuster et al. (2004); Snieder (2004); Bensen et al. (2007); Wapenaar et al. (2013); Javan and Movaghari (1392). They showed that it is possible to obtain the Green's function between stations by calculating the cross-correlation function of recorded noise. The characteristics of seismic ambient noise are independent of the occurrence of earthquakes; that is why ambient noise is used so widely and provides the opportunity to do imaging without a source, or passive imaging, in order to study the crustal structure between two stations. More applications include terrestrial and solar seismology, underwater acoustics, and structural health monitoring (Larose et al., 2008). In this article, we compare the velocity structure obtained from surface waves of ambient noise with that obtained from earthquake surface waves, based on waveforms from IIEES broadband seismic stations. Broadband seismic stations are usually installed in quiet locations at some distance from significant sources of cultural noise, such as roads, railroads, and machinery. We analyze seismic noise using one year of continuous data at 50 samples/s. Using the ambient noise recorded at the Tabas, Sharakht (Qaenat), Zahedan, Chabahar, and Bandar Abbas broadband seismic stations, the Green's function of surface waves between each station pair was obtained by the cross-correlation technique (sketched after this entry), and the dispersion curve was calculated through frequency-time analysis. From this curve, a 1-D model of the velocity structure between two stations was derived. This model was compared with the one obtained from the May 11, 2013 earthquake that occurred north of Jask in southern Iran. The results show that we can use ambient noise to study the velocity structure of the crust and the upper mantle as well. It is therefore worthwhile to record ambient noise continuously at seismic stations in support of fundamental research in seismology.
    Keywords: Ambient noise, Crustal structure, Green function, Group velocity, Rayleigh wave, Southeast of Iran
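    A minimal sketch of the cross-correlation step described above, using the one-bit normalization common in ambient-noise studies to suppress earthquakes and other transients; the segment length, lag window and names are assumptions, not the authors' processing chain.

```python
import numpy as np

def noise_cross_correlation(tr1, tr2, fs, seg_len_s=86400, max_lag_s=300):
    """Stack one-bit cross-correlations of two synchronized noise records."""
    n = int(seg_len_s * fs)                    # samples per segment (1 day)
    lags = int(max_lag_s * fs)
    stack = np.zeros(2 * lags + 1)
    for i in range(min(len(tr1), len(tr2)) // n):
        a = np.sign(tr1[i * n:(i + 1) * n])    # one-bit normalization
        b = np.sign(tr2[i * n:(i + 1) * n])
        cc = np.correlate(a, b, mode="full")[n - 1 - lags: n + lags]
        stack += cc / (np.abs(cc).max() + 1e-12)
    return stack    # its symmetric part approaches the surface-wave GF
```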
  • Pages 31-43
    On 11 August 2012, the region was surprisingly struck by a shallow Mw 6.4 (USGS) earthquake with pure right-lateral strike-slip character, only about 50 km north of the North Tabriz Fault. An east-west striking surface rupture of about 20 km length was observed in the field by the Geological Survey of Iran. Only 11 minutes later, and about 6 km further NW, a second shallow event with Mw 6.2 occurred; it showed a NE-SW oriented oblique thrust mechanism (HRVD). This earthquake sequence provides an opportunity to better understand the processes of active deformation and their causes in NW Iran. In recent years, seismologists have attempted to develop quantitative models of the earthquake rupture process with the ultimate goal of predicting strong ground motion. The choice of ground-motion model has a significant impact on hazard estimates for an active seismic zone such as NW Iran. Simulation procedures provide a means of including specific information about the earthquake source, the wave propagation path between the source and the site, and the local site response in an estimation of ground motion. They also provide a means of estimating the dependence of strong ground motions on variations in specific fault parameters. Several different methods for simulating strong ground motions are available in the literature. Possible methods for generating synthetic records include (i) deterministic methods, (ii) stochastic methods, (iii) empirical Green's function methods, (iv) semi-empirical methods, (v) composite source models, and (vi) hybrid methods. The stochastic method begins with the specification of the Fourier spectrum of ground motion as a function of magnitude and distance. The acceleration spectrum is modeled by a spectrum with an ω² shape, where ω = angular frequency (Aki, 1967; Brune, 1970; Boore, 1983). Finite-fault modeling has been an important tool for the prediction of ground motion near the epicenters of large earthquakes (Hartzell, 1978; Irikura, 1983; Joyner and Boore, 1986; Heaton and Hartzell, 1986; Somerville et al., 1991; Tumarkin and Archuleta, 1994; Zeng et al., 1994; Beresnev and Atkinson, 1998). One of the most useful methods of simulating ground motion for a large earthquake is based on the simulation of a number of small earthquakes, as subfaults that make up a big fault. A large fault is divided into N subfaults and each subfault is considered as a small point source (introduced by Hartzell, 1978). The ground motions contributed by each subfault can be calculated by the stochastic point-source method and then summed at the observation point, with a proper time delay, to obtain the ground motion from the entire fault. We used the dynamic corner frequency approach, in which the corner frequency is a function of time and the rupture history controls the frequency content of the simulated time series of each subfault. In this study, we identify the source parameters of the first August 11, 2012 Ahar-Varzaghan earthquake using the stochastic finite-fault method (Motazedian and Atkinson, 2005). We estimated the causative rupture length and the downdip rupture width, from the best-defined aftershock zone and the depth distribution of these aftershocks, using the empirical relations of Wells and Coppersmith (1994), as 15 km and 10 km, respectively. The simulated results were compared with the recorded ones in both the frequency and time domains. The good agreement between the simulations and records, at both low and high frequencies, gives us confidence in our simulation model parameters for NW Iran. The estimated strike and dip of the causative fault are 85º and 83º. The fault plane was divided into 5×5 elements. Rupture propagated from element (i, j) = (4, 3), from east to west. The focal depth is approximately 12 km. We then obtained a spectral decay parameter (κ) from the slope of the smoothed Fourier amplitude spectrum of acceleration at higher frequencies (a fitting sketch follows this entry). The best-fit coefficient for the horizontal component is κ=0.0002R+0.047. The kappa factor for the vertical component, estimated by the same procedure, is κ=0.0002R+0.034. These equations indicate that κo for the horizontal component is larger than that for the vertical component. This confirms that the attenuation of higher frequencies is much less on the vertical than on the horizontal component, as the vertical component is less sensitive to the variation of the shear-wave velocity of near-surface deposits. The clear difference between the vertical and horizontal values suggests that κo contains a dependence on near-surface, site-specific attenuation effects. In the absence of three-component stations, values obtained from vertical components may be helpful for a first estimate of this parameter. We also calculated residuals for each record at each frequency, where the residual is defined as log (observed PSA) - log (predicted PSA), PSA being the horizontal component of 5%-damped pseudo-acceleration. We sorted the simulated records into two groups, A and B, according to the agreement between the Fourier spectrum and the response spectra. The simulations of quality A agree better with the observed records than those of quality B. The lowest residuals averaged over all frequencies are from 0.4 to 18.3 Hz for the quality-A simulations and from 1.2 to 18 Hz for the quality-B simulations.
    Keywords: Strong ground motion, Stochastic finite fault method, Ahar-Varzaghan earthquake, NW Iran
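    The spectral decay parameter mentioned above is classically measured in the manner of Anderson and Hough (1984): the high-frequency Fourier amplitude spectrum behaves as A(f) ≈ A0·exp(-πκf), so κ follows from the slope of ln A versus f. A minimal sketch with an assumed fitting band (not the authors' code):

```python
import numpy as np

def estimate_kappa(acc, dt, f1=10.0, f2=25.0):
    """Kappa (s) from the log-linear slope of the acceleration spectrum."""
    spec = np.abs(np.fft.rfft(acc)) * dt          # Fourier amplitude
    freq = np.fft.rfftfreq(len(acc), dt)
    band = (freq >= f1) & (freq <= f2)            # high-frequency band
    slope, _ = np.polyfit(freq[band], np.log(spec[band]), 1)
    return -slope / np.pi                         # A(f) ~ exp(-pi*kappa*f)
```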
  • Pages 45-57
    The purpose of velocity analysis is to extract the normal-moveout velocity as a function of the zero-offset traveltime at selected CDP locations along the seismic line. Since the results of velocity analysis depend on the coherency estimator, an estimator that provides high velocity resolution is essential. Even though the conventional semblance method, the most popular coherency estimator (Taner and Koehler, 1969), provides a robust velocity spectrum, its tendency to smear the velocity peaks as time increases makes the estimation of accurate velocities difficult. This estimator also has resolution limits that cause problems in some cases: it fails to distinguish interfering events in a short time window and in cases of thin bedding (Lerner and Cellis, 2007). We propose here two new coherency estimators that resolve these limitations at a minor extra cost (the conventional baseline is sketched after this entry). The estimators are based on a differential semblance (DS) coefficient (Symes and Carazzone, 1991) that is weighted by the semblance estimator. High resolution is achieved by sorting the traces in the data in a way that highlights the time shifts between adjacent traces within a time gate. The new estimators exploit the redundancy of seismic data in the common mid-point (CMP) gather to bootstrap the seismic traces in a manner that brings out the time shifts between adjacent traces, so as to discriminate time gates built using parameters that are close to the true stacking parameters. Bootstrapping is a statistical technique used to infer estimates of standard errors and confidence intervals from data samples whose statistical properties are unattainable by simple means. The first proposed estimator is the deterministic bootstrapped differential semblance (BDS), based on a deterministic sorting of the original offset traces that alternates near and far offsets to maximize the time shifts between adjacent traces. Deterministic sorting that alternates near- and far-offset traces in the time window has higher resolution than simple bootstrapping applied to the data traces. The second is the product of several BDS terms, the first term being the deterministic BDS defined above; the other terms are generated by random sortings of traces that alternate between near and far offsets in an unpredictable manner. The proposed estimators help to discriminate between trial parameters, produce a good guess of the flattening parameters, and have direct implications for retrieving velocity information from time gathers. The suggested estimators are tested on synthetic and real data examples to show the gain in resolution they yield, and they are compared with the semblance coefficient. Results show that the deterministic BDS coefficient provides increased resolution with no extra computing effort compared to the BDS coefficient. Further resolution can be achieved by involving several controlled bootstrapping outcomes in the estimator, but this comes at a computing cost nearly proportional to the number of terms in the high-resolution estimator. The high-resolution BDS proves to be an efficient tool for building velocity spectra in time-domain velocity analysis, and it provides more resolution than the conventional semblance estimator. The proposed estimators could be a good substitute for the semblance coefficient and an economical alternative to other high-resolution estimators, such as eigenvalue methods, that are expensive for dense parameter tracking in high-fold data sets.
    Keywords: Bootstrap method, Velocity analysis, Differential semblance, Normal moveout correction
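    For reference, the conventional semblance coefficient that the proposed estimators are compared against can be sketched as follows: for each trial (t0, v) pair, traces are aligned along the NMO hyperbola and the ratio of stacked to total energy is formed within a small time gate. Array names and the gate size are assumptions, not the authors' code.

```python
import numpy as np

def semblance(d, offsets, dt, t0, v, gate=5):
    """Semblance of CMP gather d[t, x] for one trial (t0, v); 0 <= S <= 1."""
    nt, nx = d.shape
    t = np.sqrt(t0**2 + (offsets / v) ** 2)       # NMO traveltime per trace
    idx = np.round(t / dt).astype(int)
    num = den = 0.0
    for k in range(-gate, gate + 1):              # sum over a small time gate
        j = np.clip(idx + k, 0, nt - 1)
        s = d[j, np.arange(nx)]                   # samples along the hyperbola
        num += s.sum() ** 2                       # stacked energy
        den += (s ** 2).sum()                     # total energy
    return num / (nx * den + 1e-12)
```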
  • Pages 59-68
    Natural signals are continuous; therefore, digitization is an essential task that enables us to process them with computing tools. According to the Nyquist/Shannon sampling theory, the sampling frequency must be at least twice the maximum frequency contained in the signal being sampled; otherwise, some high frequencies may be aliased and result in a bad reconstruction. The Nyquist sampling rate makes it possible to reconstruct the original signal exactly from its acquired samples. One way to enhance the sampling process is to use a high sampling rate, but the huge volume of data generated by this approach is a major challenge in many fields, like seismic exploration; moreover, sometimes the sampling equipment cannot handle the broad frequency band. Seismic data acquisition consists of sampling, in time and in the spatial directions, a wavefield generated by sources such as dynamite. Sampling should follow a regular pattern of receivers; nevertheless, owing to acquisition obstacles, seismic data sets are generally sampled irregularly in the spatial direction(s). This irregularity produces low-quality seismic images that contain artifacts and missing traces. One approach developed to deal with this defect is interpolation of the acquired data onto a regular grid. Through interpolation we can estimate the fully sampled desired signal. This approach can also serve as a tool for designing a sparser acquisition geometry, resulting in a more cost-effective survey. Compressive sensing (CS) theory has been developed to help us sample data below the Nyquist rate while still being able to reconstruct them by solving an optimization problem. This theory states that signals/images that can be represented sparsely in a pre-specified basis or frame can be reconstructed accurately from a small number of samples. The principle of CS is based on a Tikhonov-like regularization equation (Eq. 1) that utilizes sparsifying regularization terms. In equation (1), the CS sampling operator contains three elements: (i) a sparsifying transform C, which provides a sparse representation of signals/images in the chosen basis; (ii) a measurement matrix M, which for the seismic problem is the identity matrix; and (iii) an undersampling operator S, which is incoherent with the sparsifying operator C. The curvelet transform is a frame whose elements correlate strongly with the curve-like reflection events present in seismic data, and it can provide a sparse representation of seismic images. The undersampling scheme used in this paper is jittered undersampling, which allows the maximum gap size between known traces to be controlled; other commonly used schemes are Gaussian random and binary random. Since undersampling appears in the frequency domain as Gaussian random noise, the interpolation problem can be treated as a nonlinear de-noising problem, and curvelet frames are an optimal choice for this purpose. Sparsity regularization plays a leading role in CS theory; it has also been applied effectively to other problems like de-noising and de-convolution. A wide range of functions can impose sparsity in the regularization equation, and their performance in interpolating incomplete data depends on how well they match the properties of the initial model. Among the variety of potential functions, the l1-norm is the best known and most commonly used. Still, a comprehensive study is needed to find out which of them is most efficient for seismic image reconstruction; this gap stems from the absence of a general potential function. Here we use a general potential function that enables us to compare the efficiency of a wide range of potential functions and find the optimum one for our problem. This regularization function includes the lp-norm functions and others as special cases, which are presented in Table 1; it covers both convex and non-convex regularization functions. In this paper we use this potential function to compare the efficiency of different approaches in the CS algorithm. In solving regularization problems, a challenging part is setting the best regularization parameter. Here, owing to the redundancy of the curvelet transform, assigning a proper parameter faces some difficulties. Many approaches, like the L-curve, Stein's unbiased risk estimate (SURE), and generalized cross validation (GCV), have difficulties in finding this parameter. Therefore, we turn to nonlinear approaches such as NGCV (nonlinear GCV) and WSURE (weighted SURE). The efficiency of the mentioned methods for estimating the regularization parameter and choosing the best potential function is evaluated on a synthetic noisy seismic image. By undersampling this image and removing more than 60% of its traces, the initial/observed model is produced; this imperfect image serves as our acquired seismic data. In solving equation (1) we use a forward-backward splitting recursion algorithm (sketched after this entry). Finally, through this algorithm, we arrive at the optimum potential function and a method to estimate the regularization parameter.
    Keywords: Seismic data interpolation and reconstruction, Compressive sensing, Sparsity, Curvelet transform
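    The forward-backward splitting recursion mentioned above can be sketched as the classic iterative soft-thresholding (ISTA) loop below. The paper works in the curvelet domain; a plain 2-D FFT stands in here as the sparsifying transform so that the sketch stays self-contained, and the threshold and iteration count are assumptions.

```python
import numpy as np

def ista_interpolate(obs, mask, lam=0.05, n_iter=100):
    """Reconstruct an image from obs = mask * true (zeros at missing traces)."""
    x = obs.copy()
    for _ in range(n_iter):
        x = x - (mask * x - obs)                   # gradient (forward) step
        c = np.fft.fft2(x)                         # to the sparse domain
        mag = np.abs(c) + 1e-12
        c *= np.maximum(1.0 - lam / mag, 0.0)      # complex soft threshold
        x = np.real(np.fft.ifft2(c))               # back to the data domain
    return x
```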
  • Pages 69-82
    Seismic tomography is an imaging technique that creates maps of subsurface elastic properties, such as P/S-wave velocity, density and attenuation, based on observed seismograms and the use of sophisticated inversion algorithms. Among the different acquisition geometries, seismic cross-hole tomography has a special position in geophysical surveys, with many applications in the exploration of hydrocarbons, coal and other minerals and in engineering investigations related to construction. The main goal of these studies is to obtain precise information about the earth structure (layer structure, impedance of layers, faults and fractures) or anomalies (objects, pipes, voids). Traveltime tomography is a conventional approach that converts the traveltimes of particular phases of the waveform (such as P- or S-wave arrivals) into the corresponding parameters. It requires little computational effort, but the results suffer from low resolution. Seismic waveform tomography is an efficient tool for high-resolution imaging of complex geological structures and has been widely used by researchers in the field of exploration seismology. As waveform tomography exploits the waveforms in addition to the traveltimes, it has superior resolution compared to traveltime tomography, but its computational complexities have limited its everyday use in real-world applications. In this study we focus on the application of waveform tomography in an engineering-purpose seismic cross-hole survey. Our approach relies on solving the acoustic wave equation in the frequency domain and minimizing the residual between the calculated wavefield and the observed seismograms (a one-dimensional modeling sketch follows this entry). The frequency-domain approach permits simultaneous-source modeling and the implementation of frequency-dependent absorption mechanisms. It leads to a large system of equations, which can be solved with sparse direct solvers. A mixed-grid finite-difference scheme is used to discretize the continuous second-order hyperbolic acoustic wave equation. Although elastic modeling is more realistic and closer to the observed data, most researchers prefer the acoustic wave equation because of its lower computational cost. Instead, we pre-process the observed data to increase the comparability of the observations and the modeling. This pre-processing includes suppressing phases that cannot be explained by acoustic modeling, such as S waves or Rayleigh waves, and scaling the seismograms to account for amplitude-versus-offset differences between the acoustic and elastic cases. Waveform tomography is a highly nonlinear problem with a very rugged cost function. To overcome this nonlinearity, we solve the problem hierarchically: we start the inversion from the low-frequency components, where the cost function is smoother, and then proceed to higher components, using the lower-frequency inversion results as the initial velocity model for the higher-frequency inversion. A synthetic example is used to test the performance of the algorithm in the absence and presence of noise. As the results show, the performance of the waveform tomography algorithm degrades for noisy data, which implies the importance of denoising before inversion and/or of employing regularization. Another strategy that helps to control the noise issue is the simultaneous inversion of frequency components in different groups, as shown in the real data example. Lastly, a real cross-hole dataset acquired for engineering purposes is studied; the traveltime tomography result is used as the starting model for waveform tomography, and the results of the waveform tomography are in agreement with downhole measurements.
    Keywords: Seismic waveform tomography, Cross-hole seismic, Wave equation, Frequency domain
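    At the core of the frequency-domain approach described above is a Helmholtz solve per frequency. A minimal 1-D sketch with simple Dirichlet edges is given below; the 2-D mixed-grid operator of the paper generalizes this idea, and all names here are hypothetical.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def helmholtz_1d(c, dz, freq, src_index):
    """Solve (d2/dz2 + w^2/c^2) u = -s for a point source on a 1-D grid."""
    n = len(c)
    w = 2.0 * np.pi * freq
    main = -2.0 / dz**2 + (w / np.asarray(c, float)) ** 2
    A = diags([np.full(n - 1, 1.0 / dz**2), main, np.full(n - 1, 1.0 / dz**2)],
              [-1, 0, 1], format="csc", dtype=complex)
    s = np.zeros(n, dtype=complex)
    s[src_index] = 1.0 / dz                        # discrete point source
    return spsolve(A, -s)                          # complex wavefield u(z)
```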
  • Pages 83-96
    The structural evolution of the Alborz has been addressed by many researchers worldwide. Central Alborz is located at the bend between the eastern and the western Alborz. Damavand volcano is situated in this bend, along great and active faults such as the Mosha, North Baijan, Ask, Nava and Ira faults. The Nava and Ira faults lie on the eastern side of Damavand volcano, with an ESE trend parallel to the general trend of the faulted and fractured part of the western Alborz. Based on the geological map, both faults are active reverse structures hidden under the lavas. According to the structural studies, Ira and Nava are active and related to other unknown faults with the same trend, and also to faults with a WNW trend, which reflect the newly established transtensional mechanisms prevailing in the Alborz. Geologically, it is believed that some new tectonic events affect the structural evolution of the region, such as a new extensional system with the WNW trend activated during the last 5±2 m.y. In this study we applied geophysical methods extensively, combined with earlier structural data, to find any trace of the neo-tectonic systems in the area. Being the most applicable method here, magnetometry was used for surveying the unconformities. The field study concentrated on the faulted and fractured sedimentary bedrock of the Alborz, east of Damavand. The average elevation is about 4000 m. Because of the hard topographic conditions, we could design only two north-south profiles. Total magnetic field variations were measured using a moving proton magnetometer, with one system as the remote base. More than 286 data points were collected and processed to extract the best model out of the reduction-to-the-pole transform, the first vertical derivative, upward continuation and the analytic signal (two of these filters are sketched after this entry). The first vertical derivative model is the most reliable output for showing anomalies of tectonic signature. With this filter, the amplitude spectrum is enhanced at high wavenumbers. Another advantage of this method is its ability to detect any type of geological and subsurface block movement caused by faulting, folding or other tectonic events. The first vertical derivative proved the best model compared to the others; correlated with the geological map, it offers many important insights into minor and major faults that were hidden before. Among our findings, two NNW junction faults are remarkable; they verify the activation of the new transtensional system, showing signs of normal/strike-slip movements in the tectonics of the Eocene units. They appear to be repeated minor faults with normal movement of the hanging wall towards the SW. In general, we recognized approximately eight fault mechanisms at the subsurface whose signatures are not shown on the geological maps of the region. Some of them belong to the former reverse system, and two of them are in accordance with the new prevailing transtensional system, with a WNW trend and normal movement.
    Keywords: Alborz, Damavand volcano, Ira and Nava faults, Magnetometry, Vertical derivative
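    Two of the wavenumber-domain filters named above, the first vertical derivative and upward continuation, can be sketched as follows for a gridded total-field map T with spacings dx, dy; this is a generic illustration, not the authors' code.

```python
import numpy as np

def _radial_wavenumber(shape, dx, dy):
    ky = np.fft.fftfreq(shape[0], dy) * 2.0 * np.pi
    kx = np.fft.fftfreq(shape[1], dx) * 2.0 * np.pi
    return np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)   # |k|

def first_vertical_derivative(T, dx, dy):
    k = _radial_wavenumber(T.shape, dx, dy)
    return np.real(np.fft.ifft2(np.fft.fft2(T) * k))      # dT/dz = F^-1[|k| F[T]]

def upward_continuation(T, dx, dy, h):
    k = _radial_wavenumber(T.shape, dx, dy)                # attenuate by e^(-h|k|)
    return np.real(np.fft.ifft2(np.fft.fft2(T) * np.exp(-h * k)))
```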
  • Pages 97-111
    Generally, the gravity anomaly is the difference between the observed acceleration of the Earth's gravity and a normal value. Topography (all masses above the geoid) plays a main role in the definition of the gravity anomaly. Based on how the effect of topography is modeled, there are different models of the gravity anomaly, such as the free-air and Bouguer anomalies. The main goal of the Bouguer anomaly is the removal of the gravitational effect of all masses above the geoid (topography and atmosphere). This anomaly is widely used in exploration geophysics. In geodetic applications, in the absence of topography, the Bouguer gravity anomaly is smooth and thus more suitable for interpolation and even for stable downward continuation. On the other hand, the gravity anomaly is the difference between the real gravity at a point and the normal gravity at the corresponding point where the real and normal potentials are the same. In geodesy, the gravity disturbance is defined as the difference between the real gravity observed at a point and the normal gravity at the same point. In much of the geophysical literature, the gravity anomaly is replaced by the gravity disturbance together with a corrective term called the geophysical indirect effect. This correction is computed by applying the free-air (and usually the Bouguer) correction over the geoid-ellipsoid separation. It should instead be computed by applying only the free-air correction to the separation between the real equipotential surface and its equivalent in the normal gravity field at the gravity observation. The free-air (FA) correction is used for up/downward continuation of the normal gravity anomaly. In practice, only the linear approximation, 0.3086 mGal/m, is used, although a second-order FA correction is more realistic than the linear approximation. Note that the FA correction is not a reduction formula for downward continuation of the gravity anomaly. One of the main ambiguities in the definition of the Bouguer gravity anomaly arises from formulating the effect of topography. The gravitation of topography can be split into the Bouguer term, which is dominant, plus a minor effect, the terrain roughness. In evaluating a topographical effect, planar or spherical models of topography can be used. Many studies have shown that the planar and spherical models of topography give very different results for Bouguer anomalies, and it has also been shown that the planar topography model (in the form of an infinite Bouguer plate) yields a mathematically and physically meaningless quantity. To compute the terrain correction in geophysics, the gravitational effect of only the masses up to a distance of about 167 km (the Hayford zone) is considered. In principle, the domain of computation of the topographical effect is the whole Earth. Although the gravitational effect decreases with distance, the effect from beyond the Hayford zone is large and should be considered. The removal of the topographical masses disturbs the isostatic equilibrium of the crust; as a result, the equipotential surface can move by up to several hundred meters. The indirect topographic effect is defined as the effect on gravity due to removing the topographical masses. The indirect effect of topography (ITE) in the Bouguer gravity anomaly was first introduced by Vanicek et al. (2004). Their computations show that the numerical values of ITE can reach up to 150 mGal in mountainous areas. In most studies, however, ITE is not taken into account and only the direct topographical effect is considered. In analogy with the topographical effect, the direct and indirect effects of the atmospheric masses should be considered in the computation of the Bouguer gravity anomaly. Usually the gravity effect of the atmosphere is evaluated by the IAG formula, which considers only the direct atmospheric effect as a correction to the gravity anomaly; the indirect atmospheric effect is not discussed in this context. In this study, the method proposed by Sjoberg (2000) is recommended and applied. In order to investigate the differences between the classic and new Bouguer gravity anomalies, numerical calculations were performed in a mountainous area containing 2385 land gravity observations. The classic planar Bouguer anomalies were computed from Δg_B = g - γ + 0.3086·H - 0.1119·H + tc (in mGal, H in metres), where g and γ are the observed and normal gravity, H is the orthometric height of the point and tc is the terrain correction computed up to the Hayford zone (a computation sketch follows this entry). The new spherical Bouguer anomalies were computed by applying, in addition, the second-order free-air correction (FA), the direct topographical effect (DTE; spherical shell plus terrain roughness), the indirect topographical effect (ITE), the direct atmospheric effect (DAE) and the indirect atmospheric effect (IAE). The results indicate that there are large differences (over 100 mGal) between the classical and new Bouguer anomalies. The new Bouguer anomalies are less correlated with the terrain heights; the planar model therefore cannot completely remove the gravitational effect of topography.
    Keywords: Gravity anomaly, Geodesy, Geophysics, Bouguer, Indirect effect
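    The classic planar anomaly defined in the abstract above reduces to a one-line computation; a minimal sketch with the usual constants (free-air gradient 0.3086 mGal/m, Bouguer plate 0.1119 mGal/m for a density of 2670 kg/m^3), not the authors' code:

```python
def planar_bouguer(g_obs, gamma, H, tc=0.0):
    """Classic planar Bouguer anomaly in mGal; g_obs, gamma in mGal, H in m."""
    free_air = 0.3086 * H      # linear free-air correction
    plate = 0.1119 * H         # infinite Bouguer plate, rho = 2670 kg/m^3
    return g_obs - gamma + free_air - plate + tc
```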
  • Pages 113-124
    The topographical effect is one component of the Earth's gravity field that needs to be reliably evaluated in gravity field modeling. It can be evaluated numerically from the knowledge of a Digital Terrain Model (DTM). With satellite positioning systems such as GPS, the computation points as well as the DTMs are presented in the Gauss ellipsoidal (geodetic) coordinate system (λ, φ, h), whose components are called the ellipsoidal longitude, latitude and height, respectively. So far, the planar and spherical models of the topography have frequently been used for computing the effect of topographical masses in geodesy and geophysics. In practice, the planar model is widely used in the evaluation of the classical terrain correction. Vanicek et al. (2001) indicated that the planar model of topography (in the form of an infinite Bouguer plate) cannot be applied to the solution of the geodetic boundary value problem. Also, the spherical approximation of the topography may be insufficient for precise determination of the 1-cm geoid. Moreover, the points of interest on and above the Earth's surface, as well as the DTMs, are presented in the geodetic coordinate system. Therefore, Newton's integral and the related formulas should be evaluated in terms of the geodetic coordinates. In this study, new exact ellipsoidal formulas for the potential of topography and its vertical gradient, as well as for the effects of Helmert's second condensation of topography, are derived. Newton's integral for the computation of the gravitational potential and its vertical gradient has a weak singularity when the computation point is close to the integration point. Following Martinec (1998), the singularity is removed from the numerical integration by the Cauchy algorithm, adding and subtracting the Bouguer terms (the singularity contribution). In the ellipsoidal approximation, the Bouguer terms are computed from an ellipsoidal shell, which is sufficiently well approximated by a shell bounded by two concentric, similar ellipsoids, the so-called homoeoid. The thickness of the homoeoid equals the ellipsoidal height of the topography at the point of interest. The roughness terms, due to the deficiency of the ellipsoidal Bouguer shell, can be evaluated by direct numerical integration. The results of the spherical and ellipsoidal models are numerically investigated in Iran (where the highest peak exceeds 5000 m). The selected test area extends from 24° to 40° northern latitude and from 44° to 60° eastern longitude. The near zone of the topographical integrals extends to 4°, and the far zone from 4° to 180°. The near zone is divided into three parts: the innermost zone up to 15 arc-minutes, the middle zone up to 1°, and the outer zone from 1° to 4°. The contributions of the innermost, middle and outer zones are computed with 3", 30" and 5' DEMs, respectively, and the far-zone effect is computed by integration over a 30' DTM. The numerical results indicate that the magnitudes of the ellipsoidal corrections (the differences between the ellipsoidal and spherical solutions) are small. The main bulk of this correction is of long wavelength and is due to the Bouguer and distant-zone contributions. The ellipsoidal correction can therefore be usefully applied in regional and global applications such as regional approximation of the Earth's gravity field. Since the compilation of a 1-cm geoid requires gravity with a precision better than 10 µGal (Martinec, 1998), the ellipsoidal approximation of topography must be used in precise geoid computation, particularly in rugged mountainous areas.
    Keywords: Gravity field, Topographical effects, Ellipsoidal approximation, Geodetic coordinate system
  • Pages 125-138
    One of the most important goals in geomagnetic investigations is detecting the locations of local anomalies. The regional anomaly can be simulated as a trend surface, and local anomalies are detected by comparing the measured data with the simulated trend surface. The problem is to find the best coefficients of the trend-surface model using inverse methods based on modern optimization techniques, which are faster and more accurate than common methods. The main idea of an inverse method based on a modern optimization approach is to search for a model whose predicted values are as close as possible to the observed ones. Extensive advances in computational techniques have allowed researchers to develop new search strategies for use in optimization problems. The genetic algorithm (GA) is one of the evolutionary optimization algorithms, based on a population of chromosomes, and is widely used in engineering optimization problems. Evolutionary algorithms are developed on the basis of swarm intelligence and the social behavior of individuals in nature: the members of the population, called agents, are affected by neighboring agents and by the best agent, and in the end the optimum solution is specified as the one that optimizes the objective function. In this paper, a genetic algorithm is used to minimize the difference between the real and simulated data. In order to study a geomagnetic anomaly, a forward model should first be developed; then, using the GA-based inverse method, the regional anomaly trend surface is simulated. The objective function is defined as K = |A·x^2 + B·y^2 + C·x·y + D·x + E·y + F - z|, where x and y are the positions of the field measurement locations, determined by GPS, and z is the magnetic value at those positions; A, B, C, D, E and F are the unknown coefficients to be determined by the inverse method (a minimal GA sketch follows this entry). Following this objective function, a second-degree equation is proposed for simulating the regional anomaly trend surface. Second-degree equations are preferable to first-degree and to third- or higher-degree equations: first-degree equations do not guarantee coverage of all aspects of the data, while third- or higher-degree equations are not recommended for modeling the data because overfitting may occur. The second-degree equation is therefore the best model for simulating the regional anomaly trend surface. It is important to note that the optimization technique usually performs well with nonlinear forward models. The unknown coefficients of the regional magnetic anomaly trend surface in the Doroh area in southeastern Iran were optimized using inverse analysis, and the local anomalies were then detected. To find the locations of the local geomagnetic anomalies, the regional anomaly trend is subtracted from the total anomaly, and the potential locations for drilling investigation are then recognized. Our experiments demonstrate very promising results of the GA-based optimization technique for solving this inverse problem and detecting the local geomagnetic anomalies, validated through drilling investigations. In addition, upward-continuation and reduction-to-the-pole filters, and their combination, which are common filters for detecting local geomagnetic anomaly locations, were used to confirm our results.
    Keywords: Inverse analysis, Local magnetic anomaly, Genetic Algorithm, Minimization
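    A minimal sketch of this inversion on synthetic data follows, assuming the full quadratic trend surface with coefficients A–F as reconstructed above. SciPy's differential_evolution, an evolutionary optimizer of the same family, stands in here for the paper's genetic algorithm; all names and data are illustrative.

        import numpy as np
        from scipy.optimize import differential_evolution

        def trend(c, x, y):
            """Second-degree trend surface with the six coefficients A..F."""
            A, B, C, D, E, F = c
            return A * x**2 + B * y**2 + C * x * y + D * x + E * y + F

        def objective(c, x, y, t):
            """Sum of squared misfits between measured and simulated field."""
            return np.sum((t - trend(c, x, y)) ** 2)

        # Synthetic survey: GPS positions and total-field values (nT).
        rng = np.random.default_rng(0)
        x, y = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)
        t = 1.5 * x**2 - 0.8 * x * y + 3.0 * y + 47000 + rng.normal(0, 5, 200)

        bounds = [(-10, 10)] * 5 + [(40000, 50000)]  # search box per coefficient
        result = differential_evolution(objective, bounds, args=(x, y, t), seed=1)
        residual = t - trend(result.x, x, y)  # candidate local anomalies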
  • Pages 139-152
    The North Atlantic Oscillation (NAO) is one of the most prominent modes of low-frequency variability over the Atlantic basin in the Northern Hemisphere. In recent decades the NAO has attracted increasing scientific interest because it exerts an important influence on the regional climate and weather in the North Atlantic region and the adjacent continents. Of particular interest is the impact of the NAO on the Mediterranean storm track, through which the NAO can extend its influence to the climate far downstream, including the Middle East and southwest Asia. The problem has previously been studied using energetics, by comparing ensemble averages of the terms involved in the eddy kinetic and available potential energy, where the ensemble averages are taken separately over the critical positive and negative NAO months. Such analysis has produced specific results on the behavior of the transient eddies in the Mediterranean storm track during the two phases of the NAO. For example, the energy flux vectors indicate a stronger source in the central Mediterranean with a stronger sink in the Red Sea and Northeast Africa in the positive NAO. There is, however, a fundamental issue with any energy-based analysis, namely the non-unique way of writing the conversion and flux terms. As a more powerful diagnostic tool, the wave activity conservation law resolves the non-uniqueness issues encountered in dealing with the conversion terms. Wave activity diagnostics thus prove useful for investigating the propagation characteristics of stationary and migratory wave disturbances and their interaction with mean flows, as well as for inferring the preferred positions of emission and absorption of Rossby waves. First introduced for waves defined by perturbations with respect to the zonal mean, leading to the Eliassen–Palm (EP) diagnostics, the wave activity conservation law has since been extended to other averages as well as to more general definitions of waves and mean flows with no resort to averaging. In this study, a form of the wave activity and its flux introduced by Esler and Haynes in 1999 is used. The data are the NCEP/NCAR reanalysis data covering the years 1950–2011 for the winter months December to February. The critical months are defined on the basis of the monthly NAO index and grouped into two ensembles of 31 positive and 37 negative NAO months: a critical positive (negative) month is one whose monthly NAO index is greater (smaller) than the mean NAO index by more than one standard deviation. The wave activity and the three components of its flux are computed for all days of each winter season; the averages are then taken and composite maps prepared for the two ensembles. To investigate the net flux of wave activity into the Mediterranean region, a three-dimensional domain is selected, extending vertically from 600 to 200 hPa and horizontally from 15°W to 45°E in longitude and from 30°N to 50°N in latitude. For further analysis, this domain is divided into three equal subdomains in the west, center and east of the Mediterranean. The main results can be summarized as follows. The connection of the Mediterranean storm track to the northeast Atlantic and northern Europe is stronger in the positive phase of the NAO, whereas its connection to cyclogenesis in the western North Atlantic is stronger in the negative phase. In other words, the Mediterranean storm track receives stronger activity from the north and the west in, respectively, the positive and negative phases of the NAO. In the upper troposphere, the wave activity flux vectors indicate the dominance of anticyclonic (cyclonic) Rossby wave breaking and northward (southward) transfer of momentum in the positive (negative) phase of the NAO over the Mediterranean region. In both phases, while the west and east subdomains act as sinks (receivers) of wave activity, the central subdomain acts as a source (emitter). In accordance with the results from energetics, the central Mediterranean acts as a considerably stronger source of wave activity in the positive phase. Overall, the results of the wave activity analysis confirm those of the energetics. In particular, southwest Asia is expected to receive a stronger influence from the North Atlantic storm track via the Mediterranean in the positive phase of the NAO. The above results are based solely on the simultaneous analysis of wave activity over the whole of the North Atlantic and Mediterranean storm tracks, as well as southwest Asia, in the critical months. It remains to be seen how these results carry over to actual episodes of positive and negative NAO with proper time lags; such an analysis has the potential to lead to some seasonal forecasting capability.
    Keywords: Wave activity, EP flux, North Atlantic Oscillation, Critical months, Mediterranean storm track
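    The critical-month selection described above is simple to state in code. The sketch below uses a synthetic stand-in for the monthly DJF NAO index series (1950–2011); a real index series would be loaded in its place.

        import numpy as np

        # Synthetic stand-in for the monthly DJF NAO index:
        # 3 winter months x 62 years (1950-2011).
        rng = np.random.default_rng(0)
        nao = rng.normal(0.0, 1.0, size=3 * 62)

        threshold = nao.std()
        positive = nao > nao.mean() + threshold  # critical positive NAO months
        negative = nao < nao.mean() - threshold  # critical negative NAO months
        print(positive.sum(), "positive,", negative.sum(), "negative months")

        # Composite maps are then ensemble averages of the daily wave-activity
        # and flux fields, taken separately over the two groups of months.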
  • Pages 153-172
    Arid and semi-arid regions of the world are confronted with limited water resources. A large part of Iran is arid or semi-arid, and rainfall in such regions is typically meager, irregular and highly variable. This irregularity affects the hydrological cycle and water resources. Investigating the hydrology of arid and semi-arid regions is essential for understanding this environment and determining its vulnerability to change. Effective water resource management is clearly necessary, and it requires a decision support system that includes modeling tools. Choosing a model requires recognizing the capabilities and limitations of hydrological models at the watershed scale. In this paper, three conceptual continuous rainfall–runoff models, HBV, HEC-HMS and IHACRES, were used for runoff simulation in the semi-arid Azam Harat river basin. The HBV (Hydrologiska Byråns Vattenbalansavdelning) model was first developed at the Swedish Meteorological and Hydrological Institute in 1976; since then, runoff simulations of many basins with different hydrological conditions have been evaluated with it. The model simulates continuous runoff as well as single flood events, dividing the basin into several subbasins on the basis of altitude and vegetation. In this research we used the HBV-Light version, in which a Genetic Algorithm (GA) procedure is used to calibrate the model parameters. The HEC-HMS (Hydrologic Engineering Center – Hydrologic Modeling System) model is a newer version of the HEC-1 model and has been used for simulation of both continuous and single-event runoff; one of its main advantages is its simulation of snowmelt in the basin. In this research, the soil-moisture-accounting algorithm was chosen as the main methodology for simulating runoff based on the fluctuations of rainfall, evapotranspiration and soil moisture losses. The IHACRES model is based on a non-linear loss module and a linear unit-hydrograph module: in each time step, precipitation and temperature are converted to effective rainfall by the non-linear module and then to surface runoff by the linear unit-hydrograph module. The evaluation criteria in this study are the Nash coefficient (E), the coefficient of determination (R2), the root mean square error (RMSE) and the Bias. The results show that the HBV model, with a Nash coefficient of 0.76, a coefficient of determination of 0.77, an RMSE of 0.72 and a Bias of -0.004, and HEC-HMS, with a Nash coefficient of 0.62, a coefficient of determination of 0.64, an RMSE of 1.3 and a Bias of 0.007, have the highest and lowest efficiencies in the calibration period, respectively. In the validation period these values are 0.66, 0.67, 0.8 and -0.15 for the HBV model and 0.55, 0.57, 1.02 and -0.03 for the HEC-HMS model, respectively. Overall, the HBV model has the best performance in simulating runoff under the watershed conditions in the validation period. In the parameter sensitivity analysis, the most sensitive parameters of the HBV model were UZL, MAXBAS and BETA. In the HEC-HMS model, the soil storage, maximum infiltration and tension storage parameters had the greatest effect on the model output. The parameters of the IHACRES model showed roughly equal sensitivity.
    Keywords: Conceptual rainfall-runoff model, Azam Harat River basin, HBV, HEC-HMS, IHACRES
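    The four evaluation criteria named above are standard and can be sketched as follows; qobs and qsim stand for observed and simulated discharge series.

        import numpy as np

        def nash_sutcliffe(qobs, qsim):
            """Nash-Sutcliffe efficiency E (1 means a perfect fit)."""
            return 1.0 - np.sum((qobs - qsim) ** 2) / np.sum((qobs - qobs.mean()) ** 2)

        def r_squared(qobs, qsim):
            """Coefficient of determination (squared Pearson correlation)."""
            return np.corrcoef(qobs, qsim)[0, 1] ** 2

        def rmse(qobs, qsim):
            """Root mean square error."""
            return np.sqrt(np.mean((qobs - qsim) ** 2))

        def bias(qobs, qsim):
            """Mean error; negative values indicate underestimated flow."""
            return np.mean(qsim - qobs)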