Tuesday 22 August 2017

Jk moving average paper


Outsourcing gives a business the freedom to hand off the non-core but essential parts of its administration to companies that specialize in those areas.

1. Outsourcing frees up time and resources, allowing you to focus on the core activities of your business.

2. Outsourcing saves money on payroll costs. The cost of a bookkeeping employee includes wages, time off, payroll taxes, unemployment taxes, workers' compensation insurance, and benefits. On top of that, you have to provide work space, office furniture, office supplies, software, and a computer. Here is why you should consider outsourcing: on average, business owners spend five or more hours per week managing a bookkeeping employee. By outsourcing your accounting function, you receive the services of a professional at a lower cost.

3. Outsourcing your bookkeeping is a far more effective way to organize your finances for the tax people. The Canada Revenue Agency is more inclined to accept the opinion of a reputable bookkeeping service than the accounting judgment of an in-house employee. A professional bookkeeping firm will be able to organize the records in a way the CRA can understand. In short, bookkeeping professionals speak the CRA's language. Most business owners do not, nor do they have the time to learn it. Saving money on taxes and saving time on a potential audit is one of the biggest ways to save money as a small business owner. Most small businesses that fail go under from the weight of taxes and other costs. Outsourced bookkeeping is a real cost-reduction benefit for small businesses.

GHVA presents: Moving Your Business in the Right Direction with Virtual Assistance. You are invited to an Informational & Networking Session on how Virtual Assistants can cost-effectively take on some of the stress and workload you face every day. Featuring Janet Barclay, Organized Assistant; Laurie Meyer, Successful Office Solutions; Salma Burney, Virtual Girl Friday; and Jacquie Manore, Workload Solution Services Inc. Keynote speaker: Mr. Dave Howlett, founder and Managing Director of RealHumanBeing.org, presenting part of his How To Connect (like a Real Human Being) presentation. Howlett's seminars have left thousands of people inspired and determined to do the right thing for themselves, their companies, and their children. He will deliver a 15-minute portion of his well-known How To Connect presentation. Cost: 20.00 at the door, or pre-register and save 5.00. The price includes parking and catering by Pepperwood.

Good bookkeeping records mean having a good filing system. Without one you do not have the other. Keep your bookkeeping up to date. On the sales side, if you do not issue an invoice or sales receipt, you will not get paid. Purchases should be entered monthly or quarterly to match your GST reporting period. Do not leave them to the end of the year just because that is your GST reporting period. There are good reasons for keeping your books up to date in my article Bookkeeping... Why Bother. When paying a bill, record the date and method of payment. Note whether it was paid by cheque or by credit card, and which card it was paid on. If it was a partial payment, record the amount and date of each payment. That way the information is correct as it goes into your books.
These are simple things, but that information can be useful to have six or twelve months down the road. Always get a receipt - cash purchases are hard to claim otherwise, and yes, Tim Hortons will give you a receipt if you ask. If a receipt is so faded or crumpled that it is unreadable, then it is anyone's guess what it was, and it cannot be entered into the books. A credit card statement is not always sufficient proof. An item bought at Wal-Mart could be anything, and the fact that you bought it with your business card does not prove it is a business deduction. Make detailed deposit slips and keep a copy. Last time I checked, banks still give out free deposit books; or buy a simple notebook. Keeping a detailed record of every deposit helps match customer payments to deposits in the bank account. Use a calendar to remind yourself of due dates if you track any of the following taxes - PST, GST, Payroll, WCB, quarterly income. Making your payments on time will keep you out of the tax-arrears hole with the Canada Revenue Agency. See more about this in my article How Did I Fall So Far Behind On Business Taxes. Smart business people know that time is money, and they plan ahead. Organized records will make life easier for your bookkeeper, whether that person is you or someone you pay. If you ever deal with Revenue Canada, the business person with organized records will have a much easier time than the one without. Under section 230 of the Income Tax Act, anyone carrying on a business in Canada, and anyone who is required to pay or collect taxes, must keep books and records at their place of business or residence, in Canada, in a format that permits the assessment and payment of taxes. Most people in business realize that there is a proper way to keep books. For those who are not aware, it is important to realize that Revenue Canada has the power to require you to keep proper books.

Good bookkeeping records mean having a good filing system; without one you do not have the other. Set up a filing system you can follow, and use it. This may be the most important first step in keeping good records. A simple filing system is easy to set up and maintain.

GST quarterly filers: your GST for April/May/June 2008 is due on July 31, 2008. How do I know if I'm a quarterly filer? Pull out your GST form, called "Goods and Services Tax/Harmonized Sales Tax (GST/HST) Return for Registrants". The important information you need is in the three boxes in the top right corner of page 1. The first box shows the due date of your remittance, the second box shows your business account number, and the third box shows the reporting period. Or you may be an annual filer; the reporting-period box will tell you the date range for your remittance period. How much do I have to pay? Organize your sales receipts to calculate the GST collected on sales. As of January 1, 2008, the GST rate is 5%. Collect and organize your business receipts to calculate the GST paid on purchases. Subtract the GST on purchases from the GST on sales and remit the difference to the Receiver General. (I'm assuming that sales are greater than purchases.)
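As a rough illustration of the arithmetic just described (GST collected on sales minus GST paid on purchases, at the 5% rate in effect from January 1, 2008), here is a minimal Python sketch. The function name and the sample amounts are made up for illustration; this is not tax software and it ignores input-tax-credit rules and the exceptions mentioned below.

```python
# Minimal sketch of the GST remittance arithmetic described above.
# Assumes the 5% GST rate effective January 1, 2008; amounts are hypothetical.

GST_RATE = 0.05

def gst_remittance(sales_before_tax, purchases_before_tax):
    """Return GST collected, GST paid, and the net amount to remit."""
    gst_collected = sum(sales_before_tax) * GST_RATE   # GST charged on sales
    gst_paid = sum(purchases_before_tax) * GST_RATE    # GST paid on purchases
    return gst_collected, gst_paid, gst_collected - gst_paid

collected, paid, net = gst_remittance(
    sales_before_tax=[1200.00, 850.50, 940.25],
    purchases_before_tax=[300.00, 125.75],
)
print(f"GST collected: {collected:.2f}")
print(f"GST paid:      {paid:.2f}")
print(f"Remit to Receiver General: {net:.2f}")  # a negative value would suggest a refund
```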
If your GST on purchases is greater than your GST on sales you may have a refund, but it all depends; there are always exceptions to the rules. These days there are many ways to make your payment. You can: mail in a cheque, pay at your local bank, use online banking, use GST Netfile (cra-arc.gc.ca/menu-e.html), or use GST Telefile (cra-arc.gc.ca/menu-e.html). Send your payment on time. The Receiver General is very unforgiving of lateness and will apply penalties and interest charges compounded daily. Click this link to the Canada Revenue Agency site for everything you could want to know about GST: cra-arc.gc.ca/tax/business/topics/gst/menu-e.html

Raise your hand if you have started on your 2008 bookkeeping. Good. And the rest of you, what are you waiting for? Why wait until April 30 to see the results of this year's work? By starting now you can produce a Profit & Loss report that will show whether you are making or losing money and how you are spending it. That one report is great information that can help you further down the road. Do you use a bookkeeping service, or do you do it yourself? We wear a lot of hats while trying to run our businesses, and maybe we wear too many of them. If you struggle with the bookkeeping, and I know it is not a fun task, maybe you need to consider getting help. Most professional bookkeepers will provide outsourcing, on-the-job training in the use of the software, or help in figuring out which expense categories to use. A quote from the article Home-Based Business... Don't neglect the management bookkeeping: "Lack of managerial expertise is one of the highest causes of business failure. Take a course, seek expert advice or help, but learn basic management skills before you start." (canadabusiness.ca/servlet/ContentServer?pagename=CBSC_FE/display&c=GuideFactSheet&cid=1081945277281&lang=en) Of course, you need some kind of system for recording everything. It could be an accounting program, a spreadsheet, or paper-based. In the comment box, let me know what kind of system you use for your bookkeeping. I would really like to know. In a future article I will post my findings along with info about the various systems.

The Canadian Bookkeepers Association (CBA) is a national not-for-profit organization committed to the advancement of professional bookkeepers. Membership in the CBA provides bookkeepers with resources to succeed in an ever-changing environment. Our association creates excellence through knowledge and is growing rapidly, representing a comprehensive approach to financial management for businesses of any size. Our membership grows every day and represents bookkeepers in most provinces and territories of Canada. Our MISSION includes: Promoting, Supporting, Providing for and Encouraging Canadian bookkeepers. To promote and raise awareness of bookkeeping in Canada as a professional discipline. To support national, regional and local networking among Canadian bookkeepers. To provide information on leading procedures, education and technology that advance the industry and, likewise, Canadian bookkeeping professionals. To support and encourage responsible and accurate bookkeeping practices across Canada.
We are committed to growth that benefits our members and to bookkeeping in Canada as a professional discipline. Our goals include advances in Distance Education, Bookkeeper Certification, and Regional Chapters. We appreciate suggestions that improve this site and the Association. We listen to and value your input. We are working toward a designation for bookkeepers in Canada. The designation is "Certified Professional Bookkeeper". The Canadian Bookkeepers Association was formerly known as the Canadian Bookkeepers Alliance. The CBA began accepting members in early 2003. On February 9, 2004 the Canadian Bookkeepers Association was incorporated as a not-for-profit association. Membership growth has far exceeded what was originally anticipated. We are very pleased with the growth of the Association. We have grown through every milestone into the national not-for-profit organization we are today, with members in almost every province and territory.

In the last chapter we saw how neural networks can learn their weights and biases using the gradient descent algorithm. There was, however, a gap in our explanation: we didn't discuss how to compute the gradient of the cost function. That's quite a gap! In this chapter I'll explain a fast algorithm for computing such gradients, an algorithm known as backpropagation.
The backpropagation algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams. That paper describes several neural networks where backpropagation works far faster than earlier approaches to learning, making it possible to use neural nets to solve problems which had previously been insoluble. Today, the backpropagation algorithm is the workhorse of learning in neural networks.

This chapter is more mathematically involved than the rest of the book. If you're not crazy about mathematics you may be tempted to skip the chapter, and to treat backpropagation as a black box whose details you're willing to ignore. Why take the time to study those details? The reason, of course, is understanding. At the heart of backpropagation is an expression for the partial derivative $\partial C / \partial w$ of the cost function $C$ with respect to any weight $w$ (or bias $b$) in the network. The expression tells us how quickly the cost changes when we change the weights and biases. And while the expression is somewhat complex, it also has a beauty to it, with each element having a natural, intuitive interpretation. And so backpropagation isn't just a fast algorithm for learning. It actually gives us detailed insights into how changing the weights and biases changes the overall behaviour of the network. That's well worth studying in detail. With that said, if you want to skim the chapter, or jump straight to the next chapter, that's fine. I've written the rest of the book to be accessible even if you treat backpropagation as a black box. There are, of course, points later in the book where I refer back to results from this chapter. But at those points you should still be able to understand the main conclusions, even if you don't follow all the reasoning.

Before discussing backpropagation, let's warm up with a fast matrix-based algorithm to compute the output from a neural network. We actually already briefly saw this algorithm near the end of the last chapter, but I described it quickly, so it's worth revisiting in detail. In particular, this is a good way of getting comfortable with the notation used in backpropagation, in a familiar context. Let's begin with a notation which lets us refer to weights in the network in an unambiguous way. We'll use $w^l_{jk}$ to denote the weight for the connection from the $k^{\rm th}$ neuron in the $(l-1)^{\rm th}$ layer to the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. So, for example, the diagram below shows the weight on a connection from the fourth neuron in the second layer to the second neuron in the third layer of a network. This notation is cumbersome at first, and it does take some work to master. But with a little effort you'll find the notation becomes easy and natural. One quirk of the notation is the ordering of the $j$ and $k$ indices. You might think that it makes more sense to use $j$ to refer to the input neuron, and $k$ to the output neuron, not vice versa, as is actually done. I'll explain the reason for this quirk below. We use a similar notation for the network's biases and activations. Explicitly, we use $b^l_j$ for the bias of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. And we use $a^l_j$ for the activation of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer.
The following diagram shows examples of these notations in use. With these notations, the activation $a^l_j$ of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer is related to the activations in the $(l-1)^{\rm th}$ layer by the equation (compare Equation (4) and the surrounding discussion in the last chapter)
$$ a^l_j = \sigma\left( \sum_k w^l_{jk} a^{l-1}_k + b^l_j \right), \tag{23} $$
where the sum is over all neurons $k$ in the $(l-1)^{\rm th}$ layer. To rewrite this expression in a matrix form we define a weight matrix $w^l$ for each layer, $l$. The entries of the weight matrix $w^l$ are just the weights connecting to the $l^{\rm th}$ layer of neurons, that is, the entry in the $j^{\rm th}$ row and $k^{\rm th}$ column is $w^l_{jk}$. Similarly, for each layer $l$ we define a bias vector, $b^l$. You can probably guess how this works - the components of the bias vector are just the values $b^l_j$, one component for each neuron in the $l^{\rm th}$ layer. And finally, we define an activation vector $a^l$ whose components are the activations $a^l_j$. The last ingredient we need to rewrite (23) in a matrix form is the idea of vectorizing a function such as $\sigma$. We met vectorization briefly in the last chapter, but to recap, the idea is that we want to apply a function such as $\sigma$ to every element in a vector $v$. We use the obvious notation $\sigma(v)$ to denote this kind of elementwise application of a function. That is, the components of $\sigma(v)$ are just $\sigma(v)_j = \sigma(v_j)$. As an example, if we have the function $f(x) = x^2$ then the vectorized form of $f$ has the effect
$$ f\left( \left[ \begin{array}{c} 2 \\ 3 \end{array} \right] \right) = \left[ \begin{array}{c} f(2) \\ f(3) \end{array} \right] = \left[ \begin{array}{c} 4 \\ 9 \end{array} \right], \tag{24} $$
that is, the vectorized $f$ just squares every element of the vector. With these notations in mind, Equation (23) can be rewritten in the beautiful and compact vectorized form
$$ a^l = \sigma( w^l a^{l-1} + b^l ). \tag{25} $$
This expression gives us a much more global way of thinking about how the activations in one layer relate to the activations in the previous layer: we just apply the weight matrix to the activations, then add the bias vector, and finally apply the $\sigma$ function. By the way, it's this expression that motivates the quirk in the $w^l_{jk}$ notation mentioned earlier. If we used $j$ to index the input neuron, and $k$ to index the output neuron, then we'd need to replace the weight matrix in Equation (25) by the transpose of the weight matrix. That's a small change, but annoying, and we'd lose the easy simplicity of saying (and thinking) "apply the weight matrix to the activations". That global view is often easier and more succinct (and involves fewer indices!) than the neuron-by-neuron view we've taken up to now. Think of it as a way of escaping index hell, while remaining precise about what's going on. The expression is also useful in practice, because most matrix libraries provide fast ways of implementing matrix multiplication, vector addition, and vectorization. Indeed, the code in the last chapter made implicit use of this expression to compute the behaviour of the network. When using Equation (25) to compute $a^l$, we compute the intermediate quantity $z^l \equiv w^l a^{l-1} + b^l$ along the way. This quantity turns out to be useful enough to be worth naming: we call $z^l$ the weighted input to the neurons in layer $l$.
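As an aside, here is a minimal NumPy sketch of the vectorized feedforward rule $a^l = \sigma(w^l a^{l-1} + b^l)$, computing the weighted inputs $z^l$ along the way. The layer sizes and random parameters are placeholder assumptions for illustration, not code from the book.

```python
import numpy as np

def sigmoid(z):
    # Elementwise (vectorized) sigmoid, as in the text.
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(a, weights, biases):
    """Apply a^l = sigmoid(w^l a^{l-1} + b^l) layer by layer,
    keeping the weighted inputs z^l along the way."""
    zs, activations = [], [a]
    for w, b in zip(weights, biases):
        z = np.dot(w, a) + b      # weighted input z^l
        zs.append(z)
        a = sigmoid(z)            # activation a^l
        activations.append(a)
    return zs, activations

# Hypothetical 3-2-2 network with random parameters, just to exercise the code.
sizes = [3, 2, 2]
rng = np.random.default_rng(0)
weights = [rng.standard_normal((y, x)) for x, y in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((y, 1)) for y in sizes[1:]]
zs, activations = feedforward(rng.standard_normal((3, 1)), weights, biases)
print(activations[-1])            # network output a^L
```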
We'll make considerable use of the weighted input $z^l$ later in the chapter. Equation (25) is sometimes written in terms of the weighted input, as $a^l = \sigma(z^l)$. It's also worth noting that $z^l$ has components $z^l_j = \sum_k w^l_{jk} a^{l-1}_k + b^l_j$, that is, $z^l_j$ is just the weighted input to the activation function for neuron $j$ in layer $l$.

The goal of backpropagation is to compute the partial derivatives $\partial C / \partial w$ and $\partial C / \partial b$ of the cost function $C$ with respect to any weight $w$ or bias $b$ in the network. For backpropagation to work we need to make two main assumptions about the form of the cost function. Before stating those assumptions, though, it's useful to have an example cost function in mind. We'll use the quadratic cost function from the last chapter (c.f. Equation (6)). In the notation of the last section, the quadratic cost has the form
$$ C = \frac{1}{2n} \sum_x \| y(x) - a^L(x) \|^2, \tag{26} $$
where: $n$ is the total number of training examples; the sum is over individual training examples, $x$; $y = y(x)$ is the corresponding desired output; $L$ denotes the number of layers in the network; and $a^L = a^L(x)$ is the vector of activations output from the network when $x$ is input. Okay, so what assumptions do we need to make about our cost function, $C$, in order that backpropagation can be applied? The first assumption we need is that the cost function can be written as an average $C = \frac{1}{n} \sum_x C_x$ over cost functions $C_x$ for individual training examples, $x$. This is the case for the quadratic cost function, where the cost for a single training example is $C_x = \frac{1}{2} \| y - a^L \|^2$. This assumption will also hold for all the other cost functions we'll meet in this book. The reason we need this assumption is that what backpropagation actually lets us do is compute the partial derivatives $\partial C_x / \partial w$ and $\partial C_x / \partial b$ for a single training example. We then recover $\partial C / \partial w$ and $\partial C / \partial b$ by averaging over training examples. In fact, with this assumption in mind, we'll suppose the training example $x$ has been fixed, and drop the $x$ subscript, writing the cost $C_x$ as $C$. We'll eventually put the $x$ back in, but for now it's a notational nuisance that is better left implicit. The second assumption we make about the cost is that it can be written as a function of the outputs from the neural network. For example, the quadratic cost function satisfies this requirement, since the quadratic cost for a single training example $x$ may be written as
$$ C = \frac{1}{2} \| y - a^L \|^2 = \frac{1}{2} \sum_j (y_j - a^L_j)^2, \tag{27} $$
and thus is a function of the output activations. Of course, this cost function also depends on the desired output $y$, and you may wonder why we're not regarding the cost also as a function of $y$. Remember, though, that the input training example $x$ is fixed, and so the output $y$ is also a fixed parameter. In particular, it's not something we can modify by changing the weights and biases in any way, i.e., it's not something which the neural network learns. And so it makes sense to regard $C$ as a function of the output activations $a^L$ alone, with $y$ merely a parameter that helps define that function.

The backpropagation algorithm is based on common linear algebraic operations - things like vector addition, multiplying a vector by a matrix, and so on. But one of the operations is a little less commonly used. In particular, suppose $s$ and $t$ are two vectors of the same dimension.
Then we use $s \odot t$ to denote the elementwise product of the two vectors. Thus the components of $s \odot t$ are just $(s \odot t)_j = s_j t_j$. As an example,
$$ \left[ \begin{array}{c} 1 \\ 2 \end{array} \right] \odot \left[ \begin{array}{c} 3 \\ 4 \end{array} \right] = \left[ \begin{array}{c} 1 \times 3 \\ 2 \times 4 \end{array} \right] = \left[ \begin{array}{c} 3 \\ 8 \end{array} \right]. \tag{28} $$
This kind of elementwise multiplication is sometimes called the Hadamard product or Schur product. We'll refer to it as the Hadamard product. Good matrix libraries usually provide fast implementations of the Hadamard product, and that comes in handy when implementing backpropagation.

Backpropagation is about understanding how changing the weights and biases in a network changes the cost function. Ultimately, this means computing the partial derivatives $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$. But to compute those, we first introduce an intermediate quantity, $\delta^l_j$, which we call the error in the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. Backpropagation will give us a procedure to compute the error $\delta^l_j$, and then will relate $\delta^l_j$ to $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$. To understand how the error is defined, imagine there is a demon in our neural network. The demon sits at the $j^{\rm th}$ neuron in layer $l$. As the input to the neuron comes in, the demon messes with the neuron's operation. It adds a little change $\Delta z^l_j$ to the neuron's weighted input, so that instead of outputting $\sigma(z^l_j)$, the neuron instead outputs $\sigma(z^l_j + \Delta z^l_j)$. This change propagates through later layers in the network, finally causing the overall cost to change by an amount $\frac{\partial C}{\partial z^l_j} \Delta z^l_j$. Now, this demon is a good demon, and is trying to help you improve the cost, i.e., it is trying to find a $\Delta z^l_j$ which makes the cost smaller. Suppose $\frac{\partial C}{\partial z^l_j}$ has a large value (either positive or negative). Then the demon can lower the cost quite a bit by choosing $\Delta z^l_j$ to have the opposite sign to $\frac{\partial C}{\partial z^l_j}$. By contrast, if $\frac{\partial C}{\partial z^l_j}$ is close to zero, then the demon can't improve the cost much at all by perturbing the weighted input $z^l_j$. So far as the demon can tell, the neuron is already pretty near optimal. This is only for small changes $\Delta z^l_j$, of course; we'll assume that the demon is constrained to make such small changes. And so there's a heuristic sense in which $\frac{\partial C}{\partial z^l_j}$ is a measure of the error in the neuron. Motivated by this story, we define the error $\delta^l_j$ of neuron $j$ in layer $l$ by
$$ \delta^l_j \equiv \frac{\partial C}{\partial z^l_j}. \tag{29} $$
As per our usual conventions, we use $\delta^l$ to denote the vector of errors associated with layer $l$. Backpropagation will give us a way of computing $\delta^l$ for every layer, and then relating those errors to the quantities of real interest, $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$. You might wonder why the demon is changing the weighted input $z^l_j$. Surely it'd be more natural to imagine the demon changing the output activation $a^l_j$, with the result that we'd be using $\frac{\partial C}{\partial a^l_j}$ as our measure of error. In fact, if you do this things work out quite similarly to the discussion below, but it turns out to make the presentation of backpropagation a little more algebraically complicated. So we'll stick with $\delta^l_j = \frac{\partial C}{\partial z^l_j}$ as our measure of error. In classification problems like MNIST the term "error" is sometimes used to mean the classification failure rate. For example,
if the neural net correctly classifies 96.0 percent of the digits, then the error is 4.0 percent. Obviously, this has quite a different meaning from our $\delta$ vectors. In practice, you shouldn't have trouble telling which meaning is intended in any given usage.

Plan of attack: Backpropagation is based around four fundamental equations. Together, those equations give us a way of computing both the error $\delta^l$ and the gradient of the cost function. I state the four equations below. Be warned, though: you shouldn't expect to instantaneously assimilate the equations. Such an expectation will lead to disappointment. In fact, the backpropagation equations are so rich that understanding them well requires considerable time and patience as you gradually dig deeper into them. The good news is that such patience is repaid many times over. And so the discussion in this section is merely a beginning, helping you on the way to a thorough understanding of the equations. Here's a preview of the ways we'll delve more deeply into the equations later in the chapter: I'll give a short proof of the equations, which helps explain why they are true; we'll restate the equations in algorithmic form as pseudocode, and see how the pseudocode can be implemented as real, running Python code in the last section of the chapter; and we'll develop an intuitive picture of what the backpropagation equations mean, and how someone might discover them from scratch. Along the way we'll return repeatedly to the four fundamental equations, and as you deepen your understanding, those equations will come to seem comfortable and, perhaps, even beautiful and natural.

An equation for the error in the output layer, $\delta^L$: The components of $\delta^L$ are given by
$$ \delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j). \tag{BP1} $$
This is a very natural expression. The first term on the right, $\partial C / \partial a^L_j$, just measures how fast the cost is changing as a function of the $j^{\rm th}$ output activation. If, for example, $C$ doesn't depend much on a particular output neuron, $j$, then $\delta^L_j$ will be small, which is what we'd expect. The second term on the right, $\sigma'(z^L_j)$, measures how fast the activation function $\sigma$ is changing at $z^L_j$. Notice that everything in (BP1) is easily computed. In particular, we compute $z^L_j$ while computing the behaviour of the network, and it's only a small additional overhead to compute $\sigma'(z^L_j)$. The exact form of $\partial C / \partial a^L_j$ will, of course, depend on the form of the cost function. However, provided the cost function is known, there should be little trouble computing $\partial C / \partial a^L_j$. For example, if we're using the quadratic cost function then $C = \frac{1}{2} \sum_j (y_j - a^L_j)^2$, and so $\partial C / \partial a^L_j = (a^L_j - y_j)$, which obviously is easily computable. Equation (BP1) is a componentwise expression for $\delta^L$. It's a perfectly good expression, but not the matrix-based form we want for backpropagation. However, it's easy to rewrite the equation in a matrix-based form, as
$$ \delta^L = \nabla_a C \odot \sigma'(z^L). \tag{BP1a} $$
Here, $\nabla_a C$ is defined to be a vector whose components are the partial derivatives $\partial C / \partial a^L_j$. You can think of $\nabla_a C$ as expressing the rate of change of $C$ with respect to the output activations.
It's easy to see that Equations (BP1a) and (BP1) are equivalent, and for that reason from now on we'll use (BP1) interchangeably to refer to both equations. As an example, in the case of the quadratic cost we have $\nabla_a C = (a^L - y)$, and so the fully matrix-based form of (BP1) becomes
$$ \delta^L = (a^L - y) \odot \sigma'(z^L). \tag{30} $$
As you can see, everything in this expression has a nice vector form, and is easily computed using a library such as Numpy.

An equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$: In particular
$$ \delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l), \tag{BP2} $$
where $(w^{l+1})^T$ is the transpose of the weight matrix $w^{l+1}$ for the $(l+1)^{\rm th}$ layer. This equation appears complicated, but each element has a nice interpretation. Suppose we know the error $\delta^{l+1}$ at the $(l+1)^{\rm th}$ layer. When we apply the transpose weight matrix, $(w^{l+1})^T$, we can think intuitively of this as moving the error backward through the network, giving us some sort of measure of the error at the output of the $l^{\rm th}$ layer. We then take the Hadamard product $\odot \, \sigma'(z^l)$. This moves the error backward through the activation function in layer $l$, giving us the error $\delta^l$ in the weighted input to layer $l$. By combining (BP2) with (BP1) we can compute the error $\delta^l$ for any layer in the network. We start by using (BP1) to compute $\delta^L$, then apply Equation (BP2) to compute $\delta^{L-1}$, then Equation (BP2) again to compute $\delta^{L-2}$, and so on, all the way back through the network.

An equation for the rate of change of the cost with respect to any bias in the network: In particular:
$$ \frac{\partial C}{\partial b^l_j} = \delta^l_j. \tag{BP3} $$
That is, the error $\delta^l_j$ is exactly equal to the rate of change $\partial C / \partial b^l_j$. This is great news, since (BP1) and (BP2) have already told us how to compute $\delta^l_j$. We can rewrite (BP3) in shorthand as
$$ \frac{\partial C}{\partial b} = \delta, \tag{31} $$
where it is understood that $\delta$ is being evaluated at the same neuron as the bias $b$.

An equation for the rate of change of the cost with respect to any weight in the network: In particular:
$$ \frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j. \tag{BP4} $$
This tells us how to compute the partial derivatives $\partial C / \partial w^l_{jk}$ in terms of the quantities $\delta^l$ and $a^{l-1}$, which we already know how to compute. The equation can be rewritten in a less index-heavy notation as
$$ \frac{\partial C}{\partial w} = a_{\rm in} \, \delta_{\rm out}, \tag{32} $$
where it's understood that $a_{\rm in}$ is the activation of the neuron input to the weight $w$, and $\delta_{\rm out}$ is the error of the neuron output from the weight $w$. Zooming in to look at just the weight $w$, and the two neurons connected by that weight, we can depict this as: A nice consequence of Equation (32) is that when the activation $a_{\rm in}$ is small, $a_{\rm in} \approx 0$, the gradient term $\partial C / \partial w$ will also tend to be small. In this case, we'll say the weight learns slowly, meaning that it's not changing much during gradient descent.
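Before drawing out the consequences of these equations, here is a compact NumPy sketch of how (BP1)-(BP4) might be applied for the quadratic cost, following the conventions of the earlier feedforward snippet. It is a sketch under those assumptions, not the book's own listing; the function name backprop_equations is invented for illustration.

```python
import numpy as np

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def sigmoid_prime(z): return sigmoid(z) * (1 - sigmoid(z))

def backprop_equations(zs, activations, weights, y):
    """Apply (BP1)-(BP4) for the quadratic cost, one layer at a time.
    `zs` and `activations` come from a forward pass like the earlier sketch."""
    # (BP1a) with nabla_a C = (a^L - y):  delta^L = (a^L - y) * sigma'(z^L)
    delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
    nabla_b = [None] * len(weights)
    nabla_w = [None] * len(weights)
    nabla_b[-1] = delta                                     # (BP3)
    nabla_w[-1] = np.dot(delta, activations[-2].T)          # (BP4)
    for l in range(2, len(weights) + 1):
        # (BP2): delta^l = ((w^{l+1})^T delta^{l+1}) * sigma'(z^l)
        delta = np.dot(weights[-l + 1].T, delta) * sigmoid_prime(zs[-l])
        nabla_b[-l] = delta                                 # (BP3)
        nabla_w[-l] = np.dot(delta, activations[-l - 1].T)  # (BP4)
    return nabla_b, nabla_w
```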
In other words, one consequence of (BP4) is that weights output from low-activation neurons learn slowly. There are other insights along these lines which can be obtained from (BP1)-(BP4). Let's start by looking at the output layer. Consider the term $\sigma'(z^L_j)$ in (BP1). Recall from the graph of the sigmoid function in the last chapter that the $\sigma$ function becomes very flat when $\sigma(z^L_j)$ is approximately 0 or 1. When this occurs we will have $\sigma'(z^L_j) \approx 0$. And so the lesson is that a weight in the final layer will learn slowly if the output neuron is either low activation ($\approx 0$) or high activation ($\approx 1$). In this case it's common to say the output neuron has saturated and, as a result, the weight has stopped learning (or is learning slowly). Similar remarks hold also for the biases of output neurons. We can obtain similar insights for earlier layers. In particular, note the $\sigma'(z^l)$ term in (BP2). This means that $\delta^l_j$ is likely to get small if the neuron is near saturation. And this, in turn, means that any weights input to a saturated neuron will learn slowly. (This reasoning won't hold if $(w^{l+1})^T \delta^{l+1}$ has large enough entries to compensate for the smallness of $\sigma'(z^l_j)$. But I'm speaking of the general tendency.) Summing up, we've learnt that a weight will learn slowly if either the input neuron is low-activation, or if the output neuron has saturated, i.e., is either high- or low-activation. None of these observations is too greatly surprising. Still, they help improve our mental model of what's going on as a neural network learns. Furthermore, we can turn this type of reasoning around. The four fundamental equations turn out to hold for any activation function, not just the standard sigmoid function (that's because, as we'll see in a moment, the proofs don't use any special properties of $\sigma$). And so we can use these equations to design activation functions which have particular desired learning properties. As an example to give you the idea, suppose we were to choose a (non-sigmoid) activation function $\sigma$ so that $\sigma'$ is always positive, and never gets close to zero. That would prevent the slow-down of learning that occurs when ordinary sigmoid neurons saturate. Later in the book we'll see examples where this kind of modification is made to the activation function. Keeping the four equations (BP1)-(BP4) in mind can help explain why such modifications are tried, and what impact they can have.

Alternate presentation of the equations of backpropagation: I've stated the equations of backpropagation (notably (BP1) and (BP2)) using the Hadamard product. This presentation may be disconcerting if you're unused to the Hadamard product. There's an alternative approach, based on conventional matrix multiplication, which some readers may find enlightening. (1) Show that (BP1) may be rewritten as
$$ \delta^L = \Sigma'(z^L) \nabla_a C, \tag{33} $$
where $\Sigma'(z^L)$ is a square matrix whose diagonal entries are the values $\sigma'(z^L_j)$, and whose off-diagonal entries are zero. Note that this matrix acts on $\nabla_a C$ by conventional matrix multiplication.
(2) Show that (BP2) may be rewritten as
$$ \delta^l = \Sigma'(z^l) (w^{l+1})^T \delta^{l+1}. \tag{34} $$
(3) By combining observations (1) and (2) show that
$$ \delta^l = \Sigma'(z^l) (w^{l+1})^T \ldots \Sigma'(z^{L-1}) (w^L)^T \Sigma'(z^L) \nabla_a C. \tag{35} $$
For readers comfortable with matrix multiplication this equation may be easier to understand than (BP1) and (BP2). The reason I've focused on (BP1) and (BP2) is that that approach turns out to be faster to implement numerically.

We'll now prove the four fundamental equations (BP1)-(BP4). All four are consequences of the chain rule from multivariable calculus. If you're comfortable with the chain rule, then I strongly encourage you to attempt the derivation yourself before reading on. Let's begin with Equation (BP1), which gives an expression for the output error, $\delta^L$. To prove this equation, recall that by definition
$$ \delta^L_j = \frac{\partial C}{\partial z^L_j}. \tag{36} $$
Applying the chain rule, we can re-express the partial derivative above in terms of partial derivatives with respect to the output activations,
$$ \delta^L_j = \sum_k \frac{\partial C}{\partial a^L_k} \frac{\partial a^L_k}{\partial z^L_j}, \tag{37} $$
where the sum is over all neurons $k$ in the output layer. Of course, the output activation $a^L_k$ of the $k^{\rm th}$ neuron depends only on the weighted input $z^L_j$ for the $j^{\rm th}$ neuron when $k = j$. And so $\partial a^L_k / \partial z^L_j$ vanishes when $k \neq j$. As a result we can simplify the previous equation to
$$ \delta^L_j = \frac{\partial C}{\partial a^L_j} \frac{\partial a^L_j}{\partial z^L_j}. \tag{38} $$
Recalling that $a^L_j = \sigma(z^L_j)$, the second term on the right can be written as $\sigma'(z^L_j)$, and the equation becomes
$$ \delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j), \tag{39} $$
which is just (BP1) in component form.

Next, we'll prove (BP2), which gives an equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$. To do this, we want to rewrite $\delta^l_j = \partial C / \partial z^l_j$ in terms of $\delta^{l+1}_k = \partial C / \partial z^{l+1}_k$. We can do this using the chain rule,
$$ \delta^l_j = \frac{\partial C}{\partial z^l_j} = \sum_k \frac{\partial C}{\partial z^{l+1}_k} \frac{\partial z^{l+1}_k}{\partial z^l_j} = \sum_k \frac{\partial z^{l+1}_k}{\partial z^l_j} \delta^{l+1}_k, \tag{40-42} $$
where in the last step we have interchanged the two terms on the right-hand side, and substituted the definition of $\delta^{l+1}_k$. To evaluate the first term in that last sum, note that
$$ z^{l+1}_k = \sum_j w^{l+1}_{kj} a^l_j + b^{l+1}_k = \sum_j w^{l+1}_{kj} \sigma(z^l_j) + b^{l+1}_k. \tag{43} $$
Differentiating, we obtain
$$ \frac{\partial z^{l+1}_k}{\partial z^l_j} = w^{l+1}_{kj} \sigma'(z^l_j). \tag{44} $$
Substituting back into (42) we obtain
$$ \delta^l_j = \sum_k w^{l+1}_{kj} \delta^{l+1}_k \sigma'(z^l_j). \tag{45} $$
This is just (BP2) written in component form. The final two equations we want to prove are (BP3) and (BP4). These also follow from the chain rule, in a manner similar to the proofs of the two equations above. I leave them to you as an exercise. That completes the proof of the four fundamental equations of backpropagation. The proof may seem complicated. But it's really just the outcome of carefully applying the chain rule. A little less succinctly, we can think of backpropagation as a way of computing the gradient of the cost function by systematically applying the chain rule from multi-variable calculus.
That's all there really is to backpropagation - the rest is details. The backpropagation equations provide us with a way of computing the gradient of the cost function. Let's explicitly write this out in the form of an algorithm:
1. Input $x$: Set the corresponding activation $a^1$ for the input layer.
2. Feedforward: For each $l = 2, 3, \ldots, L$ compute $z^l = w^l a^{l-1} + b^l$ and $a^l = \sigma(z^l)$.
3. Output error $\delta^L$: Compute the vector $\delta^L = \nabla_a C \odot \sigma'(z^L)$.
4. Backpropagate the error: For each $l = L-1, L-2, \ldots, 2$ compute $\delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l)$.
5. Output: The gradient of the cost function is given by $\partial C / \partial w^l_{jk} = a^{l-1}_k \delta^l_j$ and $\partial C / \partial b^l_j = \delta^l_j$.

Examining the algorithm you can see why it's called backpropagation. We compute the error vectors $\delta^l$ backward, starting from the final layer. It may seem peculiar that we're going through the network backward. But if you think about the proof of backpropagation, the backward movement is a consequence of the fact that the cost is a function of outputs from the network. To understand how the cost varies with earlier weights and biases we need to repeatedly apply the chain rule, working backward through the layers to obtain usable expressions.

Exercises: Backpropagation with a single modified neuron. Suppose we modify a single neuron in a feedforward network so that the output from the neuron is given by $f(\sum_j w_j x_j + b)$, where $f$ is some function other than the sigmoid. How should we modify the backpropagation algorithm in this case? Backpropagation with linear neurons. Suppose we replace the usual non-linear $\sigma$ function with $\sigma(z) = z$ throughout the network. Rewrite the backpropagation algorithm for this case.

As I've described it above, the backpropagation algorithm computes the gradient of the cost function for a single training example, $C = C_x$. In practice, it's common to combine backpropagation with a learning algorithm such as stochastic gradient descent, in which we compute the gradient for many training examples. In particular, given a mini-batch of $m$ training examples, the following algorithm applies a gradient descent learning step based on that mini-batch:
1. Input a set of training examples.
2. For each training example $x$: Set the corresponding input activation $a^{x,1}$, and perform the following steps: Feedforward: For each $l = 2, 3, \ldots, L$ compute $z^{x,l} = w^l a^{x,l-1} + b^l$ and $a^{x,l} = \sigma(z^{x,l})$. Output error $\delta^{x,L}$: Compute the vector $\delta^{x,L} = \nabla_a C_x \odot \sigma'(z^{x,L})$. Backpropagate the error: For each $l = L-1, L-2, \ldots, 2$ compute $\delta^{x,l} = ((w^{l+1})^T \delta^{x,l+1}) \odot \sigma'(z^{x,l})$.
3. Gradient descent: For each $l = L, L-1, \ldots, 2$ update the weights according to the rule $w^l \rightarrow w^l - \frac{\eta}{m} \sum_x \delta^{x,l} (a^{x,l-1})^T$, and the biases according to the rule $b^l \rightarrow b^l - \frac{\eta}{m} \sum_x \delta^{x,l}$.

Of course, to implement stochastic gradient descent in practice you also need an outer loop generating mini-batches of training examples, and an outer loop stepping through multiple epochs of training. I've omitted those for simplicity. Having understood backpropagation in the abstract, we can now understand the code used in the last chapter to implement backpropagation. Recall from that chapter that the code was contained in the update_mini_batch and backprop methods of the Network class. The code for these methods is a direct translation of the algorithm described above. In particular, the update_mini_batch method updates the Network's weights and biases by computing the gradient for the current mini_batch of training examples. Most of the work is done by the line delta_nabla_b, delta_nabla_w = self.backprop(x, y), which uses the backprop method to figure out the partial derivatives $\partial C_x / \partial b^l_j$ and $\partial C_x / \partial w^l_{jk}$. The backprop method follows the algorithm in the last section closely.
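Since the code listing itself did not survive in this copy of the page, here is a rough sketch of what an update_mini_batch / backprop pair along the lines just described might look like. It reuses the hypothetical feedforward and backprop_equations sketches shown earlier in this post, and it is not the book's actual network.py code, which the next paragraph discusses.

```python
import numpy as np

def update_mini_batch(weights, biases, mini_batch, eta):
    """Apply one gradient-descent step using backprop over a mini-batch.
    `mini_batch` is a list of (x, y) pairs; `eta` is the learning rate."""
    nabla_b = [np.zeros(b.shape) for b in biases]
    nabla_w = [np.zeros(w.shape) for w in weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = backprop(weights, biases, x, y)
        nabla_b = [nb + dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw + dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    m = len(mini_batch)
    weights = [w - (eta / m) * nw for w, nw in zip(weights, nabla_w)]
    biases = [b - (eta / m) * nb for b, nb in zip(biases, nabla_b)]
    return weights, biases

def backprop(weights, biases, x, y):
    """Forward pass, then apply (BP1)-(BP4) to get per-example gradients."""
    zs, activations = feedforward(x, weights, biases)        # sketch above
    return backprop_equations(zs, activations, weights, y)   # sketch above
```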
There is one small change - we use a slightly different approach to indexing the layers. This change is made to take advantage of a feature of Python, namely the use of negative list indices to count backward from the end of a list, so, e.g., l[-3] is the third-last entry in a list l. The code for backprop is below, together with a few helper functions, which are used to compute the $\sigma$ function, the derivative $\sigma'$, and the derivative of the cost function. With these inclusions you should be able to understand the code in a self-contained way. If something's tripping you up, you may find it helpful to consult the original description (and complete listing) of the code.

Problem: Fully matrix-based approach to backpropagation over a mini-batch. Our implementation of stochastic gradient descent loops over training examples in a mini-batch. It's possible to modify the backpropagation algorithm so that it computes the gradients for all training examples in a mini-batch simultaneously. The idea is that instead of beginning with a single input vector, $x$, we can begin with a matrix $X = [x_1 \, x_2 \, \ldots \, x_m]$ whose columns are the vectors in the mini-batch. We forward-propagate by multiplying by the weight matrices, adding a suitable matrix for the bias terms, and applying the sigmoid function everywhere. We backpropagate along similar lines. Explicitly write out pseudocode for this approach to the backpropagation algorithm. Modify network.py so that it uses this fully matrix-based approach. The advantage of this approach is that it takes full advantage of modern libraries for linear algebra. As a result it can be quite a bit faster than looping over the mini-batch. (On my laptop, for example, the speedup is about a factor of two when run on MNIST classification problems like those we considered in the last chapter.) In practice, all serious libraries for backpropagation use this fully matrix-based approach or some variant.

In what sense is backpropagation a fast algorithm? To answer this question, let's consider another approach to computing the gradient. Imagine it's the early days of neural networks research. Maybe it's the 1950s or 1960s, and you're the first person in the world to think of using gradient descent to learn! But to make the idea work you need a way of computing the gradient of the cost function. You think back to your knowledge of calculus, and decide to see if you can use the chain rule to compute the gradient. But after playing around a bit, the algebra looks complicated, and you get discouraged. So you try to find another approach. You decide to regard the cost as a function of the weights $C = C(w)$ alone (we'll get back to the biases in a moment). You number the weights $w_1, w_2, \ldots$, and want to compute $\partial C / \partial w_j$ for some particular weight $w_j$. An obvious way of doing that is to use the approximation
$$ \frac{\partial C}{\partial w_j} \approx \frac{C(w + \epsilon e_j) - C(w)}{\epsilon}, \tag{46} $$
where $\epsilon > 0$ is a small positive number, and $e_j$ is the unit vector in the $j^{\rm th}$ direction. In other words, we can estimate $\partial C / \partial w_j$ by computing the cost $C$ for two slightly different values of $w_j$, and then applying Equation (46). The same idea will let us compute the partial derivatives $\partial C / \partial b$ with respect to the biases. This approach looks very promising. It's simple conceptually, and extremely easy to implement, using just a few lines of code.
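To make the finite-difference idea concrete, here is a minimal sketch of estimating $\partial C / \partial w_j$ via Equation (46). The toy one-neuron quadratic cost and the choice of $\epsilon$ are illustrative assumptions only, not anything from the book.

```python
import numpy as np

def numerical_gradient(cost, w, eps=1e-5):
    """Estimate dC/dw_j as (C(w + eps*e_j) - C(w)) / eps for every weight."""
    grad = np.zeros_like(w)
    base = cost(w)
    for j in range(w.size):
        w_step = w.copy()
        w_step.flat[j] += eps          # nudge a single weight
        grad.flat[j] = (cost(w_step) - base) / eps
    return grad

# Toy example: quadratic cost of a single linear neuron, C(w) = 0.5*(y - w.x)^2.
x = np.array([0.5, -1.2, 3.0])
y = 2.0
cost = lambda w: 0.5 * (y - np.dot(w, x)) ** 2
w = np.array([0.1, 0.4, -0.2])
print(numerical_gradient(cost, w))     # one cost evaluation per weight, plus one baseline
print(-(y - np.dot(w, x)) * x)         # analytic gradient, for comparison
```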
Certainly, it looks much more promising than the idea of using the chain rule to compute the gradient! Unfortunately, while this approach appears promising, when you implement the code it turns out to be extremely slow. To understand why, imagine we have a million weights in our network. Then for each distinct weight $w_j$ we need to compute $C(w + \epsilon e_j)$ in order to compute $\partial C / \partial w_j$. That means that to compute the gradient we need to compute the cost function a million different times, requiring a million forward passes through the network (per training example). We need to compute $C(w)$ as well, so that's a total of a million and one passes through the network. What's clever about backpropagation is that it enables us to simultaneously compute all the partial derivatives $\partial C / \partial w_j$ using just one forward pass through the network, followed by one backward pass through the network. Roughly speaking, the computational cost of the backward pass is about the same as the forward pass. (This should be plausible, but it requires some analysis to make a careful statement. It's plausible because the dominant computational cost in the forward pass is multiplying by the weight matrices, while in the backward pass it's multiplying by the transposes of the weight matrices. These operations obviously have similar computational cost.) And so the total cost of backpropagation is roughly the same as making just two forward passes through the network. Compare that to the million and one forward passes we needed for the approach based on (46). And so even though backpropagation appears superficially more complex than the approach based on (46), it's actually much, much faster. This speedup was first fully appreciated in 1986, and it greatly expanded the range of problems that neural networks could solve. That, in turn, caused a rush of people using neural networks. Of course, backpropagation is not a panacea. Even in the late 1980s people ran up against limits, especially when attempting to use backpropagation to train deep neural networks, i.e., networks with many hidden layers. Later in the book we'll see how modern computers and some clever new ideas now make it possible to use backpropagation to train such deep neural networks.

As I've explained it, backpropagation presents two mysteries. First, what's the algorithm really doing? We've developed a picture of the error being backpropagated from the output. But can we go any deeper, and build up more intuition about what is going on when we do all these matrix and vector multiplications? The second mystery is how someone could ever have discovered backpropagation in the first place. It's one thing to follow the steps in an algorithm, or even to follow the proof that the algorithm works. But that doesn't mean you understand the problem so well that you could have discovered the algorithm in the first place. Is there a plausible line of reasoning that could have led you to discover the backpropagation algorithm? In this section I'll address both these mysteries.
To improve our intuition about what the algorithm is doing, let's imagine that we've made a small change $\Delta w^l_{jk}$ to some weight in the network, $w^l_{jk}$. That change in weight will cause a change in the output activation from the corresponding neuron. That, in turn, will cause a change in all the activations in the next layer. Those changes will in turn cause changes in the next layer, and then the next, and so on all the way through to causing a change in the final layer, and then in the cost function. The change $\Delta C$ in the cost is related to the change $\Delta w^l_{jk}$ in the weight by the equation
$$ \Delta C \approx \frac{\partial C}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{47} $$
This suggests that a possible approach to computing $\frac{\partial C}{\partial w^l_{jk}}$ is to carefully track how a small change in $w^l_{jk}$ propagates to cause a small change in $C$. If we can do that, being careful to express everything along the way in terms of easily computable quantities, then we should be able to compute $\partial C / \partial w^l_{jk}$. Let's try to carry this out. The change $\Delta w^l_{jk}$ causes a small change $\Delta a^l_j$ in the activation of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. This change is given by
$$ \Delta a^l_j \approx \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{48} $$
The change in activation $\Delta a^l_j$ will cause changes in all the activations in the next layer, i.e., the $(l+1)^{\rm th}$ layer. We'll concentrate on the way just a single one of those activations is affected, say $a^{l+1}_q$. In fact, it'll cause the following change:
$$ \Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \Delta a^l_j. \tag{49} $$
Substituting in the expression from Equation (48), we get:
$$ \Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{50} $$
Of course, the change $\Delta a^{l+1}_q$ will, in turn, cause changes in the activations in the next layer. In fact, we can imagine a path all the way through the network from $w^l_{jk}$ to $C$, with each change in activation causing a change in the next activation, and, finally, a change in the cost at the output. If the path goes through activations $a^l_j, a^{l+1}_q, \ldots, a^{L-1}_n, a^L_m$ then the resulting expression is
$$ \Delta C \approx \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}, \tag{51} $$
that is, we've picked up a $\partial a / \partial a$ type term for each additional neuron we've passed through, as well as the $\partial C / \partial a^L_m$ term at the end. This represents the change in $C$ due to changes in the activations along this particular path through the network. Of course, there are many paths by which a change in $w^l_{jk}$ can propagate to affect the cost, and we've been considering just a single path. To compute the total change in $C$ it is plausible that we should sum over all the possible paths between the weight and the final cost, i.e.,
$$ \Delta C \approx \sum_{mnp \ldots q} \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}, \tag{52} $$
where we've summed over all possible choices for the intermediate neurons along the path. Comparing with (47) we see that
$$ \frac{\partial C}{\partial w^l_{jk}} = \sum_{mnp \ldots q} \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}}. \tag{53} $$
Now, Equation (53) looks complicated. However, it has a nice intuitive interpretation. We're computing the rate of change of $C$ with respect to a weight in the network. What the equation tells us is that every edge between two neurons in the network is associated with a rate factor which is just the partial derivative of one neuron's activation with respect to the other neuron's activation. The edge from the first weight to the first neuron has a rate factor $\partial a^l_j / \partial w^l_{jk}$.
The rate factor for a path is just the product of the rate factors along the path. And the total rate of change $\partial C / \partial w^l_{jk}$ is just the sum of the rate factors of all paths from the initial weight to the final cost. This procedure is illustrated here, for a single path. What I've been providing up to now is a heuristic argument, a way of thinking about what's going on when you perturb a weight in a network. Let me sketch out a line of thinking you could use to further develop this argument. First, you could derive explicit expressions for all the individual partial derivatives in Equation (53). That's easy to do with a bit of calculus. Having done that, you could then try to figure out how to write all the sums over indices as matrix multiplications. This turns out to be tedious, and requires some persistence, but not extraordinary insight. After doing all this, and then simplifying as much as possible, what you discover is that you end up with exactly the backpropagation algorithm! And so you can think of the backpropagation algorithm as providing a way of computing the sum over the rate factors for all these paths. Or, to put it slightly differently, the backpropagation algorithm is a clever way of keeping track of small perturbations to the weights (and biases) as they propagate through the network, reach the output, and then affect the cost. Now, I'm not going to work through all this here. It's messy and requires considerable care to work through all the details. If you're up for a challenge, you may enjoy attempting it. And even if not, I hope this line of thinking gives you some insight into what backpropagation is accomplishing.

What about the other mystery - how backpropagation could have been discovered in the first place? In fact, if you follow the approach I just sketched you will discover a proof of backpropagation. Unfortunately, the proof is quite a bit longer and more complicated than the one I described earlier in this chapter. So how was that short (but more mysterious) proof discovered? What you find when you write out all the details of the long proof is that, after the fact, there are several obvious simplifications staring you in the face. You make those simplifications, get a shorter proof, and write that out. And then several more obvious simplifications jump out at you. So you repeat again. The result after a few iterations is the proof we saw earlier - short, but somewhat obscure, because all the signposts to its construction have been removed. There is one clever step required. In Equation (53) the intermediate variables are activations like $a^{l+1}_q$. The clever idea is to switch to using weighted inputs, like $z^{l+1}_q$, as the intermediate variables. If you don't have this idea, and instead continue using the activations $a^{l+1}_q$, the proof you obtain turns out to be slightly more complex than the proof given earlier in the chapter. I am, of course, asking you to trust me on this, but there really is no great mystery to the origin of the earlier proof. It's just a lot of hard work simplifying the proof I've sketched in this section.

In academic work, please cite this book as: Michael A. Nielsen, Neural Networks and Deep Learning, Determination Press, 2015. This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License. This means you're free to copy, share, and build on this book, but not to sell it.
In the last chapter we saw how neural networks can learn their weights and biases using the gradient descent algorithm. There was, however, a gap in our explanation: we didn't discuss how to compute the gradient of the cost function. That's quite a gap! In this chapter I'll explain a fast algorithm for computing such gradients, an algorithm known as backpropagation. The backpropagation algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams. That paper describes several neural networks where backpropagation works far faster than earlier approaches to learning, making it possible to use neural nets to solve problems which had previously been insoluble. Today, the backpropagation algorithm is the workhorse of learning in neural networks. This chapter is more mathematically involved than the rest of the book. If you're not crazy about mathematics you may be tempted to skip the chapter, and to treat backpropagation as a black box whose details you're willing to ignore. Why take the time to study those details? The reason, of course, is understanding. At the heart of backpropagation is an expression for the partial derivative $\partial C / \partial w$ of the cost function $C$ with respect to any weight $w$ (or bias $b$) in the network. The expression tells us how quickly the cost changes when we change the weights and biases.
And while the expression is somewhat complex, it also has a beauty to it, with each element having a natural, intuitive interpretation. And so backpropagation isnt just a fast algorithm for learning. It actually gives us detailed insights into how changing the weights and biases changes the overall behaviour of the network. Thats well worth studying in detail. With that said, if you want to skim the chapter, or jump straight to the next chapter, thats fine. Ive written the rest of the book to be accessible even if you treat backpropagation as a black box. There are, of course, points later in the book where I refer back to results from this chapter. But at those points you should still be able to understand the main conclusions, even if you dont follow all the reasoning. Before discussing backpropagation, lets warm up with a fast matrix-based algorithm to compute the output from a neural network. We actually already briefly saw this algorithm near the end of the last chapter. but I described it quickly, so its worth revisiting in detail. In particular, this is a good way of getting comfortable with the notation used in backpropagation, in a familiar context. Lets begin with a notation which lets us refer to weights in the network in an unambiguous way. Well use wl to denote the weight for the connection from the k neuron in the (l-1) layer to the j neuron in the l layer. So, for example, the diagram below shows the weight on a connection from the fourth neuron in the second layer to the second neuron in the third layer of a network: This notation is cumbersome at first, and it does take some work to master. But with a little effort youll find the notation becomes easy and natural. One quirk of the notation is the ordering of the j and k indices. You might think that it makes more sense to use j to refer to the input neuron, and k to the output neuron, not vice versa, as is actually done. Ill explain the reason for this quirk below. We use a similar notation for the networks biases and activations. Explicitly, we use blj for the bias of the j neuron in the l layer. And we use alj for the activation of the j neuron in the l layer. The following diagram shows examples of these notations in use: With these notations, the activation a j of the j neuron in the l layer is related to the activations in the (l-1) layer by the equation (compare Equation (4) begin frac nonumberend and surrounding discussion in the last chapter) begin a j sigmaleft( sumk w a k blj right), tag end where the sum is over all neurons k in the (l-1) layer. To rewrite this expression in a matrix form we define a weight matrix wl for each layer, l. The entries of the weight matrix wl are just the weights connecting to the l layer of neurons, that is, the entry in the j row and k column is wl . Similarly, for each layer l we define a bias vector . bl. You can probably guess how this works - the components of the bias vector are just the values blj, one component for each neuron in the l layer. And finally, we define an activation vector al whose components are the activations alj. The last ingredient we need to rewrite (23) begin a j sigmaleft( sumk w a k blj right) nonumberend in a matrix form is the idea of vectorizing a function such as sigma. We met vectorization briefly in the last chapter, but to recap, the idea is that we want to apply a function such as sigma to every element in a vector v. We use the obvious notation sigma(v) to denote this kind of elementwise application of a function. 
That is, the components of sigma(v) are just sigma(v)j sigma(vj). As an example, if we have the function f(x) x2 then the vectorized form of f has the effect begin fleft(left begin 2 3 end right right) left begin f(2) f(3) end right left begin 4 9 end right, tag end that is, the vectorized f just squares every element of the vector. With these notations in mind, Equation (23) begin a j sigmaleft( sumk w a k blj right) nonumberend can be rewritten in the beautiful and compact vectorized form begin a sigma(wl a bl). tag end This expression gives us a much more global way of thinking about how the activations in one layer relate to activations in the previous layer: we just apply the weight matrix to the activations, then add the bias vector, and finally apply the sigma function By the way, its this expression that motivates the quirk in the wl notation mentioned earlier. If we used j to index the input neuron, and k to index the output neuron, then wed need to replace the weight matrix in Equation (25) begin a sigma(wl a bl) nonumberend by the transpose of the weight matrix. Thats a small change, but annoying, and wed lose the easy simplicity of saying (and thinking) apply the weight matrix to the activations. That global view is often easier and more succinct (and involves fewer indices) than the neuron-by-neuron view weve taken to now. Think of it as a way of escaping index hell, while remaining precise about whats going on. The expression is also useful in practice, because most matrix libraries provide fast ways of implementing matrix multiplication, vector addition, and vectorization. Indeed, the code in the last chapter made implicit use of this expression to compute the behaviour of the network. When using Equation (25) begin a sigma(wl a bl) nonumberend to compute al, we compute the intermediate quantity zl equiv wl a bl along the way. This quantity turns out to be useful enough to be worth naming: we call zl the weighted input to the neurons in layer l. Well make considerable use of the weighted input zl later in the chapter. Equation (25) begin a sigma(wl a bl) nonumberend is sometimes written in terms of the weighted input, as al sigma(zl). Its also worth noting that zl has components zlj sumk wl a kblj, that is, zlj is just the weighted input to the activation function for neuron j in layer l. The goal of backpropagation is to compute the partial derivatives partial C partial w and partial C partial b of the cost function C with respect to any weight w or bias b in the network. For backpropagation to work we need to make two main assumptions about the form of the cost function. Before stating those assumptions, though, its useful to have an example cost function in mind. Well use the quadratic cost function from last chapter (c. f. Equation (6) begin C(w, b) equiv frac sumx y(x) - a2 nonumberend ). In the notation of the last section, the quadratic cost has the form begin C frac sumx y(x)-aL(x)2, tag end where: n is the total number of training examples the sum is over individual training examples, x y y(x) is the corresponding desired output L denotes the number of layers in the network and aL aL(x) is the vector of activations output from the network when x is input. Okay, so what assumptions do we need to make about our cost function, C, in order that backpropagation can be applied The first assumption we need is that the cost function can be written as an average C frac sumx Cx over cost functions Cx for individual training examples, x. 
This is the case for the quadratic cost function, where the cost for a single training example is Cx frac y-aL 2. This assumption will also hold true for all the other cost functions well meet in this book. The reason we need this assumption is because what backpropagation actually lets us do is compute the partial derivatives partial Cx partial w and partial Cx partial b for a single training example. We then recover partial C partial w and partial C partial b by averaging over training examples. In fact, with this assumption in mind, well suppose the training example x has been fixed, and drop the x subscript, writing the cost Cx as C. Well eventually put the x back in, but for now its a notational nuisance that is better left implicit. The second assumption we make about the cost is that it can be written as a function of the outputs from the neural network: For example, the quadratic cost function satisfies this requirement, since the quadratic cost for a single training example x may be written as begin C frac y-aL2 frac sumj (yj-aLj)2, tag end and thus is a function of the output activations. Of course, this cost function also depends on the desired output y, and you may wonder why were not regarding the cost also as a function of y. Remember, though, that the input training example x is fixed, and so the output y is also a fixed parameter. In particular, its not something we can modify by changing the weights and biases in any way, i. e. its not something which the neural network learns. And so it makes sense to regard C as a function of the output activations aL alone, with y merely a parameter that helps define that function. The backpropagation algorithm is based on common linear algebraic operations - things like vector addition, multiplying a vector by a matrix, and so on. But one of the operations is a little less commonly used. In particular, suppose s and t are two vectors of the same dimension. Then we use s odot t to denote the elementwise product of the two vectors. Thus the components of s odot t are just (s odot t)j sj tj. As an example, begin leftbegin 1 2 end right odot leftbegin 3 4end right left begin 1 3 2 4 end right left begin 3 8 end right. tag end This kind of elementwise multiplication is sometimes called the Hadamard product or Schur product . Well refer to it as the Hadamard product. Good matrix libraries usually provide fast implementations of the Hadamard product, and that comes in handy when implementing backpropagation. Backpropagation is about understanding how changing the weights and biases in a network changes the cost function. Ultimately, this means computing the partial derivatives partial C partial wl and partial C partial blj. But to compute those, we first introduce an intermediate quantity, deltalj, which we call the error in the j neuron in the l layer. Backpropagation will give us a procedure to compute the error deltalj, and then will relate deltalj to partial C partial wl and partial C partial blj. To understand how the error is defined, imagine there is a demon in our neural network: The demon sits at the j neuron in layer l. As the input to the neuron comes in, the demon messes with the neurons operation. It adds a little change Delta zlj to the neurons weighted input, so that instead of outputting sigma(zlj), the neuron instead outputs sigma(zljDelta zlj). This change propagates through later layers in the network, finally causing the overall cost to change by an amount frac Delta zlj. 
Now, this demon is a good demon, and is trying to help you improve the cost, i.e. they're trying to find a $\Delta z^l_j$ which makes the cost smaller. Suppose $\frac{\partial C}{\partial z^l_j}$ has a large value (either positive or negative). Then the demon can lower the cost quite a bit by choosing $\Delta z^l_j$ to have the opposite sign to $\frac{\partial C}{\partial z^l_j}$. By contrast, if $\frac{\partial C}{\partial z^l_j}$ is close to zero, then the demon can't improve the cost much at all by perturbing the weighted input $z^l_j$. So far as the demon can tell, the neuron is already pretty near optimal (this is only the case for small changes $\Delta z^l_j$, of course; we'll assume that the demon is constrained to make such small changes). And so there's a heuristic sense in which $\frac{\partial C}{\partial z^l_j}$ is a measure of the error in the neuron.

Motivated by this story, we define the error $\delta^l_j$ of neuron $j$ in layer $l$ by
\begin{eqnarray} \delta^l_j \equiv \frac{\partial C}{\partial z^l_j}. \tag{29}\end{eqnarray}
As per our usual conventions, we use $\delta^l$ to denote the vector of errors associated with layer $l$. Backpropagation will give us a way of computing $\delta^l$ for every layer, and then relating those errors to the quantities of real interest, $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$.

You might wonder why the demon is changing the weighted input $z^l_j$. Surely it'd be more natural to imagine the demon changing the output activation $a^l_j$, with the result that we'd be using $\frac{\partial C}{\partial a^l_j}$ as our measure of error. In fact, if you do this things work out quite similarly to the discussion below. But it turns out to make the presentation of backpropagation a little more algebraically complicated. So we'll stick with $\delta^l_j = \frac{\partial C}{\partial z^l_j}$ as our measure of error. (In classification problems like MNIST the term "error" is sometimes used to mean the classification failure rate. For example, if the neural net correctly classifies 96.0 percent of the digits, then the error is 4.0 percent. Obviously, this has quite a different meaning from our $\delta$ vectors. In practice, you shouldn't have trouble telling which meaning is intended in any given usage.)

Plan of attack: Backpropagation is based around four fundamental equations. Together, those equations give us a way of computing both the error $\delta^l$ and the gradient of the cost function. I state the four equations below. Be warned, though: you shouldn't expect to instantaneously assimilate the equations. Such an expectation will lead to disappointment. In fact, the backpropagation equations are so rich that understanding them well requires considerable time and patience as you gradually delve deeper into the equations. The good news is that such patience is repaid many times over. And so the discussion in this section is merely a beginning, helping you on the way to a thorough understanding of the equations. Here's a preview of the ways we'll delve more deeply into the equations later in the chapter: I'll give a short proof of the equations, which helps explain why they are true; we'll restate the equations in algorithmic form as pseudocode, and see how the pseudocode can be implemented as real, running Python code; and, in the final section of the chapter, we'll develop an intuitive picture of what the backpropagation equations mean, and how someone might discover them from scratch. Along the way we'll return repeatedly to the four fundamental equations, and as you deepen your understanding those equations will come to seem comfortable and, perhaps, even beautiful and natural.

An equation for the error in the output layer, $\delta^L$: The components of $\delta^L$ are given by
\begin{eqnarray} \delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j). \tag{BP1}\end{eqnarray}
This is a very natural expression.
The first term on the right, $\partial C / \partial a^L_j$, just measures how fast the cost is changing as a function of the $j^{\rm th}$ output activation. If, for example, $C$ doesn't depend much on a particular output neuron, $j$, then $\delta^L_j$ will be small, which is what we'd expect. The second term on the right, $\sigma'(z^L_j)$, measures how fast the activation function $\sigma$ is changing at $z^L_j$. Notice that everything in (BP1) is easily computed. In particular, we compute $z^L_j$ while computing the behaviour of the network, and it's only a small additional overhead to compute $\sigma'(z^L_j)$. The exact form of $\partial C / \partial a^L_j$ will, of course, depend on the form of the cost function. However, provided the cost function is known there should be little trouble computing $\partial C / \partial a^L_j$. For example, if we're using the quadratic cost function then $C = \frac{1}{2} \sum_j (y_j - a^L_j)^2$, and so $\partial C / \partial a^L_j = (a^L_j - y_j)$, which obviously is easily computable.

Equation (BP1) is a componentwise expression for $\delta^L$. It's a perfectly good expression, but not the matrix-based form we want for backpropagation. However, it's easy to rewrite the equation in a matrix-based form, as
\begin{eqnarray} \delta^L = \nabla_a C \odot \sigma'(z^L). \tag{BP1a}\end{eqnarray}
Here, $\nabla_a C$ is defined to be a vector whose components are the partial derivatives $\partial C / \partial a^L_j$. You can think of $\nabla_a C$ as expressing the rate of change of $C$ with respect to the output activations. It's easy to see that Equations (BP1a) and (BP1) are equivalent, and for that reason from now on we'll use (BP1) interchangeably to refer to both equations. As an example, in the case of the quadratic cost we have $\nabla_a C = (a^L - y)$, and so the fully matrix-based form of (BP1) becomes
\begin{eqnarray} \delta^L = (a^L - y) \odot \sigma'(z^L). \tag{30}\end{eqnarray}
As you can see, everything in this expression has a nice vector form, and is easily computed using a library such as Numpy.

An equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$: In particular
\begin{eqnarray} \delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l), \tag{BP2}\end{eqnarray}
where $(w^{l+1})^T$ is the transpose of the weight matrix $w^{l+1}$ for the $(l+1)^{\rm th}$ layer. This equation appears complicated, but each element has a nice interpretation. Suppose we know the error $\delta^{l+1}$ at the $(l+1)^{\rm th}$ layer. When we apply the transpose weight matrix, $(w^{l+1})^T$, we can think intuitively of this as moving the error backward through the network, giving us some sort of measure of the error at the output of the $l^{\rm th}$ layer. We then take the Hadamard product $\odot \, \sigma'(z^l)$. This moves the error backward through the activation function in layer $l$, giving us the error $\delta^l$ in the weighted input to layer $l$. By combining (BP2) with (BP1) we can compute the error $\delta^l$ for any layer in the network. We start by using (BP1) to compute $\delta^L$, then apply Equation (BP2) to compute $\delta^{L-1}$, then Equation (BP2) again to compute $\delta^{L-2}$, and so on, all the way back through the network.
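To make (BP1a) and (BP2) concrete, here is a minimal sketch of how the two error computations might look in Numpy. It assumes sigmoid activations and, for the output layer, the quadratic cost (so that $\nabla_a C = a^L - y$); the function names are illustrative and not part of the book's network.py.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    """Derivative sigma'(z) of the sigmoid activation."""
    return sigmoid(z) * (1.0 - sigmoid(z))

def output_error(a_L, y, z_L):
    """(BP1a) for the quadratic cost: delta^L = (a^L - y) * sigma'(z^L)."""
    return (a_L - y) * sigmoid_prime(z_L)

def backpropagated_error(w_next, delta_next, z_l):
    """(BP2): delta^l = ((w^{l+1})^T delta^{l+1}) * sigma'(z^l)."""
    return np.dot(w_next.T, delta_next) * sigmoid_prime(z_l)
```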
An equation for the rate of change of the cost with respect to any bias in the network: In particular:
\begin{eqnarray} \frac{\partial C}{\partial b^l_j} = \delta^l_j. \tag{BP3}\end{eqnarray}
That is, the error $\delta^l_j$ is exactly equal to the rate of change $\partial C / \partial b^l_j$. This is great news, since (BP1) and (BP2) have already told us how to compute $\delta^l_j$. We can rewrite (BP3) in shorthand as
\begin{eqnarray} \frac{\partial C}{\partial b} = \delta, \tag{31}\end{eqnarray}
where it is understood that $\delta$ is being evaluated at the same neuron as the bias $b$.

An equation for the rate of change of the cost with respect to any weight in the network: In particular:
\begin{eqnarray} \frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j. \tag{BP4}\end{eqnarray}
This tells us how to compute the partial derivatives $\partial C / \partial w^l_{jk}$ in terms of the quantities $\delta^l$ and $a^{l-1}$, which we already know how to compute. The equation can be rewritten in a less index-heavy notation as
\begin{eqnarray} \frac{\partial C}{\partial w} = a_{\rm in} \delta_{\rm out}, \tag{32}\end{eqnarray}
where it's understood that $a_{\rm in}$ is the activation of the neuron input to the weight $w$, and $\delta_{\rm out}$ is the error of the neuron output from the weight $w$. Zooming in to look at just the weight $w$, and the two neurons connected by that weight, we can depict this as: A nice consequence of Equation (32) is that when the activation $a_{\rm in}$ is small, $a_{\rm in} \approx 0$, the gradient term $\partial C / \partial w$ will also tend to be small. In this case, we'll say the weight learns slowly, meaning that it's not changing much during gradient descent. In other words, one consequence of (BP4) is that weights output from low-activation neurons learn slowly.

There are other insights along these lines which can be obtained from (BP1)-(BP4). Let's start by looking at the output layer. Consider the term $\sigma'(z^L_j)$ in (BP1). Recall from the graph of the sigmoid function in the last chapter that the $\sigma$ function becomes very flat when $\sigma(z^L_j)$ is approximately $0$ or $1$. When this occurs we will have $\sigma'(z^L_j) \approx 0$. And so the lesson is that a weight in the final layer will learn slowly if the output neuron is either low activation ($\approx 0$) or high activation ($\approx 1$). In this case it's common to say the output neuron has saturated and, as a result, the weight has stopped learning (or is learning slowly). Similar remarks hold also for the biases of an output neuron.

We can obtain similar insights for earlier layers. In particular, note the $\sigma'(z^l)$ term in (BP2). This means that $\delta^l_j$ is likely to get small if the neuron is near saturation. And this, in turn, means that any weights input to a saturated neuron will learn slowly. (This reasoning won't hold if $(w^{l+1})^T \delta^{l+1}$ has large enough entries to compensate for the smallness of $\sigma'(z^l_j)$. But I'm speaking of the general tendency.) Summing up, we've learnt that a weight will learn slowly if either the input neuron is low-activation, or if the output neuron has saturated, i.e. is either high- or low-activation. None of these observations is too greatly surprising. Still, they help improve our mental model of what's going on as a neural network learns. Furthermore, we can turn this type of reasoning around. The four fundamental equations turn out to hold for any activation function, not just the standard sigmoid function (that's because, as we'll see in a moment, the proofs don't use any special properties of $\sigma$).
And so we can use these equations to design activation functions which have particular desired learning properties. As an example to give you the idea, suppose we were to choose a (non-sigmoid) activation function sigma so that sigma is always positive, and never gets close to zero. That would prevent the slow-down of learning that occurs when ordinary sigmoid neurons saturate. Later in the book well see examples where this kind of modification is made to the activation function. Keeping the four equations (BP1) begin deltaLj frac sigma(zLj) nonumberend - (BP4) begin frac a k deltalj nonumberend in mind can help explain why such modifications are tried, and what impact they can have. Alternate presentation of the equations of backpropagation: Ive stated the equations of backpropagation (notably (BP1) begin deltaLj frac sigma(zLj) nonumberend and (BP2) begin deltal ((w )T delta ) odot sigma(zl) nonumberend ) using the Hadamard product. This presentation may be disconcerting if youre unused to the Hadamard product. Theres an alternative approach, based on conventional matrix multiplication, which some readers may find enlightening. (1) Show that (BP1) begin deltaLj frac sigma(zLj) nonumberend may be rewritten as begin deltaL Sigma(zL) nablaa C, tag end where Sigma(zL) is a square matrix whose diagonal entries are the values sigma(zLj), and whose off-diagonal entries are zero. Note that this matrix acts on nablaa C by conventional matrix multiplication. (2) Show that (BP2) begin deltal ((w )T delta ) odot sigma(zl) nonumberend may be rewritten as begin deltal Sigma(zl) (w )T delta . tag end (3) By combining observations (1) and (2) show that begin deltal Sigma(zl) (w )T ldots Sigma(z ) (wL)T Sigma(zL) nablaa C tag end For readers comfortable with matrix multiplication this equation may be easier to understand than (BP1) begin deltaLj frac sigma(zLj) nonumberend and (BP2) begin deltal ((w )T delta ) odot sigma(zl) nonumberend . The reason Ive focused on (BP1) begin deltaLj frac sigma(zLj) nonumberend and (BP2) begin deltal ((w )T delta ) odot sigma(zl) nonumberend is because that approach turns out to be faster to implement numerically. Well now prove the four fundamental equations (BP1) begin deltaLj frac sigma(zLj) nonumberend - (BP4) begin frac a k deltalj nonumberend . All four are consequences of the chain rule from multivariable calculus. If youre comfortable with the chain rule, then I strongly encourage you to attempt the derivation yourself before reading on. Lets begin with Equation (BP1) begin deltaLj frac sigma(zLj) nonumberend . which gives an expression for the output error, deltaL. To prove this equation, recall that by definition begin deltaLj frac . tag end Applying the chain rule, we can re-express the partial derivative above in terms of partial derivatives with respect to the output activations, begin deltaLj sumk frac frac , tag end where the sum is over all neurons k in the output layer. Of course, the output activation aLk of the k neuron depends only on the weighted input zLj for the j neuron when k j. And so partial aLk partial zLj vanishes when k neq j. As a result we can simplify the previous equation to begin deltaLj frac frac . tag end Recalling that aLj sigma(zLj) the second term on the right can be written as sigma(zLj), and the equation becomes begin deltaLj frac sigma(zLj), tag end which is just (BP1) begin deltaLj frac sigma(zLj) nonumberend . in component form. Next, well prove (BP2) begin deltal ((w )T delta ) odot sigma(zl) nonumberend . 
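As a quick sanity check on this alternate presentation, the following sketch builds the diagonal matrix whose entries are the values of sigma prime at the output layer, and confirms that conventional matrix multiplication gives the same delta as the Hadamard form (BP1a). The shapes, names, and random inputs are illustrative assumptions, not the book's code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1.0 - sigmoid(z))

rng = np.random.default_rng(1)
z_L = rng.standard_normal((4, 1))        # weighted inputs at the output layer
nabla_a_C = rng.standard_normal((4, 1))  # partial derivatives dC/da^L_j

# Hadamard form (BP1a): delta^L = nabla_a C elementwise-times sigma'(z^L)
delta_hadamard = nabla_a_C * sigmoid_prime(z_L)

# Matrix form: delta^L = Sigma'(z^L) nabla_a C, with Sigma'(z^L) diagonal.
Sigma_prime = np.diag(sigmoid_prime(z_L).flatten())
delta_matrix = np.dot(Sigma_prime, nabla_a_C)

print(np.allclose(delta_hadamard, delta_matrix))  # True
```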
which gives an equation for the error deltal in terms of the error in the next layer, delta . To do this, we want to rewrite deltalj partial C partial zlj in terms of delta k partial C partial z k. We can do this using the chain rule, begin deltalj frac tag sumk frac k frac k tag sumk frac k delta k, tag end where in the last line we have interchanged the two terms on the right-hand side, and substituted the definition of delta k. To evaluate the first term on the last line, note that begin z k sumj w alj b k sumj w sigma(zlj) b k. tag end Differentiating, we obtain begin frac k w sigma(zlj). tag end Substituting back into (42) begin sumk frac k delta k nonumberend we obtain begin deltalj sumk w delta k sigma(zlj). tag end This is just (BP2) begin deltal ((w )T delta ) odot sigma(zl) nonumberend written in component form. The final two equations we want to prove are (BP3) begin frac deltalj nonumberend and (BP4) begin frac a k deltalj nonumberend . These also follow from the chain rule, in a manner similar to the proofs of the two equations above. I leave them to you as an exercise. That completes the proof of the four fundamental equations of backpropagation. The proof may seem complicated. But its really just the outcome of carefully applying the chain rule. A little less succinctly, we can think of backpropagation as a way of computing the gradient of the cost function by systematically applying the chain rule from multi-variable calculus. Thats all there really is to backpropagation - the rest is details. The backpropagation equations provide us with a way of computing the gradient of the cost function. Lets explicitly write this out in the form of an algorithm: Input x: Set the corresponding activation a for the input layer. Feedforward: For each l 2, 3, ldots, L compute z wl a bl and a sigma(z ). Output error deltaL: Compute the vector delta nablaa C odot sigma(zL). Backpropagate the error: For each l L-1, L-2, ldots, 2 compute delta ((w )T delta ) odot sigma(z ). Examining the algorithm you can see why its called back propagation. We compute the error vectors deltal backward, starting from the final layer. It may seem peculiar that were going through the network backward. But if you think about the proof of backpropagation, the backward movement is a consequence of the fact that the cost is a function of outputs from the network. To understand how the cost varies with earlier weights and biases we need to repeatedly apply the chain rule, working backward through the layers to obtain usable expressions. Backpropagation with a single modified neuron Suppose we modify a single neuron in a feedforward network so that the output from the neuron is given by f(sumj wj xj b), where f is some function other than the sigmoid. How should we modify the backpropagation algorithm in this case Backpropagation with linear neurons Suppose we replace the usual non-linear sigma function with sigma(z) z throughout the network. Rewrite the backpropagation algorithm for this case. As Ive described it above, the backpropagation algorithm computes the gradient of the cost function for a single training example, C Cx. In practice, its common to combine backpropagation with a learning algorithm such as stochastic gradient descent, in which we compute the gradient for many training examples. 
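As a concrete companion to the four steps listed above, here is a minimal single-example sketch of the algorithm. It is written in the same spirit as the book's network.py but is not identical to it: sigmoid activations and the quadratic-cost derivative (so nabla_a C_x = a^L - y) are assumptions, and the helper names are illustrative. The stochastic-gradient-descent discussion that follows simply combines these per-example gradients over a mini-batch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1.0 - sigmoid(z))

def backprop(x, y, weights, biases):
    """Return (nabla_b, nabla_w), the gradients of the per-example cost C_x,
    following the steps in the text: feedforward, output error (BP1a),
    backpropagate the error (BP2), read off the gradients (BP3)/(BP4)."""
    # 1. Input: set the activation for the input layer.
    activation = x
    activations = [x]   # activations a^l, layer by layer
    zs = []             # weighted inputs z^l, layer by layer

    # 2. Feedforward: z^l = w^l a^{l-1} + b^l,  a^l = sigma(z^l).
    for w, b in zip(weights, biases):
        z = np.dot(w, activation) + b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)

    # 3. Output error (BP1a), using nabla_a C = (a^L - y) for the quadratic cost.
    delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
    nabla_b = [np.zeros(b.shape) for b in biases]
    nabla_w = [np.zeros(w.shape) for w in weights]
    nabla_b[-1] = delta                               # (BP3)
    nabla_w[-1] = np.dot(delta, activations[-2].T)    # (BP4)

    # 4. Backpropagate the error (BP2), working from layer L-1 down to layer 2.
    for l in range(2, len(weights) + 1):
        delta = np.dot(weights[-l + 1].T, delta) * sigmoid_prime(zs[-l])
        nabla_b[-l] = delta
        nabla_w[-l] = np.dot(delta, activations[-l - 1].T)

    return nabla_b, nabla_w
```

A gradient descent step then subtracts a small multiple of these gradients from the weights and biases, averaged over the examples in a mini-batch.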
In particular, given a mini-batch of m training examples, the following algorithm applies a gradient descent learning step based on that mini-batch: Input a set of training examples For each training example x: Set the corresponding input activation a , and perform the following steps: Output error delta : Compute the vector delta nablaa Cx odot sigma(z ). Backpropagate the error: For each l L-1, L-2, ldots, 2 compute delta ((w )T delta ) odot sigma(z ). Gradient descent: For each l L, L-1, ldots, 2 update the weights according to the rule wl rightarrow wl-frac sumx delta (a )T, and the biases according to the rule bl rightarrow bl-frac sumx delta . Of course, to implement stochastic gradient descent in practice you also need an outer loop generating mini-batches of training examples, and an outer loop stepping through multiple epochs of training. Ive omitted those for simplicity. Having understood backpropagation in the abstract, we can now understand the code used in the last chapter to implement backpropagation. Recall from that chapter that the code was contained in the updateminibatch and backprop methods of the Network class. The code for these methods is a direct translation of the algorithm described above. In particular, the updateminibatch method updates the Network s weights and biases by computing the gradient for the current minibatch of training examples: Most of the work is done by the line deltanablab, deltanablaw self. backprop(x, y) which uses the backprop method to figure out the partial derivatives partial Cx partial blj and partial Cx partial wl . The backprop method follows the algorithm in the last section closely. There is one small change - we use a slightly different approach to indexing the layers. This change is made to take advantage of a feature of Python, namely the use of negative list indices to count backward from the end of a list, so, e. g. l-3 is the third last entry in a list l . The code for backprop is below, together with a few helper functions, which are used to compute the sigma function, the derivative sigma, and the derivative of the cost function. With these inclusions you should be able to understand the code in a self-contained way. If somethings tripping you up, you may find it helpful to consult the original description (and complete listing) of the code. Fully matrix-based approach to backpropagation over a mini-batch Our implementation of stochastic gradient descent loops over training examples in a mini-batch. Its possible to modify the backpropagation algorithm so that it computes the gradients for all training examples in a mini-batch simultaneously. The idea is that instead of beginning with a single input vector, x, we can begin with a matrix X x1 x2 ldots xm whose columns are the vectors in the mini-batch. We forward-propagate by multiplying by the weight matrices, adding a suitable matrix for the bias terms, and applying the sigmoid function everywhere. We backpropagate along similar lines. Explicitly write out pseudocode for this approach to the backpropagation algorithm. Modify network. py so that it uses this fully matrix-based approach. The advantage of this approach is that it takes full advantage of modern libraries for linear algebra. As a result it can be quite a bit faster than looping over the mini-batch. (On my laptop, for example, the speedup is about a factor of two when run on MNIST classification problems like those we considered in the last chapter.) 
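One possible shape for the fully matrix-based variant described in the exercise, offered only as a hedged sketch rather than the intended solution, stacks the mini-batch examples as the columns of matrices X and Y, relies on Numpy broadcasting for the biases, and recovers the summed gradients from the update rule with single matrix products (sigmoid activations and quadratic cost are again assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1.0 - sigmoid(z))

def backprop_matrix(X, Y, weights, biases):
    """Gradients summed over a whole mini-batch at once.  The columns of X
    (and Y) are the individual training examples."""
    activation = X
    activations = [X]
    zs = []
    for w, b in zip(weights, biases):
        z = np.dot(w, activation) + b   # b broadcasts across the columns
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)

    # Output error for every example at once, one column per example.
    delta = (activations[-1] - Y) * sigmoid_prime(zs[-1])
    nabla_b = [None] * len(biases)
    nabla_w = [None] * len(weights)
    # Summing over columns gives sum_x delta^{x,l}; the matrix product
    # delta (a^{l-1})^T gives sum_x delta^{x,l} (a^{x,l-1})^T directly.
    nabla_b[-1] = delta.sum(axis=1, keepdims=True)
    nabla_w[-1] = np.dot(delta, activations[-2].T)
    for l in range(2, len(weights) + 1):
        delta = np.dot(weights[-l + 1].T, delta) * sigmoid_prime(zs[-l])
        nabla_b[-l] = delta.sum(axis=1, keepdims=True)
        nabla_w[-l] = np.dot(delta, activations[-l - 1].T)
    return nabla_b, nabla_w
```

Dividing by the number of columns and scaling by the learning rate then reproduces the mini-batch update rule given above.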
In practice, all serious libraries for backpropagation use this fully matrix-based approach or some variant. In what sense is backpropagation a fast algorithm To answer this question, lets consider another approach to computing the gradient. Imagine its the early days of neural networks research. Maybe its the 1950s or 1960s, and youre the first person in the world to think of using gradient descent to learn But to make the idea work you need a way of computing the gradient of the cost function. You think back to your knowledge of calculus, and decide to see if you can use the chain rule to compute the gradient. But after playing around a bit, the algebra looks complicated, and you get discouraged. So you try to find another approach. You decide to regard the cost as a function of the weights C C(w) alone (well get back to the biases in a moment). You number the weights w1, w2, ldots, and want to compute partial C partial wj for some particular weight wj. An obvious way of doing that is to use the approximation begin frac approx frac , tag end where epsilon 0 is a small positive number, and ej is the unit vector in the j direction. In other words, we can estimate partial C partial wj by computing the cost C for two slightly different values of wj, and then applying Equation (46) begin frac approx frac nonumberend . The same idea will let us compute the partial derivatives partial C partial b with respect to the biases. This approach looks very promising. Its simple conceptually, and extremely easy to implement, using just a few lines of code. Certainly, it looks much more promising than the idea of using the chain rule to compute the gradient Unfortunately, while this approach appears promising, when you implement the code it turns out to be extremely slow. To understand why, imagine we have a million weights in our network. Then for each distinct weight wj we need to compute C(wepsilon ej) in order to compute partial C partial wj. That means that to compute the gradient we need to compute the cost function a million different times, requiring a million forward passes through the network (per training example). We need to compute C(w) as well, so thats a total of a million and one passes through the network. Whats clever about backpropagation is that it enables us to simultaneously compute all the partial derivatives partial C partial wj using just one forward pass through the network, followed by one backward pass through the network. Roughly speaking, the computational cost of the backward pass is about the same as the forward pass This should be plausible, but it requires some analysis to make a careful statement. Its plausible because the dominant computational cost in the forward pass is multiplying by the weight matrices, while in the backward pass its multiplying by the transposes of the weight matrices. These operations obviously have similar computational cost. And so the total cost of backpropagation is roughly the same as making just two forward passes through the network. Compare that to the million and one forward passes we needed for the approach based on (46) begin frac approx frac nonumberend . And so even though backpropagation appears superficially more complex than the approach based on (46) begin frac approx frac nonumberend . its actually much, much faster. This speedup was first fully appreciated in 1986, and it greatly expanded the range of problems that neural networks could solve. That, in turn, caused a rush of people using neural networks. 
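For concreteness, here is what the slow approach based on Equation (46) looks like in code, as a small illustrative sketch: the flattened weight vector and the cost callable are assumptions for the example, not the book's interface. Each component of the gradient costs one extra evaluation of C, i.e. one extra forward pass, which is exactly why this method scales so badly compared with backpropagation.

```python
import numpy as np

def numerical_weight_gradient(cost, w, epsilon=1e-5):
    """Estimate dC/dw_j for every weight via (C(w + eps*e_j) - C(w)) / eps."""
    grad = np.zeros_like(w)
    base = cost(w)                 # one forward pass for C(w)
    for j in range(len(w)):
        w_step = w.copy()
        w_step[j] += epsilon       # perturb a single weight
        grad[j] = (cost(w_step) - base) / epsilon
    return grad

# Toy example, just to exercise the estimator on a simple "cost":
w = np.array([1.0, -2.0, 0.5])
cost = lambda w: 0.5 * np.sum(w ** 2)
print(numerical_weight_gradient(cost, w))   # approximately equal to w
```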
Of course, backpropagation is not a panacea. Even in the late 1980s people ran up against limits, especially when attempting to use backpropagation to train deep neural networks, i. e. networks with many hidden layers. Later in the book well see how modern computers and some clever new ideas now make it possible to use backpropagation to train such deep neural networks. As Ive explained it, backpropagation presents two mysteries. First, whats the algorithm really doing Weve developed a picture of the error being backpropagated from the output. But can we go any deeper, and build up more intuition about what is going on when we do all these matrix and vector multiplications The second mystery is how someone could ever have discovered backpropagation in the first place Its one thing to follow the steps in an algorithm, or even to follow the proof that the algorithm works. But that doesnt mean you understand the problem so well that you could have discovered the algorithm in the first place. Is there a plausible line of reasoning that could have led you to discover the backpropagation algorithm In this section Ill address both these mysteries. To improve our intuition about what the algorithm is doing, lets imagine that weve made a small change Delta wl to some weight in the network, wl : That change in weight will cause a change in the output activation from the corresponding neuron: That, in turn, will cause a change in all the activations in the next layer: Those changes will in turn cause changes in the next layer, and then the next, and so on all the way through to causing a change in the final layer, and then in the cost function: The change Delta C in the cost is related to the change Delta wl in the weight by the equation begin Delta C approx frac Delta wl . tag end This suggests that a possible approach to computing frac is to carefully track how a small change in wl propagates to cause a small change in C. If we can do that, being careful to express everything along the way in terms of easily computable quantities, then we should be able to compute partial C partial wl . Lets try to carry this out. The change Delta wl causes a small change Delta a j in the activation of the j neuron in the l layer. This change is given by begin Delta alj approx frac Delta wl . tag end The change in activation Delta al will cause changes in all the activations in the next layer, i. e. the (l1) layer. Well concentrate on the way just a single one of those activations is affected, say a q, In fact, itll cause the following change: begin Delta a q approx frac q Delta alj. tag end Substituting in the expression from Equation (48) begin Delta alj approx frac Delta wl nonumberend . we get: begin Delta a q approx frac q frac Delta wl . tag end Of course, the change Delta a q will, in turn, cause changes in the activations in the next layer. In fact, we can imagine a path all the way through the network from wl to C, with each change in activation causing a change in the next activation, and, finally, a change in the cost at the output. If the path goes through activations alj, a q, ldots, a n, aLm then the resulting expression is begin Delta C approx frac frac n frac n p ldots frac q frac Delta wl , tag end that is, weve picked up a partial a partial a type term for each additional neuron weve passed through, as well as the partial Cpartial aLm term at the end. This represents the change in C due to changes in the activations along this particular path through the network. 
Of course, theres many paths by which a change in wl can propagate to affect the cost, and weve been considering just a single path. To compute the total change in C it is plausible that we should sum over all the possible paths between the weight and the final cost, i. e. begin Delta C approx sum frac frac n frac n p ldots frac q frac Delta wl , tag end where weve summed over all possible choices for the intermediate neurons along the path. Comparing with (47) begin Delta C approx frac Delta wl nonumberend we see that begin frac sum frac frac n frac n p ldots frac q frac . tag end Now, Equation (53) begin frac sum frac frac n frac n p ldots frac q frac nonumberend looks complicated. However, it has a nice intuitive interpretation. Were computing the rate of change of C with respect to a weight in the network. What the equation tells us is that every edge between two neurons in the network is associated with a rate factor which is just the partial derivative of one neurons activation with respect to the other neurons activation. The edge from the first weight to the first neuron has a rate factor partial a j partial wl . The rate factor for a path is just the product of the rate factors along the path. And the total rate of change partial C partial wl is just the sum of the rate factors of all paths from the initial weight to the final cost. This procedure is illustrated here, for a single path: What Ive been providing up to now is a heuristic argument, a way of thinking about whats going on when you perturb a weight in a network. Let me sketch out a line of thinking you could use to further develop this argument. First, you could derive explicit expressions for all the individual partial derivatives in Equation (53) begin frac sum frac frac n frac n p ldots frac q frac nonumberend . Thats easy to do with a bit of calculus. Having done that, you could then try to figure out how to write all the sums over indices as matrix multiplications. This turns out to be tedious, and requires some persistence, but not extraordinary insight. After doing all this, and then simplifying as much as possible, what you discover is that you end up with exactly the backpropagation algorithm And so you can think of the backpropagation algorithm as providing a way of computing the sum over the rate factor for all these paths. Or, to put it slightly differently, the backpropagation algorithm is a clever way of keeping track of small perturbations to the weights (and biases) as they propagate through the network, reach the output, and then affect the cost. Now, Im not going to work through all this here. Its messy and requires considerable care to work through all the details. If youre up for a challenge, you may enjoy attempting it. And even if not, I hope this line of thinking gives you some insight into what backpropagation is accomplishing. What about the other mystery - how backpropagation could have been discovered in the first place In fact, if you follow the approach I just sketched you will discover a proof of backpropagation. Unfortunately, the proof is quite a bit longer and more complicated than the one I described earlier in this chapter. So how was that short (but more mysterious) proof discovered What you find when you write out all the details of the long proof is that, after the fact, there are several obvious simplifications staring you in the face. You make those simplifications, get a shorter proof, and write that out. And then several more obvious simplifications jump out at you. 
So you repeat again. The result after a few iterations is the proof we saw earlier There is one clever step required. In Equation (53) begin frac sum frac frac n frac n p ldots frac q frac nonumberend the intermediate variables are activations like aq . The clever idea is to switch to using weighted inputs, like z q, as the intermediate variables. If you dont have this idea, and instead continue using the activations a q, the proof you obtain turns out to be slightly more complex than the proof given earlier in the chapter. - short, but somewhat obscure, because all the signposts to its construction have been removed I am, of course, asking you to trust me on this, but there really is no great mystery to the origin of the earlier proof. Its just a lot of hard work simplifying the proof Ive sketched in this section. In academic work, please cite this book as: Michael A. Nielsen, Neural Networks and Deep Learning, Determination Press, 2015 This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License. This means youre free to copy, share, and build on this book, but not to sell it. If youre interested in commercial use, please contact me. Last update: Thu Jan 19 06:09:48 2017
