Bill Clinton To Match Every Dollar Donated To Clinton Foundation This Week

Until midnight on June 17, Bill Clinton will match every dollar donated to the Clinton Foundation.

"Nothing we do is more powerful than changing a life," he wrote to supporters. "That's why I need your help.

"The Clinton Foundation is working every day to find lasting solutions to the world's greatest challenges – providing families with a sustainable path out of poverty, helping communities fight the effects of climate change, increasing opportunity for women and girls, and reducing the prevalence of preventable diseases and childhood obesity.

"All of our initiatives are based on our belief that when we empower individuals, we create lasting change.

"And once we find a solution that works, we scale projects by bringing together partners, removing barriers, and taking action. But it is only with your support that we can transform more lives and communities from what they are today to what they can be tomorrow.

"Donate $5 or more today, and I will personally match every dollar you give – up to our goal of $100,000 – before June 17.

"Your gift will go twice as far: when you give $5, it'll become $10. $10 will become $20, and $25 will become $50. And no matter what you give, that adds up to more lives changed.

"I believe that together we can accomplish more than any of us can on our own."

To make a donation, click here.

Get hands-on Emma Watson looks from Beauty and the Beast

Hollywood actress Emma Watson is not only championing gender equality as the UN Women Goodwill Ambassador, but is also impressing Indian fashionistas with her Disney princess look in the musical romantic fantasy film Beauty and the Beast. We have rounded up her statement beauty and hair looks that have left many inspired!

For make-up

Use a lightweight foundation, blended well, that will give you a healthy and moisturised look, paired with a concealer under your eyes. Brush out your brows and fill them in softly with a brow pencil.

To achieve the modern graphic eyeliner, line your eyes with a soft brown gel liner and flick it out like a cat eye, except leave the flick unfilled. Complete the look with a coral lipstick, which will balance out the eyeliner.

For hair

Make sure you have some tools handy when you decide to do this on yourself: a dryer, a medium-sized tong, bobby pins, U-pins and a back-combing brush. It's always better if your hair is a day old and not washed the same day.

Make sure your hair has enough texture to work with; take large sections and spray every section with a heat-protectant product that will make your curls last longer and define them. Use the medium tong on every section and curl it away from the face, giving it a looser texture to work with. Once you're done with the style, open it using some smoothing cream to loosen the curls and add more shine and softness.

Tie a ponytail at your occipital bone and fix the hair around the ponytail to create more texture and definition in the style. Take smaller pieces of the sections and dress them around the ponytail for better grip and a much fuller look, making sure all the hair in the ponytail is incorporated. Once the back is complete, take smaller sections from the front and dress them away from the face. You can twist and open the twist, slightly teasing it with your fingers, and then fix it on the ponytail, making sure the whole look comes together.

The front section has to be raised while blow-drying to make sure extra volume is created at the roots. Once that's done, hold the hair in the same position and spray it to create a wave, fixing it in place. Fix the front section towards the side or towards the ponytail depending on how long or short the hair is. At the end, spritz a shine or setting spray to lock in the moisture and the look.

Govt develops Energy Action Plan to ensure generation of world-class electricity in Bengal

Kolkata: The state Power department has developed a comprehensive 'Energy Action Plan' in order to generate world-class electricity in Bengal. The department has been deploying all its resources to ensure that people in the state can avail themselves of the quality of electricity normally found in Western countries. In its attempt to produce the best quality power, the state government has focused on the renewable energy sector. A senior official of the department said that through the 'Energy Action Plan', the department aims to produce the best quality electricity, at par with Western countries, within the next 2-3 years.

"We are venturing into unknown areas of renewable energy sources, and in the coming years there will be a paradigm shift from conventional energy to renewable energy. We are taking all necessary steps to make the whole process more sustainable. How the grid integration will be done remains a big challenge for us," a senior official of the Power department said.

In the last one year, more than 10 power sub-stations have been constructed across the state to maintain better quality of electricity and also to address the voltage problem often reported from some pockets, the official added. In the solar energy sector, Bengal has already achieved significant growth through various projects.

Stressing the generation of hydroelectricity, the Bengal government has taken up a number of new initiatives. Several hydroelectric projects are coming up on the Teesta river, namely Teesta I, Teesta II, Teesta V, Teesta Intermediate Stage and Rammam Stage I in Darjeeling, each having a capacity of 80-84 MW. "Hydroelectric resources are not plentiful in Bengal. Despite the challenges, we are trying our best to generate hydroelectricity, which is one of our main focus areas in the state now," the official said.

It may be mentioned here that the Centre, during the Paris Convention in 2015, vowed to catch up with other developed nations in the field of energy generation and power. The Centre has also made commitments before the United Nations, saying that it will achieve the target of producing 40 percent of its power through renewable sources by the end of 2030. The overall carbon emission level in the country will also be reduced within the same period. India has so far been successful in generating 20 percent of its total power through renewable sources. The country will achieve the goal if all the states give more emphasis to renewable energy, thereby contributing towards the cause.

Bengal is one of the states that has done a great deal of work on building infrastructure in the renewable energy sector. Since the Mamata Banerjee government came to power in the state, there has been significant infrastructural reform in the energy sector. Power generation from solar energy has been given paramount importance through the launch of the 'Aloshree' project, a brainchild of the Chief Minister. To this end, solar panels have been set up by the Power department on the rooftops of various government buildings, schools, colleges and other offices.

Acknowledging efforts of Ponty Chadha Foundation

Mata Bhagwanti Chadha Niketan, an initiative of The Ponty Chadha Foundation, recently received the 'National Award for Empowerment of Persons with Disabilities (Divyangjan) 2018' from Vice-President Venkaiah Naidu. The award was received by Shanam Chadha, Trustee, The Ponty Chadha Foundation, at a function organised by the Department of Empowerment of Persons with Disabilities (Divyangjan) under the Ministry of Social Justice and Empowerment, on the occasion of the International Day of Persons with Disabilities. The institution got the award under the category 'Best Institution working for the Cause of Persons with Disabilities'.

Manpreet Singh Chadha, Founder Member and Trustee, The Ponty Chadha Foundation, said, "It was my father's vision to create an environment where children with special needs could become socially and economically independent, and today I am proud that his dream project has reached this pinnacle. Working on his philosophy and vision, we strive to work towards creating a better society for special children. I am thankful for all the support received from the Principal, teachers, parents, and well-wishers who have worked tirelessly towards the betterment of this segment."

On the development, Dr Vandana Sharma, Director and Principal, Mata Bhagwanti Chadha Niketan School, said, "It's a great pleasure for all of us to be acknowledged with this prestigious award. We have been trying very hard to achieve the highest standards of rehabilitation service for the community of the differently abled. Now it's time to celebrate the vision of our founders and take our mission forward with greater motivation. We humbly accept the award with gratitude and a sense of greater responsibility towards the empowerment of children and adults with special needs."

Mata Bhagwanti Chadha Niketan was founded in 1999 by Gurdeep Singh Chadha (Ponty Chadha) with a mission to educate and prepare special children to live within the community, having achieved three-fold self-dependence – physical, social and financial – to the best of their abilities. The school is one of the largest private charitable rehabilitation institutions, providing free-of-cost service to over 1,000 students.

Generative Adversarial Networks: Generate images with a Keras GAN [Tutorial]

You might have worked with the popular MNIST dataset before, but in this article, we will be generating new MNIST-like images with a Keras GAN. Training a GAN can take a very long time; however, this problem is small enough to run on most laptops in a few hours, which makes it a great example. The following excerpt is taken from the book Deep Learning Quick Reference, authored by Mike Bernico. The network architecture we use here has been found and optimized by many folks, including the authors of the DCGAN paper and people like Erik Linder-Norén, whose excellent collection of GAN implementations, Keras GAN, served as the basis of the code used here.

Loading the MNIST dataset

The MNIST dataset consists of 70,000 hand-drawn digits, 0 to 9. Keras provides us with a built-in loader that splits it into 60,000 training images and 10,000 test images. We will use the following code to load the dataset:

```python
import numpy as np
from keras.datasets import mnist

def load_data():
    (X_train, _), (_, _) = mnist.load_data()
    X_train = (X_train.astype(np.float32) - 127.5) / 127.5
    X_train = np.expand_dims(X_train, axis=3)
    return X_train
```

As you probably noticed, we're not returning any of the labels or the testing dataset. We're only going to use the training dataset. The labels aren't needed because the only labels we will be using are 0 for fake and 1 for real. These are real images, so they will all be assigned a label of 1 at the discriminator.

Building the generator

The generator uses a few new layers that we will talk about in this section.
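Before looking at those layers, it's worth a quick sanity check of what load_data hands us: the rescaling maps raw pixel values from [0, 255] into [-1, 1], the range of the tanh activation at the generator's output. A minimal NumPy check (the three pixel values are chosen only for illustration):

```python
import numpy as np

# The loader rescales pixels from [0, 255] to [-1, 1], matching the
# tanh activation at the generator's output.
pixels = np.array([0.0, 127.5, 255.0], dtype=np.float32)
scaled = (pixels - 127.5) / 127.5
print(scaled)  # [-1.  0.  1.]
```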
First, take a minute to skim through the following code:

```python
def build_generator(noise_shape=(100,)):
    input = Input(noise_shape)
    x = Dense(128 * 7 * 7, activation="relu")(input)
    x = Reshape((7, 7, 128))(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = UpSampling2D()(x)
    x = Conv2D(128, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = UpSampling2D()(x)
    x = Conv2D(64, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(1, kernel_size=3, padding="same")(x)
    out = Activation("tanh")(x)
    model = Model(input, out)
    print("-- Generator --")
    model.summary()
    return model
```

We have not previously used the UpSampling2D layer. This layer increases the rows and columns of the input tensor, leaving the channels unchanged. It does this by repeating the values in the input tensor. By default, it will double the input. If we give an UpSampling2D layer a 7 x 7 x 128 input, it will give us a 14 x 14 x 128 output.

Typically when we build a CNN, we start with an image that is very tall and wide and use convolutional layers to get a tensor that's very deep but less tall and wide. Here we will do the opposite. We'll use a dense layer and a reshape to start with a 7 x 7 x 128 tensor and then, after doubling it twice, we'll be left with a 28 x 28 tensor. Since we need a grayscale image, we can use a convolutional layer with a single unit to get a 28 x 28 x 1 output. This sort of generator arithmetic can seem awkward at first, but after a few painful hours you will get the hang of it!

Building the discriminator

The discriminator is, for the most part, the same as any other CNN. Of course, there are a few new things that we should talk about.
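Before that, the repeat-the-values behaviour of UpSampling2D described above is easy to verify without Keras. This is a plain NumPy sketch on a tiny hypothetical 2 x 2 x 1 tensor rather than the 7 x 7 x 128 one in the generator; the idea is identical:

```python
import numpy as np

# UpSampling2D (default size 2) repeats each row and column, doubling
# height and width while leaving the channel axis unchanged.
x = np.arange(4, dtype=np.float32).reshape(2, 2, 1)   # 2 x 2 x 1
up = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)    # 4 x 4 x 1
print(up.shape)    # (4, 4, 1)
print(up[:, :, 0])
```

Each input value now occupies a 2 x 2 block of the output, which is exactly why the generator's 7 x 7 feature maps become 14 x 14 and then 28 x 28 after two such layers.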
We will use the following code to build the discriminator:

```python
def build_discriminator(img_shape):
    input = Input(img_shape)
    x = Conv2D(32, kernel_size=3, strides=2, padding="same")(input)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Conv2D(64, kernel_size=3, strides=2, padding="same")(x)
    x = ZeroPadding2D(padding=((0, 1), (0, 1)))(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(128, kernel_size=3, strides=2, padding="same")(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(256, kernel_size=3, strides=1, padding="same")(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Flatten()(x)
    out = Dense(1, activation='sigmoid')(x)
    model = Model(input, out)
    print("-- Discriminator --")
    model.summary()
    return model
```

First, you might notice the oddly shaped ZeroPadding2D() layer. After the second convolution, our tensor has gone from 28 x 28 x 1 to 7 x 7 x 64. This layer just gets us back to an even number, adding zeros on one side of both the rows and columns so that our tensor becomes 8 x 8 x 64.

More unusual is the use of both batch normalization and dropout. Typically, these two layers are not used together; however, in the case of GANs, they do seem to benefit the network.

Building the stacked model

Now that we've assembled both the generator and the discriminator, we need a third model: a stack of both models together, which we can use to train the generator on the discriminator's loss. To do that we can just create a new model, this time using the previous models as layers in the new model, as shown in the following code:

```python
discriminator = build_discriminator(img_shape=(28, 28, 1))
generator = build_generator()

z = Input(shape=(100,))
img = generator(z)
discriminator.trainable = False
real = discriminator(img)
combined = Model(z, real)
```

Notice that we're setting the discriminator's trainable attribute to False before building the combined model.
This means that for this model we will not be updating the weights of the discriminator during backpropagation. We will freeze these weights and only move the generator weights with the stack. The discriminator will be trained separately.

Now that all the models are built, they need to be compiled, as shown in the following code:

```python
gen_optimizer = Adam(lr=0.0002, beta_1=0.5)
disc_optimizer = Adam(lr=0.0002, beta_1=0.5)

discriminator.compile(loss='binary_crossentropy',
                      optimizer=disc_optimizer,
                      metrics=['accuracy'])
generator.compile(loss='binary_crossentropy', optimizer=gen_optimizer)
combined.compile(loss='binary_crossentropy', optimizer=gen_optimizer)
```

Notice that we're creating two separate Adam optimizers. This is because we will often want to change the learning rate for only the discriminator or only the generator, slowing one or the other down so that we end up with a stable GAN where neither is overpowering the other. You'll also notice that we're using beta_1 = 0.5. This is a recommendation from the original DCGAN paper that we've carried forward and also had success with. A learning rate of 0.0002 is a good place to start as well, and was also found in the original DCGAN paper.

The training loop

We have previously had the luxury of calling .fit() on our model and letting Keras handle the painful process of breaking the data apart into mini-batches and training for us. Unfortunately, because we need to perform separate updates for the discriminator and the stacked model within a single batch, we're going to have to do things the old-fashioned way, with a few loops. This is how things used to be done all the time, so while it's perhaps a little more work, it does admittedly leave me feeling nostalgic.
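Before the loop itself, the freeze-and-stack idea from the combined model can be sketched in miniature. This is a pure-NumPy toy with made-up scalar "weights", not the book's code: a frozen "discriminator" weight d sits on top of a trainable "generator" weight g, and we push the stack's output toward the real label 1 while updating only g:

```python
# Toy sketch (hypothetical scalar model): y = d * (g * z), loss = (y - 1)^2,
# mimicking how the combined model pushes D(G(z)) toward 1.
g, d = 0.1, 2.0   # g is trainable (generator); d is frozen (discriminator)
z = 1.0
lr = 0.05
for _ in range(200):
    y = d * (g * z)
    grad_g = 2 * (y - 1) * d * z   # gradient flows THROUGH the frozen d...
    g -= lr * grad_g               # ...but only g is ever updated
print(d)               # 2.0 -- the frozen weight never moved
print(round(d * g * z, 3))  # ~1.0 -- the stack output reached the real label
```

This is exactly what trainable = False buys us in the combined model: the discriminator still shapes the gradient, but only the generator's weights change.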
The following code illustrates the training technique:

```python
num_examples = X_train.shape[0]
num_batches = int(num_examples / float(batch_size))
half_batch = int(batch_size / 2)

for epoch in range(epochs + 1):
    for batch in range(num_batches):
        # noise images for the batch
        noise = np.random.normal(0, 1, (half_batch, 100))
        fake_images = generator.predict(noise)
        fake_labels = np.zeros((half_batch, 1))
        # real images for the batch
        idx = np.random.randint(0, X_train.shape[0], half_batch)
        real_images = X_train[idx]
        real_labels = np.ones((half_batch, 1))
        # Train the discriminator (real classified as ones and generated as zeros)
        d_loss_real = discriminator.train_on_batch(real_images, real_labels)
        d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

        noise = np.random.normal(0, 1, (batch_size, 100))
        # Train the generator
        g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))
        # Plot the progress
        print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
              (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))
        if batch % 50 == 0:
            save_imgs(generator, epoch, batch)
```

There is a lot going on here, to be sure. As before, let's break it down block by block. First, let's see the code that generates the noise vectors:

```python
noise = np.random.normal(0, 1, (half_batch, 100))
fake_images = generator.predict(noise)
fake_labels = np.zeros((half_batch, 1))
```

This code generates a matrix of noise vectors (the z from earlier) and sends it to the generator. It gets a set of generated images back, which we're calling fake images. We will use these to train the discriminator, so the labels we want to use are 0s, indicating that these are in fact generated images. Note that the shape here is half_batch x 28 x 28 x 1. half_batch is exactly what you think it is: we're creating half a batch of generated images because the other half of the batch will be real data, which we will assemble next.
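The shapes in this half-batch bookkeeping can be checked in isolation. The snippet below is a standalone sanity check, not the training loop itself; batch_size = 64 is just an assumed example value:

```python
import numpy as np

# A batch of 64 splits into 32 generated images labelled 0 and
# 32 real images labelled 1; each z noise vector has length 100.
batch_size = 64
half_batch = int(batch_size / 2)
noise = np.random.normal(0, 1, (half_batch, 100))
fake_labels = np.zeros((half_batch, 1))
real_labels = np.ones((half_batch, 1))
print(half_batch, noise.shape)  # 32 (32, 100)
```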
To get our real images, we will generate a random set of indices across X_train and use that slice of X_train as our real images, as shown in the following code:

```python
idx = np.random.randint(0, X_train.shape[0], half_batch)
real_images = X_train[idx]
real_labels = np.ones((half_batch, 1))
```

Yes, we are sampling with replacement in this case. It does work out, but it's probably not the best way to implement mini-batch training. It is, however, probably the easiest and most common. Since we are using these images to train the discriminator, and because they are real images, we will assign them 1s as labels, rather than 0s.

Now that we have our discriminator training set assembled, we will update the discriminator. Also, note that we aren't using soft labels. That's because we want to keep things as easy to understand as possible; luckily the network doesn't require them in this case. We will use the following code to train the discriminator:

```python
# Train the discriminator (real classified as ones and generated as zeros)
d_loss_real = discriminator.train_on_batch(real_images, real_labels)
d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
```

Notice that here we're using the discriminator's train_on_batch() method. The train_on_batch() method does exactly one round of forward and backward propagation. Every time we call it, it updates the model once from the model's previous state.

Also, notice that we're making the updates for the real images and fake images separately. This is advice given in the GAN hacks Git repository we previously referenced in the Generator architecture section. Especially in the early stages of training, when real images and fake images come from radically different distributions, batch normalization will cause problems with training if we put both sets of data in the same update.

Now that the discriminator has been updated, it's time to update the generator.
This is done indirectly by updating the combined stack, as shown in the following code:

```python
noise = np.random.normal(0, 1, (batch_size, 100))
g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))
```

To update the combined model, we create a new noise matrix, and this time it will be as large as the entire batch. We will use that as an input to the stack, which will cause the generator to generate an image and the discriminator to evaluate that image. Finally, we will use the label of 1 because we want to backpropagate the error between a real image and the generated image.

Lastly, the training loop reports the discriminator and generator loss for the epoch/batch, and then, every 50 batches of every epoch, we will use save_imgs to generate example images and save them to disk, as shown in the following code:

```python
print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
      (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))
if batch % 50 == 0:
    save_imgs(generator, epoch, batch)
```

The save_imgs function uses the generator to create images as we go, so we can see the fruits of our labor. We will use the following code to define save_imgs:

```python
def save_imgs(generator, epoch, batch):
    r, c = 5, 5
    noise = np.random.normal(0, 1, (r * c, 100))
    gen_imgs = generator.predict(noise)
    gen_imgs = 0.5 * gen_imgs + 0.5
    fig, axs = plt.subplots(r, c)
    cnt = 0
    for i in range(r):
        for j in range(c):
            axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
            axs[i, j].axis('off')
            cnt += 1
    fig.savefig("images/mnist_%d_%d.png" % (epoch, batch))
    plt.close()
```

It uses only the generator, creating a noise matrix and retrieving an image matrix in return. Then, using matplotlib.pyplot, it saves those images to disk in a 5 x 5 grid.

Performing model evaluation

"Good" is somewhat subjective when you're building a deep neural network to create images. Let's take a look at a few examples from the training process, so you can see for yourself how the GAN begins to learn to generate MNIST.
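One detail in save_imgs worth calling out: the generator's tanh output lives in [-1, 1], so 0.5 * x + 0.5 is exactly the inverse of the [-1, 1] normalization applied in load_data, bringing images back into the [0, 1] range that imshow expects for floats. A quick check:

```python
import numpy as np

# 0.5 * x + 0.5 maps tanh's [-1, 1] range back to displayable [0, 1].
fake = np.array([-1.0, 0.0, 1.0])
rescaled = 0.5 * fake + 0.5
print(rescaled)  # [0.  0.5 1. ]
```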
Here’s the network at the very first batch of the very first epoch. Clearly, the generator doesn’t really know anything about generating MNIST at this point; it’s just noise, as shown in the following image: But just 50 batches in, something is happening, as you can see from the following image: And after 200 batches of epoch 0 we can almost see numbers, as you can see from the following image: And here’s our generator after one full epoch. These generated numbers look pretty good, and we can see how the discriminator might be fooled by them. At this point, we could probably continue to improve a little bit, but it looks like our GAN has worked as the computer is generating some pretty convincing MNIST digits, as shown in the following image: Thus, we see the power of GANs in action when it comes to image generation using the Keras library. If you found the above article to be useful, make sure you check out our book Deep Learning Quick Reference, for more such interesting coverage of popular deep learning concepts and their practical implementation. Read Next Keras 2.2.0 releases! 2 ways to customize your deep learning models with Keras How to build Deep convolutional GAN using TensorFlow and Keraslast_img read more